Here you can see all recent updates to the IACR webpage.

10 June 2019

Event Calendar
CCSW 2019: The 2019 ACM Cloud Computing Security Workshop
London, UK, 11 November 2019

Event date: 11 November 2019

Submission deadline: 1 August 2019

Notification: 15 August 2019

6 June 2019

ePrint Report
Balance: Dynamic Adjustment of Cryptocurrency Deposits
Dominik Harz, Lewis Gudgeon, Arthur Gervais, William J. Knottenbelt

In cryptoeconomic protocols, financial deposits are fundamental to their security.
Protocol designers and their agents face a trade-off when choosing the deposit size.
While substantial deposits might increase the protocol security, for example by minimising the impact of adversarial behaviour or risks of currency fluctuations, locked-up capital incurs opportunity costs for agents.
Moreover, some protocols require over-collateralization in anticipation of future events and malicious intentions of agents.
We present Balance, an application-agnostic system that reduces over-collateralization without compromising protocol security.
In Balance, malicious agents receive no additional utility for cheating once their deposits are reduced.
At the same time, honest and rational agents increase their utilities for behaving honestly as their opportunity costs for the locked-up deposits are reduced.
Balance is a round-based mechanism in which agents need to continuously perform desired actions.
Rather than treating agents' incentives and behaviour as ancillary, we explicitly model agents' utility, proving the conditions for incentive compatibility.
Balance improves social welfare given a distribution of honest, rational, and malicious agents.
Further, we integrate Balance with a cross-chain interoperability protocol, XCLAIM, reducing deposits by 10% while maintaining the same utility for behaving honestly.
Our implementation supports any number of agents, requiring at most 55,287 gas (ca. USD 0.07) to update the agents' scores and 54,948 gas (ca. USD 0.07) to update the assignment of agents to layers.

ePrint Report
Polar Sampler: Discrete Gaussian Sampling over the Integers Using Polar Codes
Jiabo Wang, Cong Ling

Cryptographic constructions based on hard lattice problems have emerged as a front runner for the standardization of post-quantum public-key cryptography. As the standardization process takes place, optimizing specific parts of proposed schemes becomes a worthwhile endeavor. Gaussian sampling over the integers is one of the fundamental building blocks of lattice-based cryptography. In this work, we propose a new integer Gaussian sampler based on polar codes, dubbed the "polar sampler". The polar sampler is asymptotically information-theoretically optimal in the sense that the number of uniformly random bits it uses approaches the entropy bound. It also features quasi-linear complexity and a constant-time implementation. Our algorithm becomes effective when sufficiently many samples are required at each query to the sampler. Security analysis is given based on the statistical distance, Kullback-Leibler divergence and Rényi divergence. A comparison between the polar sampler and the Knuth-Yao sampler verifies its time efficiency, and the memory cost can be further optimized if space-efficient successive-cancellation decoding is adopted.
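For contrast with the polar sampler, the baseline it competes against can be sketched simply. Below is a minimal CDF-table (inversion) sampler for a discrete Gaussian over the integers — this is not the polar sampler, it is not constant-time, and the parameters (sigma, tail cut) are illustrative only.

```python
import math
import random

def build_cdt(sigma: float, tail: int):
    """Cumulative distribution table for a discrete Gaussian on
    {-tail, ..., tail}, with P(x) proportional to exp(-x^2 / (2 sigma^2))."""
    support = list(range(-tail, tail + 1))
    weights = [math.exp(-x * x / (2 * sigma * sigma)) for x in support]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return support, cdf

def sample(support, cdf, rng=random):
    """Inversion sampling: linear scan of the table (for clarity only --
    the secret-dependent scan is exactly what constant-time samplers avoid)."""
    u = rng.random()
    for x, c in zip(support, cdf):
        if u <= c:
            return x
    return support[-1]
```

The table-based scan is simple but leaks timing and stores the full CDF; entropy-optimal, constant-time samplers such as the polar sampler are designed to avoid both costs.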

ePrint Report
A New Approach to Constructing Digital Signature Schemes (Extended Paper)
Ahto Buldas, Denis Firsov, Risto Laanoja, Henri Lakk, Ahto Truu

A new hash-based, server-supported digital signature scheme was proposed recently. We decompose the concept into forward-resistant tags and a generic cryptographic time-stamping service. Based on the decomposition, we propose more tag constructions which allow efficient digital signature schemes with interesting properties to be built. In particular, the new schemes are more suitable for use in personal signing devices, such as smart cards, which are used infrequently. We define the forward-resistant tags formally and prove that (1) the discussed constructs are indeed tags and (2) combining such tags with time-stamping services gives us signature schemes.

ePrint Report
A Blockchain-Assisted Hash-Based Signature Scheme
Ahto Buldas, Risto Laanoja, Ahto Truu

We present a server-supported, hash-based digital signature scheme. To achieve greater efficiency than current state of the art, we relax the security model somewhat. We postulate a set of design requirements, discuss some approaches and their practicality, and finally reach a forward-secure scheme with only modest trust assumptions, achieved by employing the concepts of authenticated data structures and blockchains. The concepts of blockchain authenticated data structures and the presented blockchain design could have independent value and are worth further research.

We present a practical digital signature scheme built from a cryptographic hash function and a hash-then-publish digital time-stamping scheme. We also provide a simple proof of existential unforgeability against adaptive chosen-message attack (EUF-ACM) in the random oracle (RO) model.

ePrint Report
On designing secure small-state stream ciphers against time-memory-data tradeoff attacks
Vahid Amin Ghafari, Honggang Hu, Fujiang Lin

A new generation of stream ciphers, small-state stream ciphers (SSCs), was born in 2015 with the introduction of the Sprout cipher. The new generation is based on using key bits not only in the initialization but also continuously in the keystream generation phase. This idea allows designing stream ciphers with significantly smaller area and lower power consumption. A distinguishing time-memory-data tradeoff (TMDTO) attack was successfully applied against all SSCs in 2017 by Hamann et al. [1]. They suggested using not only key bits but also initial value (IV) bits continuously in the keystream generation phase to strengthen SSCs against TMDTO attacks.
Then, Hamann and Krause [2] proposed a construction based on using only IV bits continuously in packet mode. They suggested an instantiation of an SSC and claimed that it is resistant to TMDTO attacks. We point out that storing IV bits imposes an overhead on cryptosystems that is not acceptable in many applications. More importantly, we show that the proposed SSC remains vulnerable to TMDTO attacks.
To resolve this security threat, the current paper proposes constructions, based on storing key or IV bits, that are the first to provide full security against TMDTO attacks. It is possible to obtain parameters for secure SSCs based on these suggested constructions. Our constructions open a fruitful research direction in stream ciphers.

ePrint Report
Related-Key Boomerang Attacks on GIFT with Automated Trail Search Including BCT Effect
Yunwen Liu, Yu Sasaki

At Eurocrypt 2018, Cid et al. proposed a novel notion called the boomerang connectivity table, which formalised the switch property in the middle round of boomerang distinguishers in a unified approach. In this paper, we present a generic model of the boomerang connectivity table with an automated search technique for the first time, and search for (related-key) boomerang distinguishers directly by combining it with the search for (related-key) differential characteristics. With this technique, we are able to find 19-round related-key boomerang distinguishers for the lightweight block ciphers \textsc{Gift}-64 and \textsc{Gift}-128. Interestingly, a transition that is not predictable by the conventional switches is realised in a boomerang distinguisher predicted by the boomerang connectivity table. In addition, we experimentally extend the 19-round distinguisher by one more round. A 23-round key-recovery attack is presented on \textsc{Gift}-64 based on the distinguisher, which covers more rounds than previously known results in the single-key setting.
Although the designers of \textsc{Gift} do not claim related-key security, bit positions of the key addition and 16-bit rotations were chosen to optimize the related-key differential bound. Indeed, the designers evaluated related-key differential attacks. This is the first work to present better related-key attacks than the simple related-key differential attack.

ePrint Report
New Semi-Free-Start Collision Attack Framework for Reduced RIPEMD-160
Fukang Liu, Christoph Dobraunig, Florian Mendel, Takanori Isobe, Gaoli Wang, Zhenfu Cao

RIPEMD-160 is a hash function published in 1996, which shares similarities with other hash functions designed in that time period, like MD4, MD5 and SHA-1. However, for RIPEMD-160, no (semi-free-start) collision attacks on the full number of steps are known. Hence, it is still used, e.g., to generate Bitcoin addresses together with SHA-256, and is an ISO/IEC standard. Due to its dual-stream structure, even semi-free-start collision attacks starting from the first step only reach 36 steps; these were first shown by Mendel et al. at Asiacrypt 2013 and later improved by Liu, Mendel and Wang at Asiacrypt 2017. Both of the attacks are based on a freedom degree utilization technique similar to the one proposed by Landelle and Peyrin at Eurocrypt 2013. However, the best known semi-free-start collision attack on 36 steps of RIPEMD-160, presented at Asiacrypt 2017, still requires $2^{55.1}$ time and $2^{32}$ memory. Consequently, a practical semi-free-start collision attack on the first 36 steps of RIPEMD-160 still requires a significant amount of resources. Considering the structure of these previous semi-free-start collision attacks for 36 steps of RIPEMD-160, it seems hard to extend them to more steps. Thus, we develop a different semi-free-start collision attack framework for reduced RIPEMD-160 by carefully investigating the message expansion of RIPEMD-160. Our new framework has several advantages. First of all, it allows extending the attacks to more steps. Second, the memory complexity of the attacks is negligible. Hence, we were able to give a practical semi-free-start collision attack on 36 steps of RIPEMD-160 with time complexity $2^{41}$. Additionally, we describe semi-free-start collision attacks on 37, 38 and 40 (out of 80) steps of RIPEMD-160 with time complexity $2^{49}$, $2^{53}$ and $2^{74.6}$, respectively.
To the best of our knowledge, these are the best semi-free-start collision attacks for RIPEMD-160 starting from the first step with respect to the number of steps, including the first practical colliding message pairs for 36 steps of RIPEMD-160.

ePrint Report
PPAD-Hardness via Iterated Squaring Modulo a Composite
Arka Rai Choudhuri, Pavel Hubacek, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, Guy N. Rothblum

We show that, relative to a random oracle, solving the END-OF-LINE problem (which is PPAD-complete) is no easier than computing the function
\[f(N,x,T) = x^{2^T} \text{mod } N,\]
where $N$ is an $n$-bit RSA modulus, $x\in \mathbb{Z}_N^*$ and $T\in\mathbb{N}$. It was conjectured by Rivest, Shamir and Wagner that, unless the factorization of $N$ is known, the fastest algorithm for computing $f$ consists of $\Omega(T)$ iterated squaring operations mod $N$. Under a milder assumption, namely that computing $f$ takes $n^{\omega(1)}$ time for some (possibly exponentially) large $T$, the END-OF-LINE instances arising from our construction cannot be solved in $\text{poly}(n)$ time.
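The function $f$ above can be sketched directly: a minimal toy implementation (tiny modulus for illustration only; a real instance uses an $n$-bit RSA modulus) showing that evaluation is $T$ sequential modular squarings.

```python
def iterated_squaring(x: int, T: int, N: int) -> int:
    """Compute f(N, x, T) = x^(2^T) mod N by T sequential squarings.

    The conjecture of Rivest, Shamir and Wagner is that, without the
    factorization of N, no algorithm does substantially better than
    performing these T squarings one after another.
    """
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

# Toy check against direct exponentiation (feasible only for small T):
N, x, T = 21, 2, 5
assert iterated_squaring(x, T, N) == pow(x, 2 ** T, N)
```

With the factorization of $N$ known, the exponent $2^T$ can first be reduced modulo the group order, making evaluation fast; this gap between the trapdoor holder and everyone else is what the conjecture captures.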

We prove our result by reducing $f$ to (a variant of) the SINK-OF-VERIFIABLE-LINE problem, which is known to imply PPAD (and in fact CLS) hardness. The main building block of our reduction is a recently discovered interactive public-coin proof by Pietrzak for certifying $y=f(N,x,T)$, which can be made non-interactive using (an analogue of) the Fiat-Shamir heuristic. The value $y$ can be computed together with the proof in time $\text{poly}(n)\cdot T$, and the proof can be verified in time $\text{poly}(n) \cdot \log T$. The key technical challenge in our setting is to provide a means by which the solution $y$ together with a proof can be computed in small incremental steps, while the correctness of each intermediate state of this computation can still be verified in time $\text{poly}(n, \log T)$.

ePrint Report
On the Geometric Ergodicity of Metropolis-Hastings Algorithms for Lattice Gaussian Sampling
Zheng Wang, Cong Ling

Sampling from the lattice Gaussian distribution has emerged as an important problem in coding, decoding and cryptography. In this paper, the classic Metropolis-Hastings (MH) algorithm in Markov chain Monte Carlo (MCMC) methods is adopted for lattice Gaussian sampling. Two MH-based algorithms are proposed, which overcome the limitation of Klein's algorithm. The first one, referred to as the independent Metropolis-Hastings-Klein (MHK) algorithm, establishes a Markov chain via an independent proposal distribution. We show that the Markov chain arising from this independent MHK algorithm is uniformly ergodic, namely, it converges to the stationary distribution exponentially fast regardless of the initial state. Moreover, the rate of convergence is analyzed in terms of the theta series, leading to predictable mixing time. A symmetric Metropolis-Klein (SMK) algorithm is also proposed, which is proven to be geometrically ergodic.
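As a toy illustration of the mechanism — not the MHK algorithm itself, which uses Klein's algorithm as its proposal distribution over a lattice — here is an independent Metropolis-Hastings chain targeting a one-dimensional truncated discrete Gaussian with a uniform proposal; all names and parameters are illustrative.

```python
import math
import random

def independent_mh(target_logp, propose, steps, x0, rng=random):
    """Independent Metropolis-Hastings: proposals ignore the current state.

    For a symmetric/uniform independent proposal q, the acceptance ratio
    min(1, pi(y) q(x) / (pi(x) q(y))) simplifies to min(1, pi(y)/pi(x)).
    """
    x = x0
    chain = [x]
    for _ in range(steps):
        y = propose(rng)
        log_alpha = target_logp(y) - target_logp(x)  # uniform q cancels
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = y  # accept; otherwise keep the current state
        chain.append(x)
    return chain

# Toy target: discrete Gaussian on {-10, ..., 10}, sigma = 2 (illustrative).
sigma = 2.0
logp = lambda z: -z * z / (2 * sigma * sigma)
propose = lambda rng: rng.randint(-10, 10)
chain = independent_mh(logp, propose, 2000, x0=0, rng=random.Random(1))
```

The paper's uniform ergodicity result says that, for such an independent proposal, the chain forgets its initial state at an exponential rate, which is what makes the mixing time predictable.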

ePrint Report
Key Exchange and Authenticated Key Exchange with Reusable Keys Based on RLWE Assumption
Jintai Ding, Pedro Branco, Kevin Schmitt

Key Exchange (KE) is, undoubtedly, one of the most used cryptographic primitives in practice. Its authenticated version, Authenticated Key Exchange (AKE), avoids man-in-the-middle-based attacks by providing authentication for both parties involved. It is widely used on the Internet, in protocols such as TLS or SSH. In this work, we provide new constructions for KE and AKE based on ideal lattices in the Random Oracle Model (ROM). The contributions of this work can be summarized as follows:

1) It is well-known that RLWE-based KE protocols are not robust against key reuse since the signal function leaks information about the secret key. We modify the design of previous RLWE-based KE schemes to allow key reuse in the ROM. Our construction makes use of a new technique called pasteurization, which forces the supposedly RLWE sample sent by the other party to be indistinguishable from a uniform sample and, therefore, ensures no information leakage in the whole KE process.

2) We build a new AKE scheme based on the construction above. The scheme provides implicit authentication (that is, it does not require the use of any other authentication mechanism, like a signature scheme) and it is proven secure in the Bellare-Rogaway model with weak Perfect Forward Secrecy in the ROM. It improves previous designs for AKE schemes based on lattices in several aspects. Our construction requires sampling from only one discrete Gaussian distribution and avoids rejection sampling and noise flooding techniques, unlike previous proposals (Zhang et al., EUROCRYPT 2015). Thus, the scheme is much more efficient than previous constructions in terms of computational and communication complexity.

Since our constructions are provably secure assuming the hardness of the RLWE problem, they are considered to be robust against quantum adversaries and, thus, suitable for post-quantum applications.

5 June 2019

ePrint Report
How Diversity Affects Deep-Learning Side-Channel Attacks
Huanyu Wang, Martin Brisfors, Sebastian Forsmark, Elena Dubrova

Deep learning side-channel attacks are an emerging threat to the security of implementations of cryptographic algorithms. The attacker first trains a model on a large set of side-channel traces captured from a chip with a known key. The trained model is then used to recover the unknown key from a few traces captured from a victim chip. The first successful attacks have been demonstrated recently. However, they typically train and test on power traces captured from the same device. In this paper, we show that it is important to train and test on traces captured from different boards and using diverse implementations of the cryptographic algorithm under attack. Otherwise, it is easy to overestimate the classification accuracy. For example, if we train and test an MLP model on power traces captured from the same board, we can recover all key byte values with 96% accuracy from a single trace. However, the single-trace attack accuracy drops to 2.45% if we test on traces captured from a board different from the one we used for training, even if both boards carry identical chips.

ePrint Report
A Note on the (Im)possibility of Verifiable Delay Functions in the Random Oracle Model
Mohammad Mahmoody, Caleb Smith, David J. Wu

Boneh, Bonneau, B{\"u}nz, and Fisch (CRYPTO 2018) recently introduced the notion of a \emph{verifiable delay function} (VDF). VDFs are functions that take a long \emph{sequential} time $T$ to compute, but whose outputs $y \gets Eval(x)$ can be quickly verified (possibly given a proof $\pi$ that is computed alongside $Eval(x)$) in time $t \ll T$ (e.g., $t=poly(\lambda, \log T)$ where $\lambda$ is the security parameter). The first security requirement on a VDF asks that no polynomial-time algorithm can find a convincing proof $\pi'$ that verifies for an input $x$ and a different output $y' \neq y$. The second security requirement is that no polynomial-time algorithm running in \emph{sequential} time $T'<T$ (e.g., $T'=T^{1/10}$) can compute $y$. Starting from the work of Boneh et al., there are now multiple constructions of VDFs from various algebraic assumptions.

In this work, we study whether VDFs can be constructed from ideal hash functions as modeled in the random oracle model (ROM). In the ROM, we measure the running time by the number of oracle queries and the sequentiality by the number of \emph{rounds} of oracle queries an algorithm makes. We show that \emph{statistically-unique} VDFs (i.e., where no algorithm can find a convincing different solution $y' \neq y$) cannot be constructed in the ROM. More formally, we give an attacker that finds the solution $y$ in $\approx t$ \emph{rounds} of queries, asking only $poly(T)$ queries in total.
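The canonical sequential computation in the ROM is a plain hash chain; the standard sketch below (not from the paper) evaluates $y = H^T(x)$ in $T$ rounds. It also illustrates exactly what the impossibility result says cannot be added on top of a random oracle: fast verification of the statistically unique output.

```python
import hashlib

def hash_chain(x: bytes, T: int) -> bytes:
    """Evaluate y = H^T(x): T sequential SHA-256 calls, each depending on
    the previous output, so the T rounds cannot be parallelized.
    Verifying y without redoing the work is the missing ingredient: a
    hash chain alone is not a VDF."""
    y = x
    for _ in range(T):
        y = hashlib.sha256(y).digest()
    return y
```

Here SHA-256 stands in for the random oracle; in the ROM analysis, "time" is the number of hash calls and "sequential time" is the chain length $T$.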

4 June 2019

Event Calendar
MACIS 2019: 8th Int. Conf. on Mathematical Aspects of Computer and Information Sciences
Gebze/Istanbul, Turkey, 13 November - 15 November 2019

Event date: 13 November to 15 November 2019

Submission deadline: 20 August 2019

Notification: 10 October 2019

ePrint Report
Agree-and-Prove: Generalized Proofs Of Knowledge and Applications
Christian Badertscher, Daniel Jost, Ueli Maurer

Proofs of knowledge (PoK) are one of the most fundamental notions in cryptography and have been used as a building block in numerous applications. The appeal of this notion is that it is parameterized by generic relations which an application can suitably instantiate. On the other hand, in many applications, a more generalized proof system would be desirable that captures aspects not considered by the low-level abstraction boundary of PoKs. First, the context in which the protocol is executed is encoded using a static auxiliary input, which is insufficient to represent a world with more dynamic setup, or even the case where the relation to be proven depends on a setup. Second, proofs of knowledge by definition do not take into account the statement derivation process. Yet, it often impacts either the complexity of the associated interactive proof or the effective zero-knowledge guarantees that can still be provided by the proof system. Some of this critique has been observed and partially addressed by Bernhard et al. (PKC'15), who consider PoK in the presence of a random oracle, and Choudhuri et al. (Eurocrypt'19), who need PoK schemes in the presence of a ledger functionality.

However, the theoretical foundation of a generalized notion of PoK with setup-dependent relations is still missing. As a first contribution, we introduce this new notion and call it agree-and-prove. Agree-and-prove rigorously extends the basic PoK framework to include the missing aspects. The new notion provides clear semantics of correctness, soundness, and zero-knowledge in the presence of generic setup and under dynamic statement derivation.

As a second contribution, we show that the agree-and-prove notion is the natural abstraction for applications that are in fact generalized PoKs, but for which the existing isolated notions do not reveal this intrinsic connection. First, we consider proofs-of-ownership of files for client-side file deduplication. We cast the problem and some of its prominent schemes in our agree-and-prove framework and formally analyze their security. Then, leveraging our generalized zero-knowledge formalization, we devise a novel scheme that is provably the privacy-preserving analogue of the known Merkle-tree based proof-of-ownership protocol. As a second application, we consider entity authentication and two-factor authentication. We thereby demonstrate that the agree-and-prove notion can not only phrase generalized PoKs, but also, along the same lines, proofs of possession or ability, such as proving the correct usage of a hardware token.

ePrint Report
Mind the Portability: A Warriors Guide through Realistic Profiled Side-channel Analysis
Shivam Bhasin, Anupam Chattopadhyay, Annelie Heuser, Dirmanto Jap, Stjepan Picek, Ritu Ranjan Shrivastwa

Profiled side-channel attacks represent a practical threat to digital devices, thereby having the potential to disrupt the foundation of e-commerce, the Internet-of-Things (IoT), and smart cities. In a profiled side-channel attack, the adversary gains knowledge about the target device by getting access to a cloned device. Although these two devices are different in real-world scenarios, a large part of research works unfortunately simplifies the setting by using only a single device for both profiling and attacking. There, the portability issue is conveniently ignored in order to ease the experimental procedure. In parallel to the above developments, machine learning techniques used in recent literature demonstrate excellent performance in profiled side-channel attacks. Again, unfortunately, portability is neglected.
In this paper, we consider realistic side-channel scenarios and commonly used machine learning techniques to evaluate the influence of portability on the efficacy of an attack. Our experimental results show that portability plays an important role and should not be disregarded, as ignoring it contributes to a significant overestimate of the attack efficiency, which can easily be an order of magnitude. After establishing the importance of portability, we propose a new model called the Multiple Device Model (MDM) that formally incorporates device-to-device variation during a profiled side-channel attack. We show through experimental studies how machine learning and the MDM significantly enhance the capacity for practical side-channel attacks. More precisely, we demonstrate how the MDM is able to improve the results by $>10\times$, completely negating the influence of portability.

ePrint Report
Lattice Gaussian Sampling by Markov Chain Monte Carlo: Bounded Distance Decoding and Trapdoor Sampling
Zheng Wang, Cong Ling

Sampling from the lattice Gaussian distribution plays an important role in various research fields. In this paper, the Markov chain Monte Carlo (MCMC)-based sampling technique is advanced on several fronts. Firstly, the spectral gap for the independent Metropolis-Hastings-Klein (MHK) algorithm is derived, which is then extended to Peikert's algorithm and rejection sampling; we show that independent MHK exhibits faster convergence. Then, the performance of bounded distance decoding using MCMC is analyzed, revealing a flexible trade-off between the decoding radius and complexity. MCMC is further applied to trapdoor sampling, again offering a trade-off between security and complexity. Finally, the independent multiple-try Metropolis-Klein (MTMK) algorithm is proposed to enhance the convergence rate. The proposed algorithms allow parallel implementation, which is beneficial for practical applications.

ePrint Report
Tight Verifiable Delay Functions
Nico Döttling, Sanjam Garg, Giulio Malavolta, Prashant Nalini Vasudevan

A Verifiable Delay Function (VDF) is a function that takes at least $T$ sequential steps to evaluate and produces a unique output that can be verified efficiently, in time essentially independent of $T$. In this work we study tight VDFs, where the function can be evaluated in time not much more than the sequentiality bound $T$.

On the negative side, we show the impossibility of a black-box construction from random oracles of a VDF that can be evaluated in time $T + O(T^\delta)$ for any constant $\delta < 1$. On the positive side, we show that any VDF with an inefficient prover (running in time $cT$ for some constant $c$) that has a natural self-composability property can be generically transformed into a VDF with a tight prover efficiency of $T+O(1)$. Our compiler introduces only a logarithmic factor overhead in the proof size and in the number of parallel threads needed by the prover. As a corollary, we obtain a simple construction of a tight VDF from any succinct non-interactive argument combined with repeated hashing. This is in contrast with prior generic constructions (Boneh et al, CRYPTO 2018) that required the existence of incremental verifiable computation, which entails stronger assumptions and complex machinery.

ePrint Report
Two-Thirds Honest-Majority MPC for Malicious Adversaries at Almost the Cost of Semi-Honest
Jun Furukawa, Yehuda Lindell

Secure multiparty computation (MPC) enables a set of parties to securely carry out a joint computation of their private inputs without revealing anything but the output. Protocols for semi-honest adversaries guarantee security as long as the corrupted parties run the specified protocol and ensure that nothing is leaked in the transcript. In contrast, protocols for malicious adversaries guarantee security in the presence of arbitrary adversaries who can run any attack strategy. Security for malicious adversaries is typically what is needed in practice (and is always preferred), but comes at a significant cost.

In this paper, we present the first protocol for a two-thirds honest majority that achieves security in the presence of malicious adversaries at essentially the exact same cost as the best known protocols for semi-honest adversaries. Our construction is not a general transformation, and thus it is possible that better semi-honest protocols will be constructed which do not support our transformation. Nevertheless, for the current state of the art for many parties (based on Shamir sharing), our protocol invokes the best semi-honest multiplication protocol exactly once per multiplication gate (plus some additional local computation that is negligible to the overall cost). Concretely, the best version of our protocol requires each party to send on average just $2\frac23$ elements per multiplication gate (when the number of multiplication gates is at least the number of parties). This is four times faster than the previous-best protocol of Barak et al. (ACM CCS 2018) for small fields, and twice as fast as the previous-best protocol of Chida et al. (CRYPTO 2018) for large fields.

Private Simultaneous Messages (PSM) is a minimal model for information-theoretic non-interactive multi-party computation. In the 2-party case, Beimel et al. showed that every function $f:[N]\times[N]\to\{0,1\}$ admits a 2-party PSM with communication complexity $O(\sqrt N)$. Recently, Applebaum et al. studied the multi-party case and showed that every function $f:[N]^3\to\{0,1\}$ admits a 3-party PSM with communication complexity $O(N)$.

We provide new upper bounds for the general $k$-party case. Our upper bounds match previous results when $k=2$ or $3$, and improve the communication complexity for infinitely many $k>3$. Our technique also implies 2-party PSM with unbalanced communication complexity. Concretely,

- For infinitely many $k$ --- in particular, including all $k \leq 19$ --- we construct $k$-party PSM protocols for an arbitrary function $f:[N]^k\to\{0,1\}$, whose communication complexity is $O_k(N^{\frac{k-1}{2}})$. We also provide evidence suggesting that such protocols exist for all $k$.

- For many $0<\eta<1$ --- including all rational $\eta = d/k$ such that $k\leq 12$ --- we construct 2-party PSM protocols for an arbitrary function $f:[N]\times[N]\to\{0,1\}$, whose respective communication complexities are $O_\eta(N^\eta)$ and $O_\eta(N^{1-\eta})$. We also provide evidence suggesting that such protocols exist for all rational $\eta$.
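To make the PSM model itself concrete, here is a minimal sketch for the toy function XOR: the two parties share common randomness, each sends a single message, and the referee outputs $f$ while learning nothing more. The sublinear-communication constructions for arbitrary $f$ in the works above are far more involved; function names are illustrative.

```python
import secrets

def psm_xor_messages(x1: int, x2: int, shared_r: int):
    """Each party masks its input bit with the common random bit and
    sends one message to the referee; the parties never interact."""
    return x1 ^ shared_r, x2 ^ shared_r

def referee(m1: int, m2: int) -> int:
    # m1 ^ m2 == x1 ^ x2; since m1 is uniform and m2 = m1 ^ output,
    # the pair (m1, m2) can be simulated from the output alone.
    return m1 ^ m2

r = secrets.randbits(1)
m1, m2 = psm_xor_messages(1, 0, r)
assert referee(m1, m2) == 1
```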

SAT-attack is known to successfully decrypt a functionally correct key of a locked combinational circuit. It is possible to extend the SAT-attack to sequential circuits through the scan-chain by selectively initializing the combinational logic and analyzing the responses. Recently, sequential locking was proposed as a defense to SAT-attack, which works by locking the scan-chains of flip-flops. ScanSAT [1], however, showed that it is possible to convert the sequentially locked instance to a locked combinational instance, and thereby decrypt the entire sequential key using SAT-attack. In this paper, we propose SeqL, a secure sequential lock defense against ScanSAT. SeqL provides functional isolation, and also encrypts selective flip-flop inputs, thereby mitigating ScanSAT and other related SAT-attacks. We conduct a formal study of the sequential locking problem and show how to automate our proposed defense for any given sequential circuit. We show that SeqL hides functionally correct keys from the attacker, thereby increasing the likelihood of functional output corruption. When tested on sequential benchmarks (ITC’99) and pipelined combinational benchmarks (ISCAS’85, MCNC), SeqL gave 100% resilience to ScanSAT.
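For readers new to logic locking, the toy sketch below shows the attack setting the abstract assumes: a circuit locked with an XOR key gate, and oracle access to an activated chip that lets an attacker prune wrong keys. Brute force stands in for the SAT solver's distinguishing-input loop; this is not the SeqL construction, and all names are illustrative.

```python
from itertools import product

def locked(a: int, b: int, k: int) -> int:
    """A 2-input circuit locked with one XOR key gate:
    the correct key k=0 restores y = a AND b."""
    return (a & b) ^ k

SECRET_KEY = 0

def oracle(a: int, b: int) -> int:
    """An activated chip: responds with the correctly keyed output."""
    return locked(a, b, SECRET_KEY)

# Key pruning: discard any key that disagrees with the oracle on some
# input. SAT-attack finds such distinguishing inputs efficiently; here
# we simply try them all.
candidates = [k for k in (0, 1)
              if all(locked(a, b, k) == oracle(a, b)
                     for a, b in product((0, 1), repeat=2))]
assert candidates == [SECRET_KEY]
```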

There are many proposed lattice-based encryption systems. How do these systems compare in the security that they provide against known attacks, under various limits on communication volume? There are several reasons to be skeptical of graphs that claim to answer this question. Part of the problem is with the underlying data points, and part of the problem is with how the data points are converted into graphs.

ePrint Report
Compact linkable ring signatures and applications
Brandon Goodell, Sarang Noether, RandomRun

We describe an efficient linkable ring signature scheme, compact linkable spontaneous anonymous group (CLSAG) signatures, for use in confidential transactions. Compared to the existing signature scheme used in Monero, CLSAG signatures are both smaller and more efficient to generate and verify for ring sizes of interest. We generalize the construction and show how it can be used to produce signatures with coins of different types in the same transaction.

ePrint Report
On the Local Leakage Resilience of Linear Secret Sharing Schemes
Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, Tal Rabin

We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states.

We show that additive secret sharing schemes and high-threshold instances of Shamir’s secret sharing scheme are secure under local leakage attacks when the underlying field is of a large prime order and the number of parties is sufficiently large. This should be contrasted with the fact that any linear secret sharing scheme over a small characteristic field is clearly insecure under local leakage attacks, regardless of the number of parties. Our results are obtained via tools from Fourier analysis and additive combinatorics.
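The small-characteristic insecurity stated above can be demonstrated in a few lines: with additive sharing over a characteristic-2 field, one bit of local leakage per party (the low bit of each share) XORs to a bit of the secret, no matter how many parties there are. A hedged toy, with illustrative names:

```python
import secrets

def additive_shares_gf2(secret: int, n: int, bits: int = 16):
    """Additive sharing over GF(2)^bits: the n shares XOR to the secret."""
    shares = [secrets.randbits(bits) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s
    shares.append(last)
    return shares

# Local leakage: each party independently reveals the low bit of its
# own share. Over characteristic 2 these leakage bits XOR to the low
# bit of the secret, so the scheme leaks regardless of n.
secret = 0b1011
shares = additive_shares_gf2(secret, n=10)
leak = 0
for s in shares:
    leak ^= s & 1
assert leak == secret & 1
```

The paper's positive results show that over large prime-order fields this kind of attack fails, because the carries of modular addition scramble individual bits.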

We present two types of applications of the above results and techniques. As a positive application, we show that the “GMW protocol” for honest-but-curious parties, when implemented using shared products of random field elements (so-called “Beaver Triples”), is resilient in the local leakage model for sufficiently many parties and over certain fields. This holds even when the adversary has full access to a constant fraction of the views. As a negative application, we rule out multiparty variants of the share conversion scheme used in the 2-party homomorphic secret sharing scheme of Boyle et al. (Crypto 2016).

ePrint Report
The Exchange Attack: How to Distinguish 6 Rounds of AES with $2^{88.2}$ chosen plaintexts
Navid Ghaedi Bardeh, Sondre Rønjom

In this paper we present exchange-equivalence attacks, a cryptanalytic technique suitable for SPN-like block cipher designs. Our new technique results in a secret-key chosen-plaintext distinguisher for 6-round AES. The complexity of the distinguisher is about $2^{88.2}$ in terms of data, memory and computational complexity. The distinguishing attack for AES reduced to 6 rounds is a straightforward extension of an exchange attack for 5-round AES that requires about $2^{30}$ chosen plaintexts and computation. This is also a new record for AES reduced to 5 rounds. The main result of this paper is that AES up to at least 6 rounds is biased when restricted to exchange-invariant sets of plaintexts.

ePrint Report
Efficient MPC via Program Analysis: A Framework for Efficient Optimal Mixing
Muhammad Ishaq, Ana Milanova, Vassilis Zikas

Multi-party computation (MPC) protocols have been extensively optimized in an effort to bring this technology to practice, which has already started bearing fruits. The choice of which MPC protocol to use depends on the computation we are trying to perform. Protocol mixing is an effective approach, black-box with respect to the MPC protocols, to optimize performance. Despite considerable progress in recent years, however, existing works are heuristic and either give no guarantee or require an exponential (brute-force) search to find the optimal assignment, a problem which was conjectured to be NP-hard.

We provide a theoretically founded approach to optimal (MPC) protocol assignment, i.e., optimal mixing, and prove that under mild and natural assumptions, the problem is tractable both in theory and in practice for computing best two-out-of-three combinations. Concretely, for the case of two protocols, we utilize program analysis techniques---which we tailor to MPC---to define a new integer program, which we term the ``Optimal Protocol Assignment'' (in short, OPA) problem, whose solution is the optimal (mixed) protocol assignment for these two protocols. Most importantly, we prove that the solution to the linear program corresponding to the relaxation of OPA is integral, and hence is also a solution to OPA. Since linear programming can be efficiently solved, this yields the first efficient protocol mixer. We showcase the quality of our OPA solver by applying it to standard benchmarks from the mixing literature. Our OPA solver can be applied to any two-out-of-three protocol combination to obtain a best two-out-of-three protocol assignment.
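To see the objective being optimized, here is a toy protocol-assignment instance solved by the exponential brute-force search that the paper's integral LP relaxation replaces. The cost numbers and names are entirely hypothetical:

```python
from itertools import product

# Each gate has a per-protocol execution cost under protocol A or B,
# and converting the shared representation across a wire whose
# endpoint gates use different protocols costs extra.
gate_cost = {0: (5, 1), 1: (5, 1), 2: (1, 6)}  # gate -> (cost A, cost B)
wires = [(0, 2), (1, 2)]                       # gate adjacency
CONVERT = 2                                    # per mixed-protocol wire

def total_cost(assign):
    """assign[g] in {0, 1} picks protocol A (0) or B (1) for gate g."""
    c = sum(gate_cost[g][assign[g]] for g in gate_cost)
    c += sum(CONVERT for u, v in wires if assign[u] != assign[v])
    return c

# Brute force over all 2^3 assignments; the paper's contribution is
# obtaining this optimum in polynomial time via linear programming.
best = min(product((0, 1), repeat=3), key=total_cost)
```

On this instance the optimum keeps gates 0 and 1 under protocol B and gate 2 under A, paying the two conversion costs because they are cheaper than running any gate under its expensive protocol.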

ePrint Report
Incremental Proofs of Sequential Work
Nico Döttling, Russell W. F. Lai, Giulio Malavolta

A proof of sequential work allows a prover to convince a verifier that a certain number of sequential steps has been computed. In this work we introduce the notion of incremental proofs of sequential work, where a prover can carry on the computation done by the previous prover incrementally, without affecting the resources of the individual provers or the size of the proofs.

To date, the most efficient instance of proofs of sequential work [Cohen and Pietrzak, Eurocrypt 2018] for $N$ steps requires the prover to have $\sqrt{N}$ memory and to run for $N + \sqrt{N}$ steps. Using incremental proofs of sequential work we can bring down the prover's storage complexity to $\log N$ and its running time to $N$.

We propose two different constructions of incremental proofs of sequential work: Our first scheme requires a single processor and introduces a poly-logarithmic factor in the proof size when compared with the proposals of Cohen and Pietrzak. Our second scheme assumes $\log N$ parallel processors but brings down the overhead of the proof size to a factor of $9$. Both schemes are simple to implement and only rely on hash functions (modelled as random oracles).
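A minimal sketch of the incremental hand-off itself: one prover advances a sequential hash computation and a second prover continues from its state, reaching the same result as a single prover doing all the work. The schemes above additionally produce succinct proofs; here the verifier can only recompute, and all names are illustrative.

```python
import hashlib

def extend(state: bytes, steps: int) -> bytes:
    """Advance the sequential computation by 'steps' hash iterations;
    the resulting state can be handed to the next prover."""
    for _ in range(steps):
        state = hashlib.sha256(state).digest()
    return state

start = hashlib.sha256(b"statement").digest()
after_a = extend(start, 1000)   # prover A does 1000 steps, then stops
after_b = extend(after_a, 500)  # prover B carries on for 500 more

# Incrementality: the combined work equals one prover's 1500 steps.
assert after_b == extend(start, 1500)
```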

ePrint Report
Txilm: Lossy Block Compression with Salted Short Hashing
Donghui Ding, Xin Jiang, Jiaping Wang, Hao Wang, Xiaobing Zhang, Yi Sun

Current blockchains are restricted by low throughput. To address this problem, we propose Txilm, a protocol that compresses the representation of transactions in each block and thus saves the bandwidth of the blockchain network. In this protocol, a block carries short hashes of TXIDs instead of complete transactions. Combined with a transaction list sorted by TXIDs, Txilm achieves an 80-fold reduction in data size compared with the original blockchains. We also evaluate the probability of hash collisions, and provide methods of resolving such collisions. Finally, we design strategies to protect against possible attacks on Txilm.
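A hedged sketch of the two ingredients named above: a salted, truncated hash standing in for a full transaction, and a birthday-bound estimate of the collision probability among the short hashes in one block. The truncation length, block size, and function names are illustrative and may differ from the paper's parameters.

```python
import hashlib
import math

def short_id(salt: bytes, txid: bytes, nbytes: int = 4) -> bytes:
    """A salted short hash of the TXID carried in the block instead of
    the complete transaction (hundreds of bytes), giving a large
    reduction in block size. The salt varies per block, so an attacker
    cannot precompute colliding transactions."""
    return hashlib.sha256(salt + txid).digest()[:nbytes]

def collision_prob(n: int, bits: int) -> float:
    """Birthday-bound estimate of some collision among n random
    short hashes of 'bits' bits: p ~ 1 - exp(-n(n-1) / 2^(bits+1))."""
    return 1 - math.exp(-n * (n - 1) / 2 ** (bits + 1))

# e.g. a block with a few thousand transactions and 32-bit short IDs
p = collision_prob(3000, 32)
```

When a collision does occur, the abstract notes that resolution methods exist, for example retransmitting the colliding transactions in full.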

ePrint Report
Efficient Invisible and Unlinkable Sanitizable Signatures
Xavier Bultel, Pascal Lafourcade, Russell W. F. Lai, Giulio Malavolta, Dominique Schröder, Sri Aravinda Krishnan Thyagarajan

Sanitizable signatures allow designated parties (the sanitizers) to apply arbitrary modifications to some restricted parts of signed messages. A secure scheme should not only be unforgeable, but also protect privacy and hold both the signer and the sanitizer accountable. Two important security properties that are seemingly difficult to achieve simultaneously and efficiently are invisibility and unlinkability. While invisibility ensures that the admissible modifications are hidden from external parties, unlinkability says that sanitized signatures cannot be linked to their sources. Achieving both properties simultaneously is crucial for applications where sensitive personal data is signed with respect to data-dependent admissible modifications. The existence of an efficient construction achieving both properties was recently posed as an open question by Camenisch et al. (PKC’17).
In this work, we propose a solution to this problem with a two-step construction. First, we construct (non-accountable) invisible and unlinkable sanitizable signatures from signatures on equivalence classes and other basic primitives. Second, we put forth a generic transformation using verifiable ring signatures to turn any non-accountable sanitizable signature into an accountable one while preserving all other properties. When instantiated in the generic group and random oracle model, the efficiency of our construction is comparable to that of prior constructions, while providing stronger security guarantees.