International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

14 February 2020

Nathan Keller, Asaf Rosemarin
ePrint Report
The HADES design strategy combines the classical SPN construction with the Partial SPN (PSPN) construction, in which, at every encryption round, the non-linear layer is applied to only part of the state. In a HADES design, a middle layer consisting of PSPN rounds is surrounded by outer layers of SPN rounds. The security arguments of HADES with respect to statistical attacks use only the SPN rounds, disregarding the PSPN rounds. This allows the designers to impose no restriction on the MDS matrix used as the linear mixing operation.
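
As a rough illustration of this round structure (the field, state width, round numbers, power-map S-box and Cauchy matrix below are toy placeholders for this sketch, not a HADES instantiation), the partial middle rounds being exactly the part whose interaction with the MDS matrix the paper analyzes:

import random

# Schematic HADES-like permutation over F_p: R_F full SPN rounds, then R_P
# partial (PSPN) rounds in which the S-box is applied to a single cell, then
# R_F full rounds again. All parameters below are illustrative only.

P = 2**31 - 1                         # toy prime field
T = 6                                 # number of state cells
R_F, R_P = 4, 10                      # toy round numbers

SBOX = lambda x: pow(x, 3, P)         # toy power-map S-box
MDS = [[pow(i + j + 1, -1, P) for j in range(T)] for i in range(T)]  # Cauchy matrix
RC = [[random.randrange(P) for _ in range(T)] for _ in range(2 * R_F + R_P)]

def mix(state):                       # linear layer: multiply by the MDS matrix
    return [sum(MDS[i][j] * state[j] for j in range(T)) % P for i in range(T)]

def round_fn(state, rc, full):
    state = [(x + c) % P for x, c in zip(state, rc)]
    state = [SBOX(x) for x in state] if full else [SBOX(state[0])] + state[1:]
    return mix(state)

def permute(state):
    schedule = [True] * R_F + [False] * R_P + [True] * R_F
    for rc, full in zip(RC, schedule):
        state = round_fn(state, rc, full)
    return state

print(permute(list(range(T))))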

In this paper we show that the choice of the MDS matrix significantly affects the security level provided by HADES designs. If the MDS is chosen properly, then the security level of the scheme against differential and linear attacks is significantly higher than claimed by the designers. On the other hand, weaker choices of the MDS allow for extremely large invariant subspaces that pass the entire middle layer without activating any non-linear operation (a.k.a. S-box).

We showcase our results on the Starkad and Poseidon instantiations of HADES. For Poseidon, we significantly improve the lower bounds on the number of active S-boxes with respect to both differential and linear cryptanalysis provided by the designers -- for example, from 28 to 60 active S-boxes for the t=6 variant. For Starkad, we show that the t=24 variant proposed by the designers admits a huge invariant subspace, of size $2^{1134}$, that passes any number of PSPN rounds without activating any S-box. Furthermore, we show that the problem can be fixed easily by replacing t with any value that is not divisible by four.
Santosh Ghosh, Luis S Kida, Soham Jayesh Desai, Reshma Lal
ePrint Report
This paper proposes a method to protect DMA data transfers that can be used to offload computation to an accelerator. The proposal minimizes changes to the hardware platform and to the application and SW stack. The paper describes the end-to-end scheme to protect communication between an application running inside an SGX enclave and an FPGA accelerator, optimized for bandwidth and latency, and details the implementation of AES-GCM hardware engines with high bandwidth and low latency.
Christian Badertscher, Ueli Maurer, Christopher Portmann, Guilherme Rito
ePrint Report
This paper takes a fresh approach to systematically characterizing, comparing, and understanding CCA-type security definitions for public-key encryption (PKE), a topic with a long history. The justification for a concrete security definition $X$ is relative to a benchmark application (e.g. confidential communication): Does the use of a PKE scheme satisfying $X$ imply the security of the application? Because unnecessarily strong definitions may lead to unnecessarily inefficient schemes or unnecessarily strong computational assumptions, security definitions should be as weak as possible, i.e. as close as possible to (but above) the benchmark. Understanding the hierarchy of security definitions, partially ordered by the implication (i.e. at least as strong) relation, is hence important, as is placing the relevant applications as benchmark levels within the hierarchy.

CCA-2 security is apparently the strongest notion, but because it is arguably too strong, Canetti, Krawczyk, and Nielsen (Crypto 2003) proposed the relaxed notion of Replayable CCA security (RCCA) as perhaps the weakest meaningful definition, and they investigated the space between CCA and RCCA security by proposing two versions of Detectable RCCA (d-RCCA) security, which are meant to ensure that replays of ciphertexts are either publicly or secretly detectable (and hence preventable).
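
For intuition, a rough sketch of how the RCCA decryption oracle differs from the CCA-2 one, in the standard textbook formulation (not code from the paper; the helper names are illustrative):

# Decryption oracles in the second phase of the CCA-2 and RCCA games.
# `decrypt`, `sk`, the challenge ciphertext c_star and the challenge
# messages m0, m1 are assumed to be set up by the surrounding game.

def cca2_oracle(c, sk, c_star, decrypt):
    if c == c_star:
        return None                 # only the exact challenge ciphertext is refused
    return decrypt(sk, c)

def rcca_oracle(c, sk, m0, m1, decrypt):
    m = decrypt(sk, c)
    if m in (m0, m1):
        return "test"               # any replay/re-encryption of the challenge is blinded
    return m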

The contributions of this paper are three-fold. First, following the work of Coretti, Maurer, and Tackmann (Asiacrypt 2013), we formalize the three benchmark applications of PKE that serve as the natural motivation for security notions, namely the construction of certain types of (possibly replay-protected) confidential channels (from an insecure and an authenticated communication channel). Second, we prove that RCCA does not achieve the confidentiality benchmark and, contrary to previous belief, that the proposed d-RCCA notions are not even relaxations of CCA-2 security. Third, we propose the natural security notions corresponding to the three benchmarks: an appropriately strengthened version of RCCA to ensure confidentiality, as well as two notions for capturing public and secret replay detectability.
Eugene Frimpong, Alexandros Bakas, Hai-Van Dang, Antonis Michalas
ePrint Report
Symmetric Searchable Encryption (SSE) allows the outsourcing of encrypted data to possibly untrusted third-party services while simultaneously giving users the opportunity to search over the encrypted data in a secure and privacy-preserving way. Currently, the majority of SSE schemes have been designed to fit a typical cloud service scenario where users (clients) encrypt their data locally and upload them securely to a remote location. While this scenario squarely fits the cloud paradigm, it cannot apply to the emerging field of the Internet of Things (IoT). This is due to the fact that the performance of most existing SSE schemes has been tested using powerful machines and not the constrained devices used in IoT services. The focus of this paper is to prove that SSE schemes can, under certain circumstances, work on constrained devices and eventually be adopted by IoT services. To this end, we designed and implemented a forward-private dynamic SSE scheme that can run smoothly on resource-constrained devices. To do so, we adopted a fog node scenario where edge (constrained) devices sense data, encrypt them locally and use the capabilities of fog nodes to store the sensed data in a remote location (the cloud). Consequently, end users can search for specific keywords over the stored ciphertexts without revealing anything about their content. Our scheme achieves efficient computational operations and supports the multi-client model. The performance of the scheme is evaluated by conducting extensive experiments. Finally, the security of the scheme is proven through a theoretical analysis that considers the existence of a malicious adversary.
Stefan Dziembowski, Grzegorz Fabiański, Sebastian Faust, Siavash Riahi
ePrint Report
Most blockchains do not scale well, i.e., they cannot quickly process large numbers of transactions. Moreover, using blockchains can be expensive in real life, since blockchain operations cost fees. One of the remedies for these problems is \emph{off-chain} (or \emph{Layer-2}) protocols, where the massive bulk of transactions is kept outside of the main blockchain. In the optimistic case, off-chain protocols drastically improve scalability, since typically the users only need to communicate with the blockchain when they enter or exit the system. In the pessimistic case, when parties are malicious, a ``smart contract'' running on the underlying blockchain guarantees that no coins are stolen.

In this work we initiate the study of the inherent limitations of off-chain protocols. Concretely, we investigate the so-called \emph{Plasma} systems (also called ``commit chains''), and show that malicious parties can always launch an attack that forces the honest parties to communicate large amounts of data to the blockchain. More concretely, the adversary can always either (a) force the honest parties to communicate a lot with the blockchain even though they did not intend to (this is traditionally called a \emph{mass exit}), or (b) force an honest party that wants to leave the system to quickly communicate large amounts of data to the blockchain. What makes these attacks particularly hard to handle in real life (and also makes our result stronger) is that they do not have so-called \emph{uniquely attributable faults}, i.e.~the smart contract cannot determine which party is malicious, and hence cannot force it to pay the fees for the blockchain interaction.

An important implication of our result is that the benefits of two of the most prominent Plasma types, called \emph{Plasma Cash} and \emph{Fungible Plasma}, cannot be achieved simultaneously. Our results apply to every Plasma system and cannot be circumvented by introducing additional cryptographic assumptions.
Mohammad Zaheri, Adam O'Neill
ePrint Report
Classically, selective-opening attack (SOA) has been studied for randomized primitives, like randomized encryption schemes and commitments. The study of SOA for deterministic primitives, which presents some unique challenges, was initiated by Bellare et al. (PKC 2015), who showed negative results. Subsequently, Hoang et al. (ASIACRYPT 2016) showed positive results in the non-programmable random oracle model. Here we show the first positive results for SOA security of deterministic primitives in the standard (RO-devoid) model. Our results are:
\begin{itemize}
\item Any $2t$-wise independent hash function is SOA secure for an unbounded number of ``$t$-correlated'' messages, meaning any group of up to $t$ messages may be arbitrarily correlated.
\item An analogous result for deterministic encryption, from a close variant of an NPROM scheme proposed by Hoang et al.
\item We connect the one-more-RSA problem of Bellare et al. (J.~Cryptology 2003) to this context and demonstrate this problem is hard under the $\Phi$-Hiding Assumption with a large enough encryption exponent.
\end{itemize}
Our results indicate that SOA for deterministic primitives in the standard model is more tractable than prior work would indicate.
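
As a reminder of the object in the first item (a standard textbook construction, not one proposed in this paper; the prime and parameters below are arbitrary toy choices), a polynomial of degree $2t-1$ with uniformly random coefficients over a prime field gives a $2t$-wise independent hash family:

import random

# h(x) = a_0 + a_1 x + ... + a_{k-1} x^{k-1} mod p with uniformly random
# coefficients is a k-wise independent hash family; taking k = 2t gives the
# 2t-wise independence used above.

P = 2**61 - 1

def sample_kwise_hash(k, p=P):
    coeffs = [random.randrange(p) for _ in range(k)]
    def h(x):
        acc = 0
        for a in reversed(coeffs):    # Horner evaluation of the polynomial
            acc = (acc * x + a) % p
        return acc
    return h

h = sample_kwise_hash(k=6)            # a 6-wise (t = 3) independent hash function
print(h(42), h(43))
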
Dimitris Karakostas, Aggelos Kiayias
ePrint Report
Distributed ledgers based on the Proof-of-Work (PoW) paradigm are typically most vulnerable when mining participation is low. During these periods an attacker can mount devastating attacks, such as double spending or censorship of transactions. Checkpointing has been proposed as a mechanism to mitigate such 51% attacks. The core idea is to employ an external set of parties that securely run an assisting service which guarantees the ledger's properties and can be relied upon at times when the invested hashing power is low. We realize the assisting service in two ways, via checkpointing and timestamping, and show that a ledger, which employs either, is secure with high probability, even in the presence of an adversarial mining majority. We put forth the first rigorous study of checkpointing as a mechanism to protect PoW ledgers from 51% attacks. Notably, our design is the first to offer both consistency and liveness guarantees, even under adversarial mining majorities. Our liveness analysis also identifies a previously undocumented attack, namely front-running, which enables Denial-of-Service against existing checkpointed ledger systems. We showcase the liveness guarantees of our mechanism by evaluating the checkpointed version of Ethereum Classic, a blockchain which recently suffered a 51% attack, and build a federated distributed checkpointing service, which provides high assurance with low performance requirements. Finally, we prove the security of our timestamping mechanism, build a fully decentralized timestamping solution, by utilizing a secure distributed ledger, and evaluate its performance on the existing Bitcoin and Ethereum systems.
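
As a rough illustration of the federated checkpointing idea (generic quorum-signing logic with placeholder names, not the paper's exact construction):

# A block hash counts as checkpointed once a quorum of federation members has
# produced a valid signature on it; a node then only adopts chains extending
# the latest checkpoint, however much hashing power a competing chain carries.
# `verify_sig` is a placeholder for any signature-verification routine.

def is_checkpointed(block_hash, signatures, federation_keys, quorum, verify_sig):
    signers = {pk for pk, sig in signatures
               if pk in federation_keys and verify_sig(pk, block_hash, sig)}
    return len(signers) >= quorum
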
Daan Leermakers, Boris Skoric
ePrint Report
We revisit Unclonable Encryption as introduced by Gottesman in 2003. We introduce two qubit-based prepare-and-measure Unclonable Encryption schemes with re-usable keys. In our first scheme there is no classical communication from Alice to Bob. Over a noisy channel, its communication rate is lower than that of Quantum Key Recycling schemes that lack unclonability. Our second scheme needs more rounds but has the advantage that it achieves the same rate as Quantum Key Distribution.

We provide security proofs for both our schemes, based on the diamond norm distance, taking noise into account.
Martine De Cock, Rafael Dowsley, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen, Ariel Todoki
ePrint Report
In this paper, we present a secure logistic regression training protocol and its implementation, with a new subprotocol to securely compute the activation function. To the best of our knowledge, we present the fastest existing secure Multi-Party Computation implementation for training logistic regression models on high dimensional genome data distributed across a local area network.
Saikrishna Badrinarayanan, James Bartusek, Sanjam Garg, Daniel Masny, Pratyay Mukherjee
ePrint Report
We present a reusable two-round multi-party computation (MPC) protocol from the Decisional Diffie-Hellman (DDH) assumption. In particular, we show how to upgrade any secure two-round MPC protocol to allow reusability of its first message across multiple computations, using Homomorphic Secret Sharing (HSS) and pseudorandom functions in NC1 -- each of which can be instantiated from DDH.

In our construction, if the underlying two-round MPC protocol is secure against semi-honest adversaries (in the plain model) then so is our reusable two-round MPC protocol. Similarly, if the underlying two-round MPC protocol is secure against malicious adversaries (in the common random/reference string model) then so is our reusable two-round MPC protocol. Previously, such reusable two-round MPC protocols were only known under assumptions on lattices.

At a technical level, we show how to upgrade any two-round MPC protocol to a first message succinct two-round MPC protocol, where the first message of the protocol is generated independently of the computed circuit (though it is not reusable). This step uses homomorphic secret sharing (HSS) and low-depth pseudorandom functions. Next, we show a generic transformation that upgrades any first message succinct two-round MPC to allow for reusability of its first message.
Prabhanjan Ananth, Abhishek Jain, Zhengzhong Jin
ePrint Report
The notion of threshold multi-key fully homomorphic encryption (TMK-FHE) [Lopez-Alt, Tromer, Vaikuntanathan, STOC'12] was proposed as a generalization of fully homomorphic encryption to the multiparty setting. In a TMK-FHE scheme for $n$ parties, each party can individually choose a key pair and use it to encrypt its own private input. Given $n$ ciphertexts computed in this manner, the parties can homomorphically evaluate a circuit $C$ over them to obtain a new ciphertext containing the output of $C$, which can then be decrypted via a threshold decryption protocol. The key efficiency property is that the size of the (evaluated) ciphertext is independent of the size of the circuit.
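
Schematically, the syntax just described can be summarized by the following interface sketch (type signatures only; the class and method names are illustrative, not taken from the paper):

# Interface sketch of threshold multi-key FHE as described above. Each party
# runs keygen and encrypt independently; eval produces a single ciphertext
# whose size is independent of the circuit size; decryption is a one-round
# threshold protocol.

class TMKFHE:
    def keygen(self):
        """Return (pk_i, sk_i) for one party."""
        raise NotImplementedError

    def encrypt(self, pk_i, x_i):
        """Encrypt party i's private input under its own key."""
        raise NotImplementedError

    def eval(self, circuit, public_keys, ciphertexts):
        """Homomorphically evaluate `circuit` on ct_1, ..., ct_n; the output
        ciphertext size does not grow with the size of the circuit."""
        raise NotImplementedError

    def partial_decrypt(self, sk_i, ct):
        """One-round threshold decryption: party i's decryption share."""
        raise NotImplementedError

    def combine(self, shares):
        """Reconstruct C(x_1, ..., x_n) from all decryption shares."""
        raise NotImplementedError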

TMK-FHE with one-round threshold decryption, first constructed by Mukherjee and Wichs [Eurocrypt'16], has found several powerful applications in cryptography over the past few years. However, an important drawback of all such TMK-FHE schemes is that they require a common setup which results in applications in the common random string model.

To address this concern, we propose a notion of multiparty homomorphic encryption (MHE) that retains the communication efficiency property of TMK-FHE, but sacrifices on the efficiency of final decryption. Specifically, MHE is defined in a similar manner to TMK-FHE, except that the final output computation process performed locally by each party is ``non-compact'' in that we allow its computational complexity to depend on the size of the circuit. We observe that this relaxation does not have a significant bearing on many important applications of TMK-FHE.

Our main contribution is a construction of MHE from the learning with errors assumption in the plain model. Our scheme can be used to remove the setup in many applications of TMK-FHE. For example, it yields the first construction of low-communication reusable non-interactive MPC in the plain model. To obtain our result, we devise a recursive self-synthesis procedure to transform any ``delayed-function'' two-round MPC protocol into an MHE scheme.
Xavier Bonnetain, Rémi Bricout, André Schrottenloher, Yixin Shen
ePrint Report
We present new classical and quantum algorithms for solving random subset-sum instances. First, we improve over the Becker-Coron-Joux algorithm (EUROCRYPT 2011) from $\widetilde{\mathcal{O}} \left(2^{0.291 n}\right)$ down to $\widetilde{\mathcal{O}} \left(2^{0.283 n}\right)$, using more general representations with values in $\{-1,0,1,2\}$.
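
For reference, the underlying problem, together with the classical $2^{n/2}$ meet-in-the-middle baseline that the representation-technique algorithms improve upon (a textbook baseline, not one of the paper's algorithms):

import random
from itertools import combinations

# Random subset-sum instance: given a_1..a_n and a target T that is the sum of
# a random subset, recover some subset summing to T. Meet-in-the-middle runs in
# time and memory ~2^(n/2); the algorithms above push the exponent well below 0.5.

def random_instance(n, bits=None):
    bits = bits or n                       # density ~1: n-bit integers
    a = [random.getrandbits(bits) for _ in range(n)]
    secret = [random.randrange(2) for _ in range(n)]
    return a, sum(x for x, e in zip(a, secret) if e)

def meet_in_the_middle(a, target):
    half = len(a) // 2
    left, right = a[:half], a[half:]
    table = {}
    for r in range(len(left) + 1):         # enumerate all subsets of the left half
        for comb in combinations(range(len(left)), r):
            table[sum(left[i] for i in comb)] = comb
    for r in range(len(right) + 1):        # match each right-half subset against the table
        for comb in combinations(range(len(right)), r):
            need = target - sum(right[i] for i in comb)
            if need in table:
                return sorted(table[need]) + [half + i for i in comb]
    return None

a, T = random_instance(24)
print(meet_in_the_middle(a, T))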

Next, we improve the state of the art of quantum algorithms for this problem in several directions. By combining the Howgrave-Graham-Joux algorithm (EUROCRYPT 2010) and quantum search, we devise an algorithm with asymptotic cost $\widetilde{\mathcal{O}} \left(2^{0.236 n}\right)$, lower than the cost of the quantum walk based on the same classical algorithm proposed by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013). This algorithm has the advantage of using classical memory with quantum random access, while the previously known algorithms used the quantum walk framework, and required quantum memory with quantum random access.

We also propose new quantum walks for subset-sum, performing better than the previous best time complexity of $\widetilde{\mathcal{O}} \left(2^{0.226 n}\right)$ given by Helm and May (TQC 2018). We combine our new techniques to reach a time $\widetilde{\mathcal{O}} \left(2^{0.216 n}\right)$. This time is dependent on a heuristic on quantum walk updates, formalized by Helm and May, that is also required by the previous algorithms. We show how to partially overcome this heuristic, and we obtain an algorithm with quantum time $\widetilde{\mathcal{O}} \left(2^{0.218 n}\right)$ requiring only the standard classical subset-sum heuristics.

13 February 2020

Genova, Italy, 19 June 2020
Event Calendar
Event date: 19 June 2020
Submission deadline: 16 March 2020
Notification: 12 April 2020
Jinhyun So, Basak Guler, A. Salman Avestimehr
ePrint Report
Federated learning is gaining significant interest as it enables model training over a large volume of data that is stored, in a distributed manner, across many users, while protecting the privacy of the individual users. However, a major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In fact, the overhead of state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. We propose a new scheme, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log{N})$, as opposed to $O(N^2)$, while tolerating a user dropout rate of up to $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a $14\times$ speedup over the state-of-the-art schemes with up to $N=200$ users. We also experimentally evaluate the impact of several key network parameters (e.g., user dropout rate, bandwidth, and model size) on the performance of Turbo-Aggregate.
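
For intuition about the additive-secret-sharing building block mentioned above (a generic textbook sketch over a toy field, not Turbo-Aggregate's multi-group circular protocol; the modulus and names are placeholders):

import random

# Each user splits its model update into additive shares; summing the shares of
# all users and then recombining reveals only the aggregate, never an
# individual update.

P = 2**61 - 1                              # toy field modulus

def share(value, n_shares, p=P):
    shares = [random.randrange(p) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % p)
    return shares

def aggregate(all_shares, p=P):
    # all_shares[u][j] is user u's j-th share; sum share-wise, then combine.
    n_shares = len(all_shares[0])
    sums = [sum(user[j] for user in all_shares) % p for j in range(n_shares)]
    return sum(sums) % p

updates = [5, 7, 11]
shared = [share(u, n_shares=3) for u in updates]
assert aggregate(shared) == sum(updates) % P
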
Stefan Dziembowski, Paweł Kędzior
ePrint Report
\emph{Off-chain channel networks} are one of the most promising technologies for dealing with blockchain scalability and delayed finality issues. Parties that are connected within such networks can send coins to each other without interacting with the blockchain. Moreover, these payments can be ``routed'' over the network. Thanks to this, even parties that do not have a channel in common can perform payments between each other with the help of intermediaries.

In this paper, we present a new technique (that we call \emph{Dynamic Internal Payment Splitting (DIPS)}) that allows the intermediaries in the network to split the payments into several sub-payments. This can be done recursively multiple times by subsequent intermediaries. Moreover, the resulting ``payment receipts'' can be aggregated by each intermediary into one short receipt that can be propagated back in the network. We present a protocol (that we call ``Ethna'') that uses this technique. We provide a formal security definition of our protocol and we prove that Ethna satisfies it. We also implement a simple variant of Ethna in Solidity and provide some benchmarks.
Aron Gohr, Sven Jacob, Werner Schindler
ePrint Report
This paper has four main goals. First, we show how we solved the CHES 2018 AES challenge in the contest using essentially a linear classifier combined with a SAT solver and a custom error correction method. This part of the paper has previously appeared in a preprint of our own and later as a contribution to a preprint write-up of the solutions by the three winning teams. This solution serves as a baseline for other solutions explored in the paper.

Second, we develop a novel deep neural network architecture for side-channel analysis that completely breaks the AES challenge, allowing for fairly reliable key recovery with just a single trace on the unknown-device part of the CHES challenge. This solution significantly improves upon all previously published solutions of the AES challenge, including our baseline linear solution.

Third, we consider the question of leakage attribution, both for the classifier we used in the challenge and for our deep neural network. For our linear classifier, direct inspection of the weight vector of the machine learning model yields a lot of information on the implementation. For the deep neural network, we test three other strategies (occlusion of traces; inspection of adversarial changes; knowledge distillation) and find that these can yield information on the leakage essentially equivalent to that gained by inspecting the weights of the simpler model.
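
A rough sketch of the occlusion idea (a generic attribution technique, not necessarily the paper's exact procedure; `model.score`, the window size and the fill value are placeholder assumptions):

import numpy as np

# Occlusion-based leakage attribution: blank out a sliding window of the trace
# and record how much the classifier's score for the correct label drops;
# large drops mark time samples the model relies on.

def occlusion_attribution(model, trace, label, window=50, fill=0.0):
    base = model.score(trace, label)       # placeholder scoring interface
    importance = np.zeros(len(trace))
    for start in range(0, len(trace) - window + 1, window):
        occluded = trace.copy()
        occluded[start:start + window] = fill
        importance[start:start + window] = base - model.score(occluded, label)
    return importance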

Fourth, we study the properties of adversarially generated side-channel traces for our model. Partly reproducing recent work on useful features in adversarial examples in our application domain, we find that a linear classifier generalizing to an unseen device much better than our linear baseline can be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network. This gives a new way of extracting human-usable knowledge from a deep side channel model while also yielding insights on adversarial examples in an application domain where relatively few sources of spurious correlations between data and labels exist.
Alex Bienstock, Allison Bishop, Eli Goldin, Garrison Grogan, Victor Lecomte
ePrint Report
In the fall of 2018, a professor became obsessed with conspiracy theories of deeper connections between discrete-log based cryptography and lattice based cryptography. That obsession metastasized and spread to some of the students in the professor's cryptography course through a cryptanalysis challenge that was set as a class competition. The students and the professor continued travelling further down the rabbit hole, refusing to stop when the semester was over. Refusing to stop even as some of the students graduated, and really refusing to stop even now, but pausing long enough to write up this chronicle of their exploits.
Akin Ünal
ePrint Report
Functional Encryption denotes a form of encryption where a master secret key-holder can control which functions a user can evaluate on encrypted data. Learning With Errors (LWE) (Regev, STOC'05) is known to be a useful cryptographic hardness assumption which implies strong primitives such as, for example, fully homomorphic encryption (Brakerski-Vaikuntanathan, FOCS'11) and lockable obfuscation (Goyal et al., Wichs et al., FOCS'17). Despite its strength, however, there is just a limited number of functional encryption schemes which can be based on LWE. In fact, there are functional encryption schemes which can be achieved by using pairings but for which no secure instantiations from lattice-based assumptions are known: function-hiding inner product encryption (Lin, Baltico et al., CRYPTO'17) and compact quadratic functional encryption (Abdalla et al., CRYPTO'18). This raises the question of whether there are mathematical barriers which hinder us from realizing function-hiding and compact functional encryption schemes from lattice-based assumptions such as LWE.

To study this problem, we prove an impossibility result for function-hiding functional encryption schemes which meet some algebraic restrictions at ciphertext encryption and decryption. These restrictions are met by many attribute-based, identity-based and functional encryption schemes whose security stems from LWE. Therefore, we see our results as important indications of why it is hard to construct new functional encryption schemes from LWE, and of which mathematical restrictions have to be overcome to construct secure lattice-based functional encryption schemes for new functionalities.
Ignacio Cascudo, Jaron Skovsted Gundersen
ePrint Report
We present a new secure multiparty computation protocol that allows for evaluating a number of instances of a boolean circuit in parallel, with a small online communication complexity per instance of $10$ bits per party and multiplication gate. Our protocol is secure against an active adversary corrupting a dishonest majority. The protocol uses an approach introduced recently in the setting of honest majority and information-theoretic security, based on the algebraic notion known as reverse multiplication friendly embeddings, which essentially transforms a batch of evaluations of an arithmetic circuit over a small field into one evaluation of another arithmetic circuit over a larger field. To obtain security against a dishonest majority, we combine this approach with the well-known SPDZ protocol that provides security against a dishonest majority but operates over a large field. Like SPDZ and its variants, our protocol operates in the preprocessing model. Structurally our protocol is most similar to MiniMAC, a protocol which bases its security on the use of error-correcting codes, but our protocol has a communication complexity which is half of that of MiniMAC when the best available binary codes are used. With respect to a certain variant of MiniMAC that utilizes codes over larger fields, our communication complexity is slightly worse; however, that variant of MiniMAC needs a much larger preprocessing than ours. We also show that our protocol has a smaller amortized communication complexity than Committed MPC, a protocol for general fields based on homomorphic commitments, if we use the best available constructions for those commitments. Finally, we construct a preprocessing phase from oblivious transfer based on ideas from MASCOT and Committed MPC.
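
For reference, in the standard formulation a reverse multiplication friendly embedding is a pair of $\mathbb{F}_q$-linear maps
\[
  \phi : \mathbb{F}_q^k \to \mathbb{F}_{q^m}, \qquad
  \psi : \mathbb{F}_{q^m} \to \mathbb{F}_q^k, \qquad
  \text{such that} \quad
  \psi\bigl(\phi(\mathbf{x}) \cdot \phi(\mathbf{y})\bigr) = \mathbf{x} \ast \mathbf{y}
  \quad \text{for all } \mathbf{x}, \mathbf{y} \in \mathbb{F}_q^k,
\]
where $\ast$ denotes the coordinatewise product; one multiplication in $\mathbb{F}_{q^m}$ thus emulates $k$ parallel multiplications in $\mathbb{F}_q$, which is how a batch of boolean-circuit evaluations is packed into a single evaluation over the larger field.
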
Hanlin Liu, Yu Yu, Shuoyao Zhao, Jiang Zhang, Wenling Liu
ePrint Report
A universal circuit (UC) is a general-purpose circuit that can simulate arbitrary circuits (up to a certain size $n$). Valiant provides a $k$-way recursive construction of universal circuits (STOC 1976), where $k$ tunes the complexity of the recursion. More concretely, Valiant gives theoretical constructions of 2-way and 4-way UCs of asymptotic (multiplicative) sizes $5n\log n$ and $4.75 n\log n$ respectively, which matches the asymptotic lower bound $\Omega(n\log n)$ up to some constant factor.

Motivated by various privacy-preserving cryptographic applications, Kiss et al. (Eurocrypt 2016) validated the practicality of 2-way universal circuits by giving example implementations for private function evaluation. Günther et al. (Asiacrypt 2017) and Alhassan et al. (ePrint/2019/348) implemented the 2-way/4-way hybrid UCs with various optimizations in place towards making universal circuits more practical. Zhao et al. (Asiacrypt 2019) optimized Valiant's 4-way UC to asymptotic size $4.5 n\log n$ and proved a lower bound $3.64 n\log n$ for UCs under Valiant's framework. As the scale of computation goes beyond the 10-million-gate ($n=10^7$) or even billion-gate level ($n=10^9$), the constant factor in circuit size plays an increasingly important role in application performance. In this work, we investigate Valiant's universal circuits and present an improved framework for constructing universal circuits with the following advantages.
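
To make the constant factor concrete, a quick back-of-the-envelope comparison of the leading terms $c \cdot n\log n$ quoted above (leading terms only, ignoring the lower-order terms of the actual constructions):

from math import log2

# Leading-term sizes c * n * log2(n) at the 10-million- and billion-gate scales.
for n in (10**7, 10**9):
    for name, c in (("Valiant 4-way", 4.75), ("Zhao et al.", 4.5), ("this work", 3.0)):
        print(f"n = {n:.0e}  {name:<13} ~ {c * n * log2(n):.2e} gates")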

[*Simplicity*] Parameterization is no longer needed. Whereas previous implementations resort to a hybrid construction combining $k=2$ and $k=4$ to trade off fine granularity against asymptotic size-efficiency, our construction gets the best of both worlds when configured at the lowest complexity (i.e., $k=2$).

[*Compactness*] Our universal circuits have asymptotic size $3n\log n$, improving upon the best previously known $4.5n\log n$ by 33\% and beating the $3.64n\log n$ lower bound for UCs constructed under Valiant's framework (Zhao et al., Asiacrypt 2019).

[*Tightness*] We show that under our new framework the universal circuit size is lower bounded by $2.95 n\log n$, which almost matches the $3n\log n$ circuit size of our 2-way construction.

We implement our 2-way universal circuits and compare their performance with other implementations, which confirms our theoretical analysis.