International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

26 February 2019

Michael Klooß, Anja Lehmann, Andy Rupp
ePrint Report
An updatable encryption scheme allows a data host to update ciphertexts of a client from an old to a new key, given so-called update tokens from the client. Rotation of the encryption key is a common requirement in practice in order to mitigate the impact of key compromises over time. There are two incarnations of updatable encryption: One is ciphertext-dependent, i.e., the data owner has to (partially) download all of his data and derive a dedicated token per ciphertext. Everspaugh et al. (CRYPTO'17) proposed CCA and CTXT secure schemes in this setting. The other, more convenient variant is ciphertext-independent, i.e., it allows a single token to update all ciphertexts. However, so far, the broader functionality of tokens in this setting comes at the price of considerably weaker security: the existing schemes by Boneh et al. (CRYPTO'13) and Lehmann and Tackmann (EUROCRYPT'18) only achieve CPA security and provide no integrity protection. Arguably, when targeting the scenario of outsourcing data to an untrusted host, plaintext integrity should be a minimal security requirement. Otherwise, the data host may alter or inject ciphertexts arbitrarily. Indeed, the schemes from BLMR13 and LT18 suffer from this weakness, and even EPRS17 only provides integrity against adversaries which cannot arbitrarily inject ciphertexts. In this work, we provide the first ciphertext-independent updatable encryption schemes with security beyond CPA, in particular providing strong integrity protection. Our constructions and security proofs of updatable encryption schemes are surprisingly modular. We give a generic transformation that allows key-rotation and confidentiality/integrity of the scheme to be treated almost separately, i.e., security of the updatable scheme is derived from simple properties of its static building blocks. An interesting side effect of our generic approach is that it immediately implies the unlinkability of ciphertext updates that was introduced as an essential additional property of updatable encryption by EPRS17 and LT18.
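To make the syntax concrete, here is a minimal Python sketch of the ciphertext-independent interface described above (key generation, encryption, decryption, token generation, update). The additive toy "cipher" provides no security whatsoever and is not the authors' construction; it only illustrates how a single token lets the data host re-key every ciphertext without seeing any plaintext.

```python
# Sketch of the ciphertext-independent updatable-encryption interface.
# The toy additive cipher over 64-bit integers is NOT secure; it only
# illustrates the syntax and the role of the single update token.
import secrets

MOD = 2 ** 64

def key_gen() -> int:
    return secrets.randbelow(MOD)

def enc(key: int, msg: int) -> int:
    return (msg + key) % MOD

def dec(key: int, ct: int) -> int:
    return (ct - key) % MOD

def token_gen(old_key: int, new_key: int) -> int:
    # One token suffices to update all ciphertexts under old_key.
    return (new_key - old_key) % MOD

def update(token: int, ct: int) -> int:
    # The data host rotates the ciphertext without learning the plaintext.
    return (ct + token) % MOD

if __name__ == "__main__":
    k_old, k_new = key_gen(), key_gen()
    ct = enc(k_old, 42)
    delta = token_gen(k_old, k_new)
    assert dec(k_new, update(delta, ct)) == 42
```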
Shuichi Katsumata, Shota Yamada
ePrint Report
In a group signature scheme, users can anonymously sign messages on behalf of the group they belong to, yet it is possible to trace the signer when needed. Since the first proposal of lattice-based group signatures in the random oracle model by Gordon, Katz, and Vaikuntanathan (ASIACRYPT 2010), their realization in the standard model from lattices has attracted much research interest; however, it has remained unsolved. In this paper, we make progress on this problem by giving the first such construction. Our schemes satisfy CCA-selfless anonymity and full traceability, which are the standard security requirements for group signatures proposed by Bellare, Micciancio, and Warinschi (EUROCRYPT 2003) with a slight relaxation in the anonymity requirement suggested by Camenisch and Groth (SCN 2004). We emphasize that even with this relaxed anonymity requirement, all previous group signature constructions rely on random oracles or NIZKs, where currently NIZKs are not known to be implied from lattice-based assumptions. We propose two constructions that provide tradeoffs regarding the security assumption and efficiency:

- Our first construction is proven secure assuming the standard LWE and the SIS assumption. The sizes of the public parameters and the signatures grow linearly in the number of users in the system.

- Our second construction is proven secure assuming the standard LWE and the subexponential hardness of the SIS problem. The sizes of the public parameters and the signatures are independent of the number of users in the system.

Technically, we obtain the above schemes by combining a secret key encryption scheme with additional properties and a special type of attribute-based signature (ABS) scheme, thus bypassing the utilization of NIZKs. More specifically, we introduce the notion of \emph{indexed} ABS, which is a relaxation of standard ABS. The above two schemes are obtained by instantiating the indexed ABS with different constructions. One is a direct construction we propose and the other is based on previous work.
Ivan Damgård, Kasper Green Larsen, Jesper Buus Nielsen
ePrint Report
We prove a lower bound on the communication complexity of unconditionally secure multiparty computation, both in the standard model with $n=2t+1$ parties of which $t$ are corrupted, and in the preprocessing model with $n=t+1$. In both cases, we show that for any $g \in \mathbb{N}$ there exists a Boolean circuit $C$ with $g$ gates, where any secure protocol implementing $C$ must communicate $\Omega(n g)$ bits, even if only passive and statistical security is required. The results easily extend to constructing similar circuits over any fixed finite field. This shows that for all sizes of circuits, the $O(n)$ overhead of all known protocols when $t$ is maximal is inherent. It also shows that security comes at a price: namely, the circuit we consider could be computed among $n$ parties with only $O(g)$ bits of communication if no security was required. Our results extend to the case where the threshold $t$ is suboptimal. For the honest majority case, this shows that the known optimizations via packed secret-sharing can only be obtained if one accepts that the threshold is $t= (1/2 - c)n$ for a constant $c$. For the honest majority case, we also show an upper bound that matches the lower bound up to a constant factor (existing upper bounds are a factor $\log n$ off for Boolean circuits).
Tom Close
ePrint Report
State channels are an important technique for scaling blockchains, allowing a fixed set of participants to jointly run an application in order to determine how a set of assets should be distributed between them. In this paper, we present a new protocol for constructing state channel networks, allowing state channels to be opened and closed without on-chain transactions and decreasing the number of deposits that need to be held. The protocol readily extends to $n$-party channels and we include the construction of a 3-party virtual channel.
Akshay Degwekar, Vinod Vaikuntanathan
ePrint Report
We continue the study of statistical/computational tradeoffs in learning robust classifiers, following the recent work of Bubeck, Lee, Price and Razenshteyn who showed examples of classification tasks where (a) an efficient robust classifier exists, *in the small-perturbation regime*; (b) a non-robust classifier can be learned efficiently; but (c) it is computationally hard to learn a robust classifier, assuming the hardness of factoring large numbers. Indeed, the question of whether a robust classifier for their task exists in the large perturbation regime seems related to important open questions in computational number theory.

In this work, we extend their work in three directions.

First, we demonstrate classification tasks where computationally efficient robust classification is impossible, even when computationally unbounded robust classifiers exist. We rely on the hardness of decoding problems with preprocessing on codes and lattices.

Second, we show hard-to-robustly-learn classification tasks *in the large-perturbation regime*. Namely, we show that even though an efficient classifier that is very robust (namely, tolerant to large perturbations) exists, it is computationally hard to learn any non-trivial robust classifier. Our first task relies on the existence of one-way functions, a minimal assumption in cryptography, and the second on the hardness of the learning parity with noise problem. In the latter setting, not only does a non-robust classifier exist, but also an efficient algorithm that generates fresh new labeled samples given access to polynomially many training examples (termed generation by Kearns et al. (1994)).

Third, we show that any such counterexample implies the existence of cryptographic primitives such as one-way functions or even forms of public-key encryption. This leads us to a win-win scenario: either we can quickly learn an efficient robust classifier, or we can construct new instances of popular and useful cryptographic primitives.
Guillermo Sosa Gómez, Octavio Paez Osuna
ePrint Report
In 2005, Philippe Guillot [2] presented a new construction of Boolean functions using linear codes as an extension of Maiorana-McFarland's construction of bent functions. In this paper, we study a new family of Boolean functions with cryptographically strong properties such as nonlinearity, propagation criterion, resiliency and balance. The construction of cryptographically strong Boolean functions is a daunting task; there is currently a wide range of algebraic techniques and heuristics for constructing such functions, but these methods can be complex, computationally difficult to implement, and do not always produce a sufficient variety of functions. We present in this paper a construction of Boolean functions using algebraic codes following Guillot's work.
Arka Rai Choudhuri, Vipul Goyal, Abhishek Jain
ePrint Report
We study the problem of constructing secure multiparty computation (MPC) protocols in the standard broadcast communication model from {\em minimal} assumptions. We focus on security in the plain model against malicious adversaries who may corrupt any majority of parties. In this setting, we first identify $k$-round ``bidirectional'' oblivious transfer (OT) as the minimal assumption for $k$-round MPC. In a bidirectional OT, each round consists of messages from both the OT sender and receiver (as opposed to alternating-message OT, where a round consists of a message from only one of the parties).

Since four rounds are necessary for MPC, we next investigate the possibility of constructing $k$-round MPC for every $k\geq 4$ from minimal assumptions. We provide a nearly full resolution:

- We construct four round MPC based on any four round bidirectional OT and injective one-way functions.

- For any $k>4$, we construct $k$-round MPC based on any $k$-round bidirectional OT.

Our second result is optimal and the first result is nearly optimal. Previously, four round MPC protocols were only known based on stronger assumptions, and five or more round MPC protocols were known based on alternating-message OT.
Alice Pellet-Mary, Guillaume Hanrot, Damien Stehlé
ePrint Report
We describe an algorithm to solve the approximate Shortest Vector Problem for lattices corresponding to ideals of the ring of integers of an arbitrary number field $K$. This algorithm has a pre-processing phase, whose run-time is exponential in $\log |\Delta|$ with $\Delta$ the discriminant of $K$. Importantly, this pre-processing phase depends only on $K$. The pre-processing phase outputs an advice, whose bit-size is no more than the run-time of the query phase. Given this advice, the query phase of the algorithm takes as input any ideal $I$ of the ring of integers, and outputs an element of $I$ which is at most $\exp(\widetilde O((\log |\Delta|)^{\alpha+1}/n))$ times longer than a shortest non-zero element of $I$ (with respect to the Euclidean norm of its canonical embedding). This query phase runs in time and space $\exp(\widetilde O( (\log |\Delta|)^{\max(2/3, 1-2\alpha)}))$ in the classical setting, and $\exp(\widetilde O((\log |\Delta|)^{1-2\alpha}))$ in the quantum setting. The parameter $\alpha$ can be chosen arbitrarily in $[0,1/2]$. Both correctness and cost analyses rely on heuristic assumptions, whose validity is consistent with experiments.

The algorithm builds upon the algorithms from Cramer et al. [EUROCRYPT 2016] and Cramer et al. [EUROCRYPT 2017]. It relies on the framework from Buchmann [Séminaire de théorie des nombres 1990], which allows us to merge them and to extend their applicability from prime-power cyclotomic fields to all number fields. The cost improvements are obtained by allowing precomputations that depend on the field only.
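As a quick substitution into the bounds stated above (the endpoints of the allowed range for $\alpha$): choosing $\alpha = 0$ gives an approximation factor $\exp(\widetilde O(\log |\Delta| / n))$ with a query phase running in time $\exp(\widetilde O(\log |\Delta|))$ both classically and quantumly, while choosing $\alpha = 1/2$ gives an approximation factor $\exp(\widetilde O((\log |\Delta|)^{3/2}/n))$ with a classical query phase in time $\exp(\widetilde O((\log |\Delta|)^{2/3}))$ and a quantum one in time $\exp(\widetilde O(1))$.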
Michele Ciampi, Rafail Ostrovsky
ePrint Report
In this work we continue the study of the round complexity of secure multi-party computation with black-box simulation in the simultaneous broadcast model where all the parties get the output.

In Eurocrypt 2016 Garg et al. show that four rounds are necessary to obtain a secure multi-party computation protocol for any function in the plain model. Many different works have tried to show that, relying on standard assumptions, four rounds are also sufficient for MPC. In Crypto 2017 Ananth et al. and in TCC 2017 Brakerski et al. propose a four-round protocol based on quasi-polynomial time number-theoretic assumptions. In Crypto 2018 the two independent works of Badrinarayanan et al. and Halevi et al. show how to reach the four-round barrier relying on number-theoretic polynomial-time assumptions.

In this work we propose a compiler that takes as input a three-round oblivious transfer protocol, and outputs a four-round MPC protocol. Our compiler is also based on a two-round witness-indistinguishable proof (zap). We also show how to obtain three-round OT assuming sub-exponentially secure trapdoor permutations and zaps. As a corollary we obtain the first four-round MPC protocol that relies on general assumptions. Moreover, given the three-round OT from claw-free trapdoor permutations proposed by Hazay et al. in SCN 2016, we obtain the first four-round MPC protocol that is based on general polynomial-time assumptions.
Mark Zhandry
ePrint Report
We construct deterministic public key encryption secure for any constant number of arbitrarily correlated computationally unpredictable messages. Prior works required either random oracles or non-standard knowledge assumptions. In contrast, our constructions are based on the exponential hardness of DDH, which is plausible in elliptic curve groups. Our central tool is a new trapdoored extremely lossy function, which modifies extremely lossy functions by adding a trapdoor.
Hossein Oraei, Massoud Hadian Dehkordi
ePrint Report
The Winternitz one-time signature (WOTS) scheme, which can be described using a certain number of so-called ``function chains'', plays an important role in the design of both stateless and stateful many-time signature schemes. This work introduces WOTS^GES, a new WOTS-type signature scheme in which the need for computing all of the intermediate values of the chains is eliminated. This significantly reduces the number of operations needed to run the algorithms of WOTS^GES. To achieve this result, we use the concept of ``leveled'' multilinear maps, also referred to as graded encoding schemes. In the context of provable security, we reduce the hardness of the graded discrete-logarithm (GDL) problem to the EU-CMA security of WOTS^GES in the standard model.
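For readers unfamiliar with the ``function chain'' structure mentioned above, the following toy Python sketch shows a plain Winternitz-style chain: each secret chain element is hashed forward a message-dependent number of times, and the verifier completes the chains up to the public key. It deliberately omits the checksum chains and is not the WOTS^GES construction, which replaces hash chains with graded encodings.

```python
# Toy sketch of the classic Winternitz "function chain" idea.
# Checksum chains are omitted, so this is NOT a secure signature scheme.
import hashlib
import os

W = 16        # Winternitz parameter: each chain has W - 1 hash steps
CHAINS = 8    # number of chains; here they sign 8 base-16 digits

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = h(x)
    return x

def key_gen():
    sk = [os.urandom(32) for _ in range(CHAINS)]
    pk = [chain(s, W - 1) for s in sk]    # top of every chain
    return sk, pk

def sign(sk, digits):
    # digits: CHAINS values in [0, W-1] derived from the message digest
    return [chain(sk[i], d) for i, d in enumerate(digits)]

def verify(pk, digits, sig):
    # walk the remaining W - 1 - d steps and compare with the public key
    return all(chain(sig[i], W - 1 - d) == pk[i] for i, d in enumerate(digits))

if __name__ == "__main__":
    sk, pk = key_gen()
    msg_digits = [3, 0, 15, 7, 9, 1, 12, 5]
    assert verify(pk, msg_digits, sign(sk, msg_digits))
```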
Dario Catalano, Mario Di Raimondo, Dario Fiore, Irene Giacomelli
ePrint Report
In this paper we present a new 2-party protocol for secure computation over rings of the form $\mathbb{Z}_{2^k}$. Like many recent efficient MPC protocols supporting a dishonest majority, our protocol consists of a heavier (input-independent) pre-processing phase and a very efficient online stage. Our offline phase is similar to BeDOZa (Bendlin et al. Eurocrypt 2011) but employs Joye-Libert (JL, Eurocrypt 2013) as the underlying homomorphic cryptosystem. JL turns out to be particularly well suited for the ring setting as it naturally supports $\mathbb{Z}_{2^k}$ as the underlying message space. Moreover, it enjoys several additional properties (such as valid-ciphertext verifiability and efficiency) that make it a very good fit for MPC in general. As a main technical contribution we show how to take advantage of all these properties (and of further properties that we introduce in this work, such as a ZK proof of correct multiplication) in order to design a two-party protocol that is efficient, fast and easy to implement in practice. Our solution is particularly well suited for relatively large choices of $k$ (e.g. $k = 128$), but compares favorably with the state-of-the-art solution of SPDZ2k (Cramer et al. Crypto 2018) already for the practically very relevant case of $\mathbb{Z}_{2^{64}}$.
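As a rough illustration of the online stage that such protocols share (not of the JL-based preprocessing, which is the paper's contribution), the toy Python sketch below shows additive secret sharing over $\mathbb{Z}_{2^{64}}$ with local additions and a Beaver-triple multiplication; the triple is handed out by a trusted dealer here purely for demonstration.

```python
# Toy illustration of secret-sharing-based 2PC over Z_{2^k}: values are
# additively shared mod 2^k, additions are local, multiplications consume a
# Beaver triple. The trusted dealer below stands in for the preprocessing.
import secrets

K = 64
MOD = 2 ** K

def share(x):
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD                  # one share per party

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def add(x, y):
    return tuple((a + b) % MOD for a, b in zip(x, y))   # purely local

def beaver_triple():
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return share(a), share(b), share((a * b) % MOD)

def mul(x, y, triple):
    a, b, c = triple
    d = (x[0] - a[0] + x[1] - a[1]) % MOD    # parties open x - a
    e = (y[0] - b[0] + y[1] - b[1]) % MOD    # parties open y - b
    # [xy] = [c] + d*[b] + e*[a] + d*e  (constant added by party 0 only)
    z0 = (c[0] + d * b[0] + e * a[0] + d * e) % MOD
    z1 = (c[1] + d * b[1] + e * a[1]) % MOD
    return z0, z1

if __name__ == "__main__":
    x, y = share(7), share(5)
    assert reconstruct(*add(x, y)) == 12
    assert reconstruct(*mul(x, y, beaver_triple())) == 35
```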
Christof Beierle, Gregor Leander, Amir Moradi, Shahram Rasoolzadeh
ePrint Report
Traditionally, countermeasures against physical attacks are integrated into the implementation of cryptographic primitives after the algorithms have been designed for achieving a certain level of cryptanalytic security. This picture has been changed by the introduction of PICARO, ZORRO, and FIDES, where efficient protection against Side-Channel Analysis (SCA) attacks has been considered in their design. In this work we present the tweakable block cipher CRAFT: the efficient protection of its implementations against Differential Fault Analysis (DFA) attacks has been one of the main design criteria, while we provide strong bounds for its security in the related-tweak model. Considering the area footprint of round-based hardware implementations, CRAFT outperforms the other lightweight ciphers with the same state and key size. This holds not only for unprotected implementations but also when fault-detection facilities, side-channel protection, and their combination are integrated into the implementation. In addition to supporting a 64-bit tweak, CRAFT has the additional property that the circuit realizing the encryption can support the decryption functionality as well with very little area overhead.
Zhenzhen Bao, Jian Guo, San Ling, Yu Sasaki
ePrint Report
In this paper, a platform named PEIGEN is presented to evaluate security, find efficient software/hardware implementations, and generate cryptographic S-boxes. Continuously developed for decades, S-boxes are constantly evolving in terms of the design criteria for both security requirements and software/hardware performance. PEIGEN is aimed to be a platform covering a comprehensive check-list of design criteria of S-boxes appearing in the literature. To do so, the security requirements are first intensively surveyed, existing tools for S-boxes are then comprehensively compared, and finally our platform PEIGEN is presented. The survey part is aimed to be a systematic reference for the theoretical study of S-boxes. The platform is aimed to be an assistant tool for the experimental study and practical use of S-boxes. PEIGEN not only integrates most of the features in existing tools, but is also equipped with functionalities to evaluate new security-related properties and improves the efficiency of the search algorithms for optimized implementations in several aspects. With the help of this powerful platform, many interesting observations are made on the relations between the security notions, as well as on the S-boxes used in existing symmetric-key cryptographic primitives. PEIGEN will become an open platform and welcomes contributions from all parties to help the community facilitate the research and use of S-boxes.
Muzhou Li, Kai Hu, Meiqin Wang
ePrint Report
A statistical saturation attack takes advantage of a set of plaintexts with some bits fixed while the others vary randomly, and then tracks the evolution of a non-uniform plaintext distribution through the cipher. Previous statistical saturation attacks have all been mounted in the single-key setting, and there are no public attack models in the related-key/tweak setting. In this paper, we propose a new cryptanalytic method which can be seen as a related-key/tweak statistical saturation attack by revealing the link between related-key/tweak statistical saturation distinguishers and KDIB (Key Difference Invariant Bias) / TDIB (Tweak Difference Invariant Bias) ones. KDIB cryptanalysis was proposed by Bogdanov \emph{et al.} at ASIACRYPT'13 and utilizes the property that there can exist linear trails such that their biases are deterministically invariant under key difference. This method can be easily extended to TDIB distinguishers if the tweak is also alternated. The link between them provides a new and more efficient way to find related-key/tweak statistical saturation distinguishers in ciphers. Thereafter, an automatic searching algorithm for KDIB/TDIB distinguishers is also given in this paper, which can be implemented to find word-level KDIB distinguishers for S-box based key-alternating ciphers. We apply this algorithm to \texttt{QARMA}-64 and give a related-tweak statistical saturation attack on 10-round \texttt{QARMA}-64 with the outer whitening key. Besides, an 11-round attack on \texttt{QARMA}-128 is also given based on the TDIB technique. Compared with previous public attacks on \texttt{QARMA} including the outer whitening key, all attacks presented in this paper are the best ones in terms of the number of rounds.
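For intuition, the following toy Python sketch shows the data-collection step of a basic (single-key) statistical saturation distinguisher: fix part of the plaintext, randomise the rest, and record the empirical distribution of a few ciphertext bits, to be compared against the uniform distribution. The `toy_cipher` below is a hypothetical stand-in permutation, not QARMA, and no key or tweak difference is involved.

```python
# Toy sketch of statistical-saturation data collection: fix some plaintext
# bits, let the rest vary at random, and record the empirical distribution of
# a few target ciphertext bits. `toy_cipher` is a placeholder 16-bit keyed
# permutation, not QARMA; a real attack would target the actual cipher.
import secrets
from collections import Counter

BLOCK_BITS = 16
FIXED_MASK = 0xF000       # top 4 plaintext bits are held constant
FIXED_VALUE = 0xA000
TARGET_MASK = 0x000F      # observe the low 4 ciphertext bits

def toy_cipher(x: int, key: int) -> int:
    # stand-in keyed permutation on 16-bit blocks (illustrative only)
    for _ in range(4):
        x ^= key
        x = (x * 0x6F35 + 0x1D2B) % (1 << BLOCK_BITS)
        x = ((x << 5) | (x >> (BLOCK_BITS - 5))) % (1 << BLOCK_BITS)
    return x

def collect_distribution(key: int, samples: int) -> Counter:
    dist = Counter()
    for _ in range(samples):
        pt = (secrets.randbits(BLOCK_BITS) & ~FIXED_MASK) | FIXED_VALUE
        dist[toy_cipher(pt, key) & TARGET_MASK] += 1
    return dist

if __name__ == "__main__":
    print(collect_distribution(key=0x2BAD, samples=1 << 12))
```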
Dragos Rotaru, Tim Wood
ePrint Report
There are two main ways of performing computation on private data: one method uses linear secret-sharing, in which additions require no communication and multiplications require two secrets to be broadcast; the other method is known as circuit garbling, in which a circuit is somehow randomised by one set of parties and then evaluated by another. There are different advantages and disadvantages to each method in terms of communication and computation complexity. The main disadvantage of secret-sharing-based computation is that many non-linear operations require many rounds of communication. On the other hand, garbled circuit (GC) solutions require only constant rounds. Mixed protocols aim to leverage the advantages of both methods by switching between the two dynamically.

In this work we present the first mixed protocol secure in the presence of a dishonest majority for any number of parties and an active adversary. We call the resulting mixed arithmetic/Boolean circuit a marbled circuit. Our implementation showed that mixing protocols in this way allows us to evaluate a linear Support Vector Machine with 400 times fewer AND gates than a solution using GC alone, albeit with twice the preprocessing required when using only SPDZ (Damgaard et al., CRYPTO ’12). When evaluating over a WAN network, our online phase is 10 times faster than the plain SPDZ protocol.
James Howe, Ayesha Khalid, Marco Martinoli, Francesco Regazzoni, Elisabeth Oswald
ePrint Report
Lattice-based cryptography is one of the leading candidates for NIST's post-quantum standardisation effort, providing efficient key encapsulation and signature schemes. Most of these schemes base their hardness on variants of LWE, and thus rely heavily on error samplers to provide the necessary uncertainty by obfuscating computations on secret information. Because of this, the error sampler is a clear and obvious target for side-channel analysis, with numerous types of attacks targeting this component to gain secret-key information. In order to bring potential lattice-based cryptographic standards to practical realisation, it is important to protect these modules from past and future fault and side-channel attacks. This paper proposes countermeasures that exploit the distributions expected from these error samplers, that is, either Gaussian or binomial, by using statistical tests to verify the samplers are operating properly. The novel countermeasures are designed to protect against all previous fault attacks on error samplers. We optimize the hardware implementation of the proposed tests to avoid division and square-root calculations; however, the countermeasure we propose is sufficiently generic to be suitable for software as well. We measure the impact of these countermeasures on performance and area consumption on a Xilinx Artix-7 FPGA. Our countermeasures achieve promising performance while incurring minimal overhead.
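The flavour of countermeasure described above can be sketched as follows: compare integer-only statistics of a batch of samples against acceptance bounds precomputed offline, so that no division or square root is evaluated at run time. The sampler parameters and bounds in this Python sketch are illustrative assumptions, not values from the paper.

```python
# Toy run-time sanity check on a centred binomial error sampler: only integer
# additions, multiplications and comparisons are used online; the acceptance
# bounds are precomputed offline (no division or square root at run time).
# Parameter names and bounds are illustrative, not taken from the paper.
import secrets

ETA = 4            # centred binomial parameter: mean 0, variance ETA / 2
N = 1024           # number of samples checked per batch

# Offline-precomputed bounds, roughly expectation +/- a few standard
# deviations: E[sum] = 0, E[sum of squares] = N * ETA / 2 = 2048.
SUM_BOUND = 200
SQ_LOW, SQ_HIGH = 1600, 2600

def cbd_sample() -> int:
    a = bin(secrets.randbits(ETA)).count("1")
    b = bin(secrets.randbits(ETA)).count("1")
    return a - b

def check_batch() -> bool:
    total, total_sq = 0, 0
    for _ in range(N):
        e = cbd_sample()
        total += e
        total_sq += e * e
    return abs(total) < SUM_BOUND and SQ_LOW <= total_sq <= SQ_HIGH

if __name__ == "__main__":
    print("sampler looks healthy:", check_batch())
```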
Barak Shani
ePrint Report
Using the idea behind the recently proposed isogeny- and pairing-based verifiable delay function (VDF) by De Feo, Masson, Petit and Sanso, we construct an isogeny-based VDF without the use of pairings. Our scheme is a hybrid of time-lock puzzles and (trapdoor) verifiable delay functions. We explain how to realise the proposed VDF on elliptic curves with commutative endomorphism ring; however, this construction is not quantum secure. The more interesting, and potentially quantum-secure, non-commutative case is left open.
Barak Shani
ePrint Report
We study the computational hardness of recovering single bits of the private key in the supersingular isogeny Diffie--Hellman (SIDH) key exchange and similar schemes. Our objective is to give a polynomial-time reduction from the problem of computing the private key in SIDH to the problem of computing any of its bits. The parties in the SIDH protocol work over elliptic curve torsion groups of different order $N$. Our results depend on the parity of $N$. Our main result shows that if $N$ is odd, then each of the top and lower $O(\log\log N)$ bits of the private key is as hard to compute, with any noticeable advantage, as the entire key. A similar, but conditional, result holds for each of the middle bits. This condition can be checked, and heuristically holds almost always. The case of even $N$ is a bit more challenging. We give several results, one of which is similar to the result for an odd $N$, under the assumption that one always succeeds in recovering the designated bit. To achieve these results we extend the solution to the chosen-multiplier hidden number problem, for domains of a prime-power order, by studying the Fourier coefficients of single-bit functions over these domains.
Osman Bicer, Alptekin Kupcu
ePrint Report
In this work, we revisit multi-authority attribute-based signatures (MA-ABS), and elaborate on the limitations of current MA-ABS schemes in providing a hard-to-achieve (yet very useful) combination of features, i.e., decentralization, periodic usage limitation, dynamic revocation of users and attributes, reliable threshold traceability, and authority hiding. In contrast to previous work, we do not allow even the authorities to de-anonymize an ABS, and only allow joint tracing by threshold-many tracing authorities. Moreover, in our solution, the authorities cannot sign on behalf of users. In this context, first we define a useful and practical attribute-based signature scheme (versatile ABS or VABS) along with the necessary operations and security games to accomplish our targeted functionalities. Second, we provide the first VABS scheme in a modular design such that any application can utilize a subset of the features endowed by our VABS, while omitting the computation and communication overhead of the features that are not needed. Third, we prove the security of our VABS scheme based on standard assumptions, i.e., Strong RSA, DDH, and SDDHI, in the random oracle model. Fourth, we implement our signature generation and verification algorithms, and show that they are practical (for a VABS with 20 attributes, Sign and Verify times are below 1.2 seconds, and the generated signature size is below 0.5 MB).