International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

11 February 2018

Jung Hee Cheon, Kyoohyung Han, Andrey Kim, Miran Kim, Yongsoo Song
ePrint Report
This paper extends the leveled homomorphic encryption scheme for approximate arithmetic of Cheon et al. (ASIACRYPT 2017) to a fully homomorphic encryption scheme, i.e., we propose a new technique to refresh low-level ciphertexts based on Gentry's bootstrapping procedure.

The modular reduction operation is the main bottleneck in the homomorphic evaluation of the decryption circuit. We exploit a scaled sine function as an approximation of the modular reduction operation and present an efficient evaluation strategy. Our method requires only one homomorphic multiplication per iteration, so the total computation cost grows linearly with the depth of the decryption circuit.

We also show how to recrypt packed ciphertexts on the RLWE construction with an open-source implementation. For example, it takes 139.8 seconds to refresh a ciphertext that encrypts 128 numbers with 12 bits of precision, yielding an amortized rate of 1.1 seconds per slot.
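
As a quick numerical illustration of the scaled-sine idea (a sketch in ordinary Python arithmetic, not homomorphic evaluation; the modulus below is an arbitrary toy value): for an input t whose centered residue m = [t]_q is small relative to q, the quantity (q/2*pi)*sin(2*pi*t/q) is close to m.

    import math

    def mod_reduce_exact(t, q):
        # centered modular reduction: representative of t mod q in (-q/2, q/2]
        r = t % q
        return r - q if r > q / 2 else r

    def mod_reduce_sine(t, q):
        # scaled-sine approximation, accurate when |[t]_q| is much smaller than q
        return (q / (2 * math.pi)) * math.sin(2 * math.pi * t / q)

    q = 2 ** 20
    for t in [3 * q + 57, 7 * q - 123, 11 * q + 1024]:
        print(mod_reduce_exact(t, q), round(mod_reduce_sine(t, q), 2))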
Jung-Keun Lee, Bonwook Koo, Woo-Hwan Kim
ePrint Report
We present a general framework for related-key linear attacks that can be applied to a class of block ciphers including key-alternating iterative ones with linear or affine key schedules. In contrast with existing linear attacks, which use linear characteristics with correlations up to around $\pm 2^{-n/2}$, where $n$ is the block length, the proposed attack can use linear characteristics with correlations up to around $\pm 2^{-k/2}$, where $k$ is the key length, by exploiting the related-key linear approximations introduced in this work. Thus the new attack can cover more rounds than current linear attacks when the key length is much larger than the block length. The attack uses key differences as an additional data source and can therefore use more data than the codebook size. We also present a related-key linear attack using multiple independent linear trails that provides a way to estimate the computational complexity, the data complexity, and the success probability of the attack in terms of the capacity of the system of linear approximations. It is based on hypothesis testing with a new decision rule and can be regarded as an alternative to the attack of Biryukov et al. (CRYPTO 2004), which is based on a maximum-likelihood approach and does not estimate the success probability. By applying the framework to SIMON, we obtain various attack results, including the first full-round attacks on SIMON 64/128 and SIMON 128/256. We then justify our claims with experimental results on a small-scale variant of SIMON that are close to the expected ones.
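
For a rough sense of the correlation ranges quoted above in the case of SIMON 64/128 (a back-of-the-envelope sketch using the standard heuristic that a linear attack needs on the order of c^{-2} samples for a characteristic with correlation c; this is not the paper's exact complexity analysis):

    import math

    n, k = 64, 128                              # block and key length of SIMON 64/128
    for setting, c in [("single-key ", 2.0 ** (-n / 2)),
                       ("related-key", 2.0 ** (-k / 2))]:
        data = c ** -2                          # heuristic data requirement ~ c^{-2}
        print(setting, "smallest usable correlation ~ 2^%d," % round(math.log2(c)),
              "data ~ 2^%d" % round(math.log2(data)))

In the single-key setting the usable data is capped by the codebook (2^64 here), whereas the related-key setting can go beyond it, which is what allows the attack to reach more rounds.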
Sanjam Garg, Akshayaram Srinivasan
ePrint Report
We construct an adaptively secure garbling scheme with an online communication complexity of $n+m+\mathsf{poly}(\log |C|, \sec)$ where $C: \{0,1\}^n \rightarrow \{0,1\}^{m}$ is the circuit being garbled, and $\sec$ is the security parameter. The security of our scheme can be based on (the polynomial hardness of) the Computational Diffie-Hellman (CDH) assumption, the Factoring assumption, or the Learning with Errors assumption. This is nearly the best achievable in the standard model (i.e., without random oracles), as the online communication complexity must be larger than both $n$ and $m$. The online computational complexity of our scheme is $O(n+m)+\mathsf{poly}(\log |C|, \sec)$. Previously known standard-model adaptively secure garbling schemes had asymptotically worse online cost or relied on exponentially hard computational assumptions.
Tim Fritzmann, Thomas Pöppelmann, Johanna Sepulveda
ePrint Report
Lattice problems allow the construction of very efficient key exchange and public-key encryption schemes. When using the Learning with Errors (LWE) or Ring-LWE (RLWE) problem, such schemes exhibit an interesting trade-off between decryption error rate and security. The reason is that secret and error distributions with a larger standard deviation lead to better security but also increase the chance of decryption failures. As a consequence, various message/key encoding or reconciliation techniques have been proposed that usually encode one payload bit into several coefficients. In this work, we analyze how error-correcting codes can be used to enhance the error resilience of protocols like NewHope, Frodo, or Kyber. For our case study, we focus on the recently introduced NewHope Simple and propose and analyze four different options for error correction: i) BCH code; ii) combination of BCH code and additive threshold encoding; iii) LDPC code; and iv) combination of BCH and LDPC code. We show that lattice-based cryptography can profit from classical and modern codes by combining BCH and LDPC codes. This way we achieve quasi-error-free communication, increase the estimated bit security against quantum attacks by 20.39%, and decrease the communication overhead by 12.8%.
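
To illustrate the threshold-style encoding that spreads one payload bit over several coefficients (a toy sketch with made-up parameters, not the exact encoder or parameter set of NewHope Simple):

    import random

    q, r = 12289, 4                       # toy modulus and redundancy

    def encode_bit(b):
        # spread one payload bit over r coefficients (0 -> 0, 1 -> q/2)
        return [b * (q // 2)] * r

    def decode_bit(coeffs):
        # threshold decoding: compare total distance to 0 against distance to q/2
        d0 = sum(min(c % q, q - c % q) for c in coeffs)
        d1 = sum(min(abs(c % q - q // 2), q - abs(c % q - q // 2)) for c in coeffs)
        return 0 if d0 < d1 else 1

    random.seed(1)
    noisy = [c + random.randint(-q // 8, q // 8) for c in encode_bit(1)]
    assert decode_bit(noisy) == 1         # survives moderate per-coefficient noise

A dedicated BCH or LDPC code, as studied in the paper, replaces or complements this simple redundancy to push the failure rate much lower.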
Ilan Komargodski, Eylon Yogev
ePrint Report
The random oracle paradigm allows us to analyze the security of protocols and constructions in an idealized model where all parties have access to a truly random function. This is one of the most successful and well-studied models in cryptography. However, being such a strong idealized model, it is known to be susceptible to various weaknesses when implemented naively in ``real life'', as shown by Canetti, Goldreich and Halevi (J. ACM 2004).

As a counter-measure, one could try to identify and implement only one or a few of the properties of a random oracle that are needed for a specific setting. Such a systematic study was initiated by Canetti (CRYPTO 1997), who showed how to implement the property that the output of the function reveals nothing about the input, by constructing a point-function obfuscator. This property turned out to suffice in many follow-up works and applications.

In this work, we tackle another natural property of random oracles and implement it in the standard model. The property we focus on is non-malleability, which guarantees that the output on an input cannot be used to generate the output on any related point. We construct a point obfuscator that is both point-hiding (a la Canetti) {\em and} non-malleable. The cost of our construction is a single exponentiation on top of Canetti's construction, and it can be used in any application where point obfuscators are used, yielding improved security guarantees. The security of our construction relies on variants of the DDH and power-DDH assumptions.

On the technical side, we introduce a new technique for proving security of a construction based on a DDH-like assumption. We call this technique ``double-exponentiation'' and believe it will be useful in the future.
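
For context, Canetti's base point obfuscator hides a point x as a pair (r, r^x) for a random group element r; evaluation on y just checks whether r^y = r^x. A toy sketch of that template (illustrative prime, no real security; the non-malleable variant described above adds a single extra exponentiation on top of it, whose exact form is not reproduced here):

    import secrets

    p = 2 ** 127 - 1                      # toy prime modulus, for illustration only

    def obfuscate(x):
        r = secrets.randbelow(p - 2) + 2  # random base in [2, p-1]
        return (r, pow(r, x, p))

    def evaluate(obf, y):
        r, rx = obf
        return pow(r, y, p) == rx         # true (w.h.p.) exactly when y == x

    O = obfuscate(123456789)
    assert evaluate(O, 123456789) and not evaluate(O, 987654321)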

08 February 2018

Amos Beimel, Eyal Kushilevitz, Pnina Nissim
ePrint Report
We study the efficiency of computing arbitrary k-argument functions in the Private Simultaneous Messages (PSM) model of Feige et al. (STOC'94) and Ishai and Kushilevitz (ISTCS'97). This question was recently studied by Beimel et al. (TCC'14) in the two-party case (k = 2). We tackle this question in the general case of PSM protocols for k > 2 parties. Our motivation is two-fold: On one hand, there are various applications (old and new) of PSM protocols for constructing other cryptographic primitives, where obtaining more efficient PSM protocols implies more efficient primitives. On the other hand, improved PSM protocols are an interesting goal in their own right. In particular, we pay careful attention to the case of a small number of parties (e.g., k = 3, 4, 5), which may be especially interesting in practice, and optimize our protocols for those cases.

Our new upper bounds include a k-party PSM protocol, for any k > 2 and any function f : [N]^k --> {0,1}, of complexity O(poly(k) N^{k/2}) (compared to the previous upper bound of O(poly(k) N^{k-1})), and even better bounds for small values of k; e.g., an O(N) PSM protocol for the case k = 3. We also handle the more involved case where different parties have inputs of different sizes, which is useful both in practice and for applications.

As applications, we obtain more efficient Non-Interactive secure Multi-Party (NIMPC) protocols (a variant of PSM, where some of the parties may collude with the referee (Beimel et al. CRYPTO'14)), improved ad-hoc PSM protocols (another variant of PSM, where the subset of participating parties is not known in advance (Beimel et al. ITCS'16, Beimel et al. EUROCRYPT'17)), secret-sharing schemes for strongly-homogeneous access structures with smaller share size than previously known, and better homogeneous distribution designs (Beimel et al. ITCS'16), a primitive with many cryptographic applications on its own.
Joel Alwen, Jeremiah Blocki, Krzysztof Pietrzak
ePrint Report
Memory-hard functions (MHF) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains.

Alwen and Serbinenko [STOC'15] define the cumulative memory complexity (cmc) of a function as the sum (over all time-steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient, as it fails to capture possible time-memory trade-offs: since memory cost doesn't scale linearly, functions with the same cmc could still have very different actual hardware cost.

In this work we address this problem, and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using $n$ steps and $O(n/\log(n))$ memory, in each step making one query to the (fixed-input-length) random oracle, while any algorithm that can make arbitrarily many parallel queries to the random oracle still needs $\Omega(n/\log(n))$ memory for $\Omega(n)$ steps.
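
The distinction between the two measures can be seen on a toy memory trace (an illustrative sketch, not taken from the paper): two evaluation strategies can have the same cumulative memory complexity while holding a large amount of memory for very different numbers of steps.

    def cmc(trace):
        # cumulative memory complexity: memory used, summed over all time-steps
        return sum(trace)

    def sustained_steps(trace, threshold):
        # number of steps during which at least `threshold` memory is in use
        return sum(1 for m in trace if m >= threshold)

    spiky  = [1, 1, 1, 101, 1, 1, 101, 1]   # memory is high only briefly
    steady = [26] * 8                       # memory is held high throughout

    assert cmc(spiky) == cmc(steady) == 208
    print(sustained_steps(spiky, 25), sustained_steps(steady, 25))   # 2 vs 8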

As has been done for various notions (including cmc) before, we reduce the task of constructing an MHF with high sustained-memory complexity to proving pebbling lower bounds on DAGs. Our main technical contribution is the construction of a family of DAGs on $n$ nodes with constant indegree and high ``sustained-space complexity'', meaning that any parallel black-pebbling strategy requires $\Omega(n/\log(n))$ pebbles for at least $\Omega(n)$ steps.

Along the way we construct a family of maximally ``depth-robust" DAGs with maximum indegree $O(\log n)$, improving upon the construction of Mahmoody et al. [ITCS'13] which had maximum indegree $O\left(\log^2 n \cdot \mathsf{polylog}(\log n)\right)$.
Léo Ducas, Cécile Pierrot
ePrint Report
We propose a concrete family of dense lattices of arbitrary dimension n in which the lattice Bounded Distance Decoding (BDD) problem can be solved in deterministic polynomial time. This construction is directly adapted from the Chor-Rivest cryptosystem (1988). The lattice construction needs discrete logarithm computations that can be made in deterministic polynomial time for well-chosen parameters. Each lattice comes with a deterministic polynomial time decoding algorithm able to decode up to a large radius. Namely, we reach a decoding radius within an O(log n) factor of Minkowski's bound, for both the l1 and l2 norms.
Bin Zhang, Chao Xu, Willi Meier
ePrint Report
Modern stream ciphers often adopt a large internal state to resist various attacks, so cryptanalysts have to deal with a large number of variables when mounting state recovery attacks. In this paper, we propose a general new cryptanalytic method on stream ciphers, called fast near collision attack, to address this situation. It combines a near collision property with the divide-and-conquer strategy so that only subsets of the internal state, associated with different keystream vectors, are recovered first and merged carefully later to retrieve the full large internal state. A self-contained method is introduced and improved to derive the target subset of the internal state from the partial state difference efficiently. As an application, we propose a new key recovery attack on Grain v1, one of the $7$ finalists selected by the eSTREAM project, in the single-key setting. Both the pre-computation and the online phases are tailored to its internal structure, to provide an attack for any fixed IV in $2^{75.7}$ cipher ticks after a pre-computation of $2^{8.1}$ cipher ticks, given $2^{28}$-bit memory and about $2^{19}$ keystream bits. Practical experiments on Grain v1 itself whenever possible and on an 80-bit reduced version confirmed our results.
Benny Applebaum, Thomas Holenstein, Manoj Mishra, Ofer Shayevitz
ePrint Report
Private Simultaneous Message (PSM) protocols were introduced by Feige, Kilian and Naor (STOC '94) as a minimal non-interactive model for information-theoretic three-party secure computation. While it is known that every function $f:\{0,1\}^k\times \{0,1\}^k \rightarrow \{0,1\}$ admits a PSM protocol with exponential communication of $2^{k/2}$ (Beimel et al., TCC '14), the best known (non-explicit) lower-bound is $3k-O(1)$ bits. To prove this lower-bound, FKN identified a set of simple requirements, showed that any function that satisfies these requirements is subject to the $3k-O(1)$ lower-bound, and proved that a random function is likely to satisfy the requirements.

We revisit the FKN lower-bound and prove the following results:

(Counterexample) We construct a function that satisfies the FKN requirements but has a PSM protocol with communication of $2k+O(1)$ bits, revealing a gap in the FKN proof.

(PSM lower-bounds) We show that, by imposing additional requirements, the FKN argument can be fixed leading to a $3k-O(\log k)$ lower-bound for a random function. We also get a similar lower-bound for a function that can be computed by a polynomial-size circuit (or even polynomial-time Turing machine under standard complexity-theoretic assumptions). This yields the first non-trivial lower-bound for an explicit Boolean function partially resolving an open problem of Data, Prabhakaran and Prabhakaran (Crypto '14, IEEE Information Theory '16). We further extend these results to the setting of imperfect PSM protocols which may have small correctness or privacy error.

(CDS lower-bounds) We show that the original FKN argument applies (as is) to some weak form of PSM protocols which are strongly related to the setting of Conditional Disclosure of Secrets (CDS). This connection yields a simple combinatorial criterion for establishing linear $\Omega(k)$-bit CDS lower-bounds. As a corollary, we settle the complexity of the Inner Product predicate resolving an open problem of Gay, Kerenidis, and Wee (Crypto '15).
Shahram Khazaei
ePrint Report
The information ratio, which measures the maximum/average share size per shared bit, is a criterion of efficiency of a secret sharing scheme. It is generally believed that there exists a family of access structures such that the information ratio of any secret sharing scheme realizing it is $2^{\Omega(n)}$, where the parameter $n$ stands for the number of participants. The best known lower bound, due to Csirmaz (1994), is only $\Omega(n/\log n)$. Closing this gap is a long-standing open problem in cryptology.

In this paper, using a technique called substitution, we recursively construct a family of access structures having information ratio $n^{\Omega(\log n/\log \log n)}$, assuming a well-stated information-theoretic conjecture is true. Our conjecture emerges after introducing the notion of convec set for an access structure, a subset of $n$-dimensional real space. We prove several topological properties about convec sets and raise several open problems.
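
To put the quoted bounds side by side (a purely illustrative computation at n = 1000, nothing more): Csirmaz's lower bound n/log n is polynomial, the conditional bound n^{Omega(log n / log log n)} is quasi-polynomial, and the conjectured 2^{Omega(n)} is exponential.

    import math

    n = 1000
    log_n = math.log2(n)
    print("n / log n              ~ %.0f" % (n / log_n))
    print("n^(log n / log log n)  ~ 2^%.0f" % (log_n * log_n / math.log2(log_n)))
    print("2^n                    = 2^%d" % n)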
Rui Zong, Xiaoyang Dong, Xiaoyun Wang
ePrint Report
In this paper, we study the relation of single-key impossible differentials to related-tweakey/key ones and propose an algorithm that can efficiently derive longer related-tweakey/key impossible differentials from single-key ones. Using the MILP technique, the algorithm can be converted into an automatic tool for searching for related-tweakey/key impossible differentials.

We use this automatic tool to analyze QARMA-64 and give an 11-round key recovery attack, which covers one more round than the best previous result. Moreover, we also analyze Joltik-BC-128, the internal tweakable block cipher of the CAESAR candidate authenticated encryption scheme Joltik; our result attacks two more rounds than the result given by the cipher designers.

07 February 2018

Jeju, Rep. Of Korea, 25 October - 28 October 2018
Event Calendar
Event date: 25 October to 28 October 2018
Submission deadline: 17 June 2018
Notification: 3 August 2018
Nanyang Technological University, Singapore
Job Posting
Young and research intensive, Nanyang Technological University (NTU Singapore) is ranked 11th globally. It is also placed 1st amongst the world's best young universities.

The Division of Mathematical Sciences at NTU invites applications for a position at the rank of Assistant Professor in Mathematics. We are looking for expertise in Coding Theory and Cryptography. Candidates should have an outstanding research record as well as extensive teaching experience.

Application Procedure

See http://www1.spms.ntu.edu.sg/~maths/Career/FacultyOpening.html

Applications will be accepted until the position is filled. All applications and materials submitted will be held in strict confidence.

Closing date for applications: 30 April 2018

Contact: Bernhard Schmidt, bernhard (at) ntu.edu.sg

More information: http://www1.spms.ntu.edu.sg/~maths/Career/FacultyOpening.html

Baiyu Li, Daniele Micciancio
ePrint Report
We present the first computationally sound symbolic analysis of Yao's garbled circuit construction for secure two-party computation. Our results include an extension of the symbolic language for cryptographic expressions from previous work on computationally sound symbolic analysis, and a soundness theorem for this extended language. We then demonstrate how the extended language can be used to formally specify not only the garbled circuit construction, but also the formal (symbolic) simulator required by the definition of security. The correctness of the simulation is proved in a purely syntactical way, within the symbolic model of cryptography, and then translated into a concrete computational indistinguishability statement via our general computational soundness theorem. We also implement our symbolic security framework and the garbling scheme in Haskell, and our experiments show that the symbolic analysis performs well and can be completed within several seconds even for large circuits that are useful for real-world applications.
Tomas Fabsic, Viliam Hromada, Pavol Zajac
ePrint Report
We propose a new reaction attack on the public-key cryptosystem LEDApkc. The adversary uses the decoding failure rate (DFR) analysis to learn information about the secret masking matrix $Q$. Provided the adversary learns information about $Q$ within $10^4\times \text{DFR}^{-1}$ decryptions (as prescribed by LEDApkc design to thwart previously known attacks), the adversary builds a small set of candidates for $Q$. Using these candidates, the adversary obtains candidates for a generator matrix of the secret LDPC code. Afterwards, the adversary applies Stern's algorithm to recover the secret matrix $H$, thus recovering the full private key.

Provided the adversary can learn information about the matrix $Q$, the complexity of the attack is below $2^{99}$ for a parameter set for 128-bit security. In order to study whether the adversary can learn information about $Q$ from $10^4\times \text{DFR}^{-1}$ decryptions, we conducted experiments with a modified parameter set. The parameter set was modified only in order to increase the DFR, and thus make experiments less computationally expensive. We show that with the modified parameter set it is indeed possible to learn the required information about the matrix $Q$.
Wen-jie Lu, Jun Sakuma
ePrint Report
Machine learning algorithms are used by more and more online applications to improve their services. Machine learning-based online services are usually accessed by thousands of clients concurrently through a relatively narrow bandwidth, such as a WiFi network or a cell phone network. When applying secure computation to such online services, however, current methods for generating multiplication triplets can take a long time, especially when only a narrow bandwidth is available or large-scale matrices are involved in the computation. In this paper, we present a more practical method for generating multiplication triplets, specialized for additively shared matrices, from homomorphic encryption. With our algorithmic and implementation optimizations, our protocol is faster and consumes less communication traffic than existing methods. Experimental results show that, under a 100~Mbps network, our protocol took about $18.0$ seconds to generate triplets for matrices with more than $2.6\times 10^5$ entries. It was about $20 - 108$ times faster than existing methods. As concrete examples, we applied our protocol to two existing secure computation frameworks for machine learning, i.e., SecureML (S\&P'17) and MiniONN (CCS'17). Experimental results show that our method reduced the triplet generation time of these frameworks by about $74\% - 97\%$ when a narrow bandwidth was used.
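
For background on how such matrix triplets are consumed once generated (a standard Beaver-style sketch over Z_p with toy sizes; the paper's contribution, generating the triple itself from homomorphic encryption, is not shown here):

    import numpy as np

    p = 65537                                       # toy prime modulus
    rng = np.random.default_rng(0)

    def share(M):
        # split M into two additive shares modulo p
        r = rng.integers(0, p, M.shape, dtype=np.int64)
        return r, (M - r) % p

    X = rng.integers(0, 100, (3, 4), dtype=np.int64)     # inputs to be multiplied
    Y = rng.integers(0, 100, (4, 2), dtype=np.int64)
    A = rng.integers(0, p, (3, 4), dtype=np.int64)       # triple: A, B uniform,
    B = rng.integers(0, p, (4, 2), dtype=np.int64)       # C = A @ B
    C = (A @ B) % p

    X0, X1 = share(X); Y0, Y1 = share(Y)
    A0, A1 = share(A); B0, B1 = share(B); C0, C1 = share(C)

    E = (X0 - A0 + X1 - A1) % p                     # parties open E = X - A
    F = (Y0 - B0 + Y1 - B1) % p                     # and F = Y - B

    Z0 = (C0 + E @ B0 + A0 @ F + E @ F) % p         # local shares of X @ Y
    Z1 = (C1 + E @ B1 + A1 @ F) % p
    assert np.array_equal((Z0 + Z1) % p, (X @ Y) % p)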
Christian Badertscher, Juan Garay, Ueli Maurer, Daniel Tschudi, Vassilis Zikas
ePrint Report
An exciting recent line of work has focused on formally investigating the core cryptographic assumptions underlying the security of Bitcoin. In a nutshell, these works conclude that Bitcoin is secure if and only if the majority of the mining power is honest. Despite their great impact, however, these works do not address an incisive question asked by positivists and Bitcoin critics, which is fuelled by the fact that Bitcoin indeed works in reality: Why should the real-world system adhere to these assumptions?

In this work we employ the machinery from the Rational Protocol Design (RPD) framework by Garay et al.(FOCS'13) to analyze Bitcoin and address questions such as the above. We show that under the natural class of incentives for the miners' behavior---i.e., rewarding them for adding blocks to the blockchain but having them pay for mining---we can reserve the honest majority assumption as a fallback, or even, depending on the application, completely replace it by the assumption that the miners aim to maximize their revenue.

Our results underscore the appropriateness of RPD as a ``rational cryptography'' framework for analyzing Bitcoin. Along the way, we devise significant extensions to the original RPD machinery that broaden its applicability to cryptocurrencies, which may be of independent interest.
Pratik Soni, Stefano Tessaro
ePrint Report
We study the complexity of building secure block ciphers in the setting where the key is known to the attacker. In particular, we consider two security notions with useful implications, namely public-seed pseudorandom permutations (or psPRPs, for short) (Soni and Tessaro, EUROCRYPT '17) and correlation-intractable ciphers (Knudsen and Rijmen, ASIACRYPT '07; Mandal, Seurin, and Patarin, TCC '12).

For both these notions, we exhibit constructions which make only two calls to an underlying non-invertible primitive, matching the complexity of building a pseudorandom permutation in the secret-key setting. Our psPRP result instantiates the round functions in the Naor-Reingold (NR) construction with a secure UCE hash function. For correlation intractability, we instead instantiate them from a (public) random function, and replace the pairwise-independent permutations in the NR construction with (almost) $O(k^2)$-wise independent permutations, where $k$ is the arity of the relations for which we want correlation intractability.

Our constructions improve upon the current state of the art, which requires five- and six-round Feistel networks, respectively, to achieve psPRP security and correlation intractability. To do so, we rely on techniques borrowed from Impagliazzo-Rudich-style black-box impossibility proofs for our psPRP result, for which we give what we believe to be the first constructive application, and on techniques for studying randomness with limited independence for correlation intractability.
Priyanka Bose, Viet Tung Hoang, Stefano Tessaro
ePrint Report
This paper revisits the multi-user (mu) security of symmetric encryption, from the perspective of delivering an analysis of the AES-GCM-SIV AEAD scheme. Our end result shows that its mu security is comparable to that achieved in the single-user setting. In particular, even when instantiated with short keys (e.g., 128 bits), the security of AES-GCM-SIV is not impacted by the collisions of two user keys, as long as each individual nonce is not re-used by too many users. Our bounds also improve existing analyses in the single-user setting, in particular when messages of variable lengths are encrypted. We also validate security against a general class of key-derivation methods, including one that halves the complexity of the final proposal.

As an intermediate step, we consider mu security in a setting where the data processed by every user is bounded, and where user keys are generated according to arbitrary, possibly correlated distributions. This viewpoint generalizes the currently adopted one in mu security, and can be used to analyze re-keying practices.