International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

22 October 2018

C Ashokkumar, Bholanath Roy, M Bhargav Sri Venkatesh, Bernard L Menezes
ePrint Report
Several successful cache-based attacks have provided strong impetus for developing side-channel-resistant software implementations of AES. One of the best-known countermeasures - use of a "minimalist" 256-byte look-up table - has been employed in the latest (assembly language) versions. Software and hardware prefetching and out-of-order execution in modern processors have served to further shrink the attack surface. Despite these odds, we devise and implement two strategies to retrieve the complete AES key. The first uses adaptively chosen plaintext and random plaintext in a 2-round attack. The second strategy employs only about 50 blocks of random plaintext in a novel single-round attack. The attack can be extended to spying on table accesses during decryption in a ciphertext-only attack. We also present an analytical model to explain the effect of false positives and false negatives and to capture various practical tradeoffs involving the number of blocks of plaintext, the offline computation time for key retrieval, and the success probability.
Sergiu Carpov, Caroline Fontaine, Damien Ligier, Renaud Sirdey
ePrint Report
Functional encryption (FE) is a cryptographic primitive that allows one to partially decrypt ciphertexts, e.g., to evaluate a function over encrypted inputs and obtain the output in the clear. The downside of employing FE schemes is that some details about the input data ``leak''. We call the information leakage of an FE scheme the maximal information one can gain about the input data from the clear-text output of the evaluated function.

FE schemes that are usable in practice support only limited functionalities, in particular linear or quadratic polynomial evaluation. As a first contribution of this work, we describe how to combine a quadratic FE scheme with a classification algorithm in order to perform classification over encrypted data. Compared to the direct use of FE for a linear or polynomial classifier, our method increases classification accuracy and/or decreases the number of FE secret keys used.

As a second contribution, we show how to estimate the information leakage of the classification use case and how to compare it to the ideal information leakage, i.e., the minimal leakage intrinsic to achieving the use-case requirement (e.g., performing a classification task). We introduce a method for estimating both the real and the ideal information leakage based on machine learning techniques, in particular neural networks.

We perform extensive experiments using the MNIST image classification and Census Income datasets. In the case of MNIST, we were able to reconstruct images that are close (in terms of MSE distance as well as visually) to the original images. Knowledge of someone's handwriting style makes it easier to impersonate them, steal their identity, etc. For the second dataset, we were able to increase the accuracy of predicting input features (e.g., an individual's race) from the FE outputs available in the clear. The obtained information leakages represent a major security flaw of FE-based classifiers because they reveal sensitive information about individuals.
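The leakage-estimation idea can be illustrated with a small sketch (not the authors' code; the quadratic forms, the mean shift and all parameters below are assumptions made purely for illustration): train an off-the-shelf model to predict a sensitive input attribute from the clear-text FE outputs, and read its test accuracy as an empirical leakage estimate.

# Illustrative sketch (not the paper's implementation) of leakage estimation:
# an "attacker" model predicts a sensitive attribute from clear FE outputs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_outputs = 5000, 10, 8

# Private inputs whose distribution depends on a sensitive binary attribute.
sensitive = rng.integers(0, 2, size=n_samples)
X = rng.normal(size=(n_samples, n_features)) + 1.5 * sensitive[:, None]

# Stand-in for what a quadratic FE scheme reveals in the clear: a few
# degree-2 polynomials evaluated on the (encrypted) input.
Qs = rng.normal(scale=0.1, size=(n_outputs, n_features, n_features))
bs = rng.normal(size=(n_outputs, n_features))
fe_outputs = np.stack(
    [np.einsum("ij,jk,ik->i", X, Q, X) + X @ b for Q, b in zip(Qs, bs)], axis=1)

# The attacker sees only the clear FE outputs; accuracy above the 50%
# baseline is an empirical estimate of the information leakage.
X_tr, X_te, y_tr, y_te = train_test_split(fe_outputs, sensitive, random_state=0)
attacker = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
attacker.fit(X_tr, y_tr)
print("attacker accuracy:", attacker.score(X_te, y_te))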
Nuttapong Attrapadung, Takahiro Matsuda, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
ePrint Report
We present a construction of an adaptively single-key secure constrained PRF (CPRF) for $\mathbf{NC}^1$ assuming the existence of indistinguishability obfuscation (IO) and the subgroup hiding assumption over a (pairing-free) composite order group. This is the first construction of such a CPRF in the standard model without relying on a complexity leveraging argument.

To achieve this, we first introduce the notion of a partitionable CPRF, which is a CPRF augmented with partitioning techniques, and combine it with the shadow-copy technique often used in the dual system encryption methodology. We present a construction of a partitionable CPRF for $\mathbf{NC}^1$ based on IO and the subgroup hiding assumption over a (pairing-free) composite order group. We finally prove that an adaptively single-key secure CPRF for $\mathbf{NC}^1$ can be obtained from a partitionable CPRF for $\mathbf{NC}^1$ and IO.
Ximing Fu, Xiaoyun Wang, Xiaoyang Dong, Willi Meier, Yonglin Hao, Boxin Zhao
ePrint Report
At CRYPTO 2018, we proposed a method to reduce the Boolean polynomial of 855-round Trivium. By multiplying by a polynomial reduction factor, the output Boolean polynomial is simplified. Based on this method, an 855-round key-recovery attack on Trivium was introduced. In addition, we also gave a practical attack on 721-round Trivium as evidence for the soundness of the method.

However, Yonglin Hao et al. recently found some errors in the 721-round attack. As a correction, we propose new, correct 721-round example attacks based on the method we proposed at CRYPTO 2018.
Chen Li
ePrint Report
For years, researchers have been searching for new cryptographic schemes with high security and efficiency that can resist attacks from quantum computers. Lattice-based cryptography is believed to be a promising candidate, but achieving both high efficiency and high security is not easy. Recently, several lattice-based schemes with sufficient efficiency have been proposed and submitted to the post-quantum cryptography standardization project initiated by NIST. Streamlined NTRU Prime is one of them. Based on a new ``strong'' ring and applying the ``modern key encapsulation mechanism'' approach, Streamlined NTRU Prime aims to provide IND-CCA security.

In this paper, however, we identify a simple property of the new ``strong'' ring. Using this property and taking advantage of information leakage from the decapsulation feedback, we provide an efficient key recovery attack on Streamlined NTRU Prime. Our attack not only breaks most instances of Streamlined NTRU Prime, but also provides evidence that converting a public key encryption scheme into a key encapsulation mechanism does not automatically provide higher security.
Leonid Reyzin, Adam Smith, Sophia Yakoubov
ePrint Report
We explore large-scale fault-tolerant multiparty computation on a minimal communication graph. Our goal is to be able to privately aggregate data from thousands of users - for example, in order to obtain usage statistics from users' phones. To reflect typical phone deployments, we limit communication to the star graph (so that all users only talk to a single central server). To provide fault-tolerance, we require the computation to complete even if some users drop out mid-computation, which is inevitable if the computing devices are personally owned smartphones. Variants of this setting have been considered for the problem of secure aggregation by Chan et al. (Financial Cryptography 2012) and Bonawitz et al. (CCS 2017). We call this setting Large-scale One-server Vanishing-participants Efficient MPC (LOVE MPC).
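As background for this setting, the following toy sketch shows the pairwise-masking idea behind single-server secure aggregation in the spirit of Bonawitz et al. (CCS 2017); it is not one of the LOVE MPC protocols of this paper and it omits dropout handling entirely.

# Toy sketch of pairwise masking for single-server aggregation (illustration
# only, not the LOVE MPC protocols).  The server sees only masked values,
# yet the masks cancel in the sum.
import secrets

Q = 2**61 - 1          # arithmetic modulus (illustrative choice)
N_USERS = 5

def pairwise_masks(n_users, seed_table):
    """Each pair (i, j) shares a mask, added by the smaller index and
    subtracted by the larger one, so all masks cancel in the sum."""
    masks = [0] * n_users
    for i in range(n_users):
        for j in range(i + 1, n_users):
            m = seed_table[(i, j)]
            masks[i] = (masks[i] + m) % Q
            masks[j] = (masks[j] - m) % Q
    return masks

# Pairwise shared secrets (in practice derived via key agreement over the PKI).
seeds = {(i, j): secrets.randbelow(Q)
         for i in range(N_USERS) for j in range(i + 1, N_USERS)}

inputs = [secrets.randbelow(1000) for _ in range(N_USERS)]   # users' private values
masks = pairwise_masks(N_USERS, seeds)
uploads = [(x + m) % Q for x, m in zip(inputs, masks)]       # what each user sends

assert sum(uploads) % Q == sum(inputs) % Q
print("aggregate:", sum(uploads) % Q)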

We show that LOVE MPC requires at least three message flows, and that a three-message protocol requires some setup (such as a PKI). We then build LOVE MPC with optimal round and communication complexity (assuming semi-honest participants and a deployed PKI), using homomorphic ad hoc threshold encryption (HATE). We build the first HATE scheme with constant-size ciphertexts (although the public key length is linear in the number of users). Unfortunately, this construction is merely a feasibility result, because it relies on differing-inputs obfuscation.

We also construct more practical three- and five-message LOVE MPC in the PKI model for addition or multiplication. Unlike in the obfuscation-based construction, the per-user message length in these protocols is linear in the number of users. However, the five-message protocol still has constant amortized message length, because only the first two messages are long, but they need to be exchanged only once (i.e., they are input-independent and reusable) and thus can be viewed as setup.
Thomas Debris-Alazard, Nicolas Sendrier, Jean-Pierre Tillich
ePrint Report
We present Wave, the first ``hash-and-sign'' code-based signature scheme that strictly follows the GPV strategy [GPV08]. It uses the family of ternary generalized $(U,U+V)$ codes. We prove that Wave achieves {\em existential unforgeability under adaptive chosen message attacks} (EUF-CMA) in the random oracle model (ROM) with a tight reduction to two assumptions from coding theory: one is a distinguishing problem related to the trapdoor we insert in our scheme, the other is DOOM, a multiple-target version of syndrome decoding. The algorithm produces uniformly distributed signatures through suitable rejection sampling. Our scheme enjoys efficient signature and verification algorithms. For 128 bits of classical security, signatures are $8$ thousand bits long and the public key size is slightly smaller than one megabyte. Furthermore, with our current choice of parameters, the rejection rate is limited to one rejection every 3 or 4 signatures.
Shuai Zhou, Haiyang Xue, Daode Zhang, Kunpeng Wang, Xianhui Lu, Bao Li, Jingnan He
ePrint Report
The Number Theoretic Transform (NTT) provides an efficient algorithm for multiplying polynomials of large degree. It is commonly used in cryptographic schemes based on the hardness of the Ring Learning With Errors (RLWE) problem, which is a popular basis for post-quantum key exchange, encryption and digital signatures. To apply the NTT, the modulus q should satisfy q ≡ 1 mod 2n, so RLWE-based schemes have to choose an oversized modulus, which leads to excessive bandwidth. In this work, we present the “Preprocess-then-NTT (Pt-NTT)” technique, which weakens this limitation on the modulus q, i.e., we only require q ≡ 1 mod n or q ≡ 1 mod n/2. Based on this technique, we provide new parameter settings for KYBER and NEWHOPE (two NIST candidates). In these new schemes, we can reduce the public key size and ciphertext size at the cost of very little efficiency loss.
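For context, the classical requirement that the paper relaxes can be checked with a short sketch (the moduli below are illustrative examples, not necessarily the schemes' parameters): a negacyclic NTT over Z_q[x]/(x^n + 1) needs a primitive 2n-th root of unity modulo q, which exists exactly when q ≡ 1 mod 2n.

# Sketch of the classical NTT condition: a negacyclic NTT needs a primitive
# 2n-th root of unity mod a prime q, which exists iff 2n divides q - 1.
def has_2nth_root_of_unity(q, n):
    return (q - 1) % (2 * n) == 0

def find_primitive_root_of_unity(q, order):
    """Find an element of exact multiplicative order `order` (a power of two)
    modulo the prime q."""
    for g in range(2, q):
        cand = pow(g, (q - 1) // order, q)
        if pow(cand, order // 2, q) != 1:   # exact order, not a proper divisor
            return cand
    raise ValueError("no primitive root found")

# Illustrative moduli: the first two satisfy q ≡ 1 (mod 2n), the last only
# q ≡ 1 (mod n), which is the kind of relaxation Pt-NTT targets.
for q, n in [(7681, 256), (12289, 1024), (3329, 256)]:
    ok = has_2nth_root_of_unity(q, n)
    print(f"q={q}, n={n}: q ≡ 1 (mod 2n)? {ok}")
    if ok:
        psi = find_primitive_root_of_unity(q, 2 * n)
        assert pow(psi, n, q) == q - 1      # psi^n = -1, the negacyclic twist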
Long Chen, Qiang Tang
ePrint Report
Message franking enables a receiver to report potential abuse in a secure messaging system that employs end-to-end encryption. Such a mechanism is crucial for accountability and is already widely adopted in real-world products such as Facebook Messenger. Grubbs et al. initiated a systematic study of this new primitive, and Dodis et al. gave a more efficient construction.

We observe that in all existing message franking schemes, the receiver has to reveal the whole communication of a session in order to report a single abuse. This is highly undesirable in many settings where revealing the non-abusive parts of the communication leaks too much information; what is worse, a cunning adversary may intentionally mix private information of the receiver with the abusive message so that the receiver is reluctant to report it. This essentially renders the abuse reporting mechanism ineffective.

To tackle this problem, we propose a new primitive called targeted opening compactly committing AEAD (TOCE for short). In a TOCE scheme, the receiver can select an arbitrary subset of bits of the plaintext to reveal during opening, while all the rest remains as secure as in an authenticated encryption scheme. We give a careful formulation and a generic construction. The generic construction, which allows bit-level opening, may require a substantial number of passes of symmetric-key ciphers when encrypting a large message such as a picture. We therefore also give a more efficient non-black-box construction allowing block-level (e.g., 256-bit) opening. We further propose a privacy-efficiency trade-off in which the security of non-opened blocks is relaxed to one-wayness (they remain semantically secure if no opening occurs).
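The flavor of block-level targeted opening can be conveyed with a toy sketch based on per-block commitments; this is only an illustration of selective opening under those assumptions and not the TOCE construction of the paper.

# Toy illustration of block-level selective opening (NOT the paper's TOCE
# construction): commit to each plaintext block separately so any subset of
# blocks can later be opened while the rest stay hidden.
import hashlib
import secrets

BLOCK = 32  # 256-bit blocks

def commit_blocks(plaintext: bytes):
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    openings = [(blk, secrets.token_bytes(32)) for blk in blocks]   # (block, randomness)
    commitments = [hashlib.sha256(r + blk).digest() for blk, r in openings]
    return commitments, openings

def verify_opening(commitments, index, block, randomness):
    return hashlib.sha256(randomness + block).digest() == commitments[index]

msg = b"private chat line 1 ........... abusive content here ..........."
coms, opens = commit_blocks(msg)

# The receiver reports only block 1; unopened blocks remain hidden
# (hiding rests on the per-block randomness in this toy commitment).
blk, rnd = opens[1]
assert verify_opening(coms, 1, blk, rnd)
print("opened block:", blk)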
Viet Tung Hoang, Stefano Tessaro, Aishwarya Thiruvengadam
ePrint Report
Multi-user (mu) security considers large-scale attackers (e.g., state actors) that, given access to a number of sessions, attempt to compromise {\em at least} one of them. Mu security of authenticated encryption (AE) was explicitly considered in the development of TLS 1.3.

This paper revisits the mu security of GCM, which remains to date the most widely used dedicated AE mode. We provide new concrete security bounds which improve upon previous work by adopting a refined parameterization of adversarial resources that highlights the impact on security of (1) nonce re-use across users and of (2) re-keying.

As one of the main applications, we analyze the nonce-randomization mechanism adopted in the record protocol of TLS 1.3 as a mitigation against large-scale multi-user attacks, and we provide tight security bounds that yield the first validation of this method. In particular, we solve the main open question of Bellare and Tackmann (CRYPTO '16), who only considered restricted attackers that do not attempt to violate integrity, and only gave non-tight bounds.
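For reference, the nonce-randomization mechanism in question derives each record nonce by XORing the record sequence number into a random per-connection IV (RFC 8446, Section 5.3); a minimal sketch with placeholder values follows.

# Minimal sketch of TLS 1.3-style nonce randomization: the per-record nonce
# is the record sequence number XORed into a secret random per-connection IV.
# The IV below is an illustrative placeholder, not derived from a handshake.
import secrets

IV_LEN = 12  # 96-bit nonce for AES-GCM in TLS 1.3

write_iv = secrets.token_bytes(IV_LEN)   # random, secret, fixed per connection

def record_nonce(write_iv: bytes, seq_num: int) -> bytes:
    padded_seq = seq_num.to_bytes(IV_LEN, "big")        # left-pad to IV length
    return bytes(a ^ b for a, b in zip(write_iv, padded_seq))

# Each record gets a distinct nonce, and the random IV hides the actual nonce
# values from an attacker mounting a large-scale multi-user attack.
for seq in range(3):
    print(seq, record_nonce(write_iv, seq).hex())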
Vivek Bagaria, Sreeram Kannan, David Tse, Giulia Fanti, Pramod Viswanath
ePrint Report
Transaction throughput, confirmation latency and confirmation reliability are fundamental performance measures of any blockchain system, in addition to its security. In a decentralized setting, these measures are limited by two underlying physical network attributes: communication capacity and speed-of-light propagation delay. Existing systems operate far away from these physical limits. In this work we introduce Prism, a new proof-of-work blockchain protocol, which can achieve 1) security against up to 50% adversarial hashing power; 2) optimal throughput up to the capacity C of the network; 3) confirmation latency for honest transactions proportional to the propagation delay D, with confirmation error probability exponentially small in CD; 4) eventual total ordering of all transactions. Our approach to the design of this protocol is based on deconstructing the blockchain into its basic functionalities and systematically scaling up these functionalities to approach their physical limits.
Francesco Berti, Olivier Pereira, Thomas Peters
ePrint Report
Authenticated Encryption ($\mathsf{AE}$) achieves confidentiality and authenticity, the two most fundamental goals of cryptography, in a single scheme. A common strategy to obtain $\mathsf{AE}$ is to combine a Message Authentication Code ($\mathsf{MAC}$) and an encryption scheme, either nonce-based or $\mathsf{iv}$-based. Out of the 180 possible combinations, Namprempre et al.~[25] proved that 12 were secure, 164 insecure, and 4 were left unresolved: A10, A11 and A12, which use an $\mathsf{iv}$-based encryption scheme, and N4, which uses a nonce-based one. The question of the security of these composition modes is particularly intriguing as N4, A11, and A12 are more efficient than the 12 composition modes that are known to be provably secure.

We prove that: $(i)$ N4 is not secure in general, $(ii)$ A10, A11 and A12 have equivalent security, $(iii)$ A10, A11, A12 and N4 are secure if the underlying encryption scheme is either misuse-resistant or ``message malleable'', a property that is satisfied by many classical encryption modes, $(iv)$ A10, A11 and A12 are insecure if the underlying encryption scheme is stateful or untidy.

All the results are quantitative.
Prastudy Fauzi, Sarah Meiklejohn, Rebekah Mercer, Claudio Orlandi
ePrint Report
Despite their usage of pseudonyms rather than persistent identifiers, most existing cryptocurrencies do not provide users with any meaningful levels of privacy. This has prompted the creation of privacy-enhanced cryptocurrencies such as Monero and Zcash, which are specifically designed to counteract the tracking analysis possible in currencies like Bitcoin. These cryptocurrencies, however, also suffer from some drawbacks: in both Monero and Zcash, the set of potential unspent coins is always growing, which means users cannot store a concise representation of the blockchain. In Zcash, furthermore, users cannot deny their participation in anonymous transactions.

In this paper, we address both of these limitations. By combining a technique we call updatable keys with efficient zero-knowledge arguments, we propose a new cryptocurrency, QuisQuis, that achieves provably secure notions of anonymity while still allowing users to deny participation and store a relatively small amount of data.

19 October 2018

Public comments on draft report through Oct 22
Announcement
NIST is considering the standardization of threshold schemes for cryptographic primitives. Your feedback and participation are welcome. For questions or comments, contact threshold-crypto (at) nist.gov

18 October 2018

Bohdan Kovalenko, Anton Kudin
ePrint Report
Context. Methods used in known kleptography implementations are investigated. The article focuses mostly on the SETUP design of subliminal data leakage channels.

Aim. To suggest approaches for developing SETUP-resistant cryptosystems.

Methods. A necessary condition for implementing a SETUP is a built-in entropy source (otherwise the generated secret would be predictable). In this article, we consider a subscriber whose protocol implementation is suspected to have been modified by the Developer (a malicious actor able to influence the cryptosystem implementation) to create a subliminal leakage channel. A possible countermeasure is to prohibit subscribers from using their own randomness sources and instead require random values to be derived from public counters.
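A minimal sketch of the counter-based idea (an illustration, not the protocols proposed in the paper): derive each "random" value deterministically from a public counter and the session context, so a modified implementation has no covert entropy in which to hide a subliminal channel.

# Illustrative sketch: nonces derived from a public counter and session id,
# so every party can recompute and verify them.  Names and parameters are
# assumptions for illustration only.
import hashlib

def counter_nonce(public_counter: int, session_id: bytes, label: bytes = b"nonce") -> bytes:
    data = label + session_id + public_counter.to_bytes(8, "big")
    return hashlib.sha256(data).digest()

# Anyone who observes the public counter and session id can verify the nonce,
# leaving no room for the implementation to leak secrets through it.
sid = b"session-42"
for ctr in (1, 2, 3):
    print(ctr, counter_nonce(ctr, sid).hex()[:16])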

Results. A formal model for the basic SETUP scheme is suggested. An approach for developing SETUP-resistant protocols is described. Two basic SETUP-resistant protocols (a nonce generation protocol and a Diffie-Hellman key agreement protocol) are proposed.
Daniele Micciancio
ePrint Report
We give a simple proof that the decisional Learning With Errors (LWE) problem with binary secrets (and an arbitrary polynomial number of samples) is at least as hard as the standard LWE problem (with unrestricted, uniformly random secrets, and a bounded, quasi-linear number of samples). This proves that the binary-secret LWE distribution is pseudorandom, under standard worst-case complexity assumptions on lattice problems. Our results are similar to those proved by (Brakerski, Langlois, Peikert, Regev and Stehle, STOC 2013), but provide a shorter, more direct proof, and a small improvement in the noise growth of the reduction.
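For reference, the two distributions compared in the binary-secret decisional problem can be written out explicitly (standard definitions, paraphrased rather than quoted from the paper): for a fixed secret $s \in \{0,1\}^n$ and error distribution $\chi$, distinguish

\[ (a,\ \langle a, s\rangle + e \bmod q), \qquad a \leftarrow \mathbb{Z}_q^n,\ e \leftarrow \chi \]

from uniform pairs

\[ (a,\ u), \qquad a \leftarrow \mathbb{Z}_q^n,\ u \leftarrow \mathbb{Z}_q. \]

Standard decisional LWE is the same game with an unrestricted secret $s \leftarrow \mathbb{Z}_q^n$.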
Yehuda Lindell, Ariel Nof, Samuel Ranellucci
ePrint Report
ECDSA is a standardized signing algorithm that is widely used in TLS, code signing, cryptocurrency and more. Due to its importance, the problem of securely computing ECDSA in a distributed manner (known as threshold signing) has received considerable interest. However, despite this interest, there is still no full threshold solution for more than 2 parties (meaning that any $t$-out-of-$n$ parties can sign, security is preserved for any $t-1$ or fewer corrupted parties, and $t\leq n$ can be any value thus supporting an honest minority) that has practical key distribution. This is due to the fact that all previous solutions for this utilize Paillier homomorphic encryption, and efficient distributed Paillier key generation for more than two parties is not known.
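To make the access structure concrete, the following generic sketch shows t-out-of-n Shamir sharing of a signing key; it only illustrates the threshold notion and is not the paper's protocol, which signs without ever reconstructing the key in one place.

# Generic t-out-of-n Shamir sharing of a secret key (illustration of the
# "full threshold" access structure; NOT the threshold ECDSA protocol).
import secrets

# Order of the secp256k1 group (the curve used by ECDSA in most cryptocurrencies).
Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, Q) for k, c in enumerate(coeffs)) % Q)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % Q
                den = den * (xi - xj) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

sk = secrets.randbelow(Q)
shares = share(sk, t=3, n=5)
assert reconstruct(shares[:3]) == sk       # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == sk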

In this paper, we present the first truly practical full threshold ECDSA signing protocol that has both fast signing and fast key distribution. This solves a years-old open problem, and opens the door to practical uses of threshold ECDSA signing that are in demand today. One of these applications is the construction of secure cryptocurrency wallets (where key shares are spread over multiple devices and so are hard to steal) and cryptocurrency custody solutions (where large sums of invested cryptocurrency are strongly protected by splitting the key between a bank/financial institution, the customer who owns the currency, and possibly a third-party trustee, with multiple shares at each). There is growing practical interest in such solutions, but prior to our work they could not be deployed due to the need for distributed key generation.
Sam Kim, David J. Wu
ePrint Report
A software watermarking scheme enables one to embed a "mark" (i.e., a message) within a program while preserving the program's functionality. Moreover, there is an extraction algorithm that recovers an embedded message from a program. The main security goal is that it should be difficult to remove the watermark without destroying the functionality of the program. Existing constructions of watermarking focus on watermarking cryptographic functions like pseudorandom functions (PRFs); even in this setting, realizing watermarking from standard assumptions remains difficult. The first lattice-based construction of secret-key watermarking due to Kim and Wu (CRYPTO 2017) only ensures mark-unremovability against an adversary who does not have access to the mark-extraction oracle. The construction of Quach et al. (TCC 2018) achieves the stronger notion of mark-unremovability even if the adversary can make extraction queries, but has the drawback that the watermarking authority (who holds the watermarking secret key) can break pseudorandomness of all PRF keys in the family (including unmarked keys).

In this work, we construct new lattice-based secret-key watermarking schemes for PRFs that both provide unremovability against adversaries that have access to the mark-extraction oracle and offer a strong and meaningful notion of pseudorandomness even against the watermarking authority (i.e., the outputs of unmarked keys are pseudorandom almost everywhere). Moreover, security of several of our schemes can be based on the hardness of computing quasi-polynomial approximations to worst-case lattice problems. This is a qualitatively weaker assumption than that needed for existing lattice-based constructions of watermarking (that support message-embedding), all of which require sub-exponential approximation factors. Our constructions rely on a new cryptographic primitive called an extractable PRF, which is of independent interest.
Kaushik Nath, Palash Sarkar
ePrint Report
Efficient scalar multiplication algorithms require a single finite field inversion at the end to convert from projective to affine coordinates. This inversion consumes a significant proportion of the total time. The present work makes a comprehensive study of inversion over Mersenne and pseudo-Mersenne prime order fields. Inversion algorithms for such primes are based on exponentiation, which in turn requires efficient algorithms for multiplication, squaring and modulo reduction. From a theoretical point of view, we present a number of algorithms for multiplication/squaring and reduction leading to a number of different inversion algorithms which are appropriate for different settings. Our algorithms collect together and generalise ideas which are scattered across various papers and codes. At the same time, they also introduce new ideas to improve upon existing works. A key theoretical feature of our work, which is not present in previous works, is that we provide formal statements and detailed proofs of correctness of the different reduction algorithms that we describe. On the implementation side, a total of twenty primes are considered, covering all previously proposed cryptographically relevant (pseudo-)Mersenne prime order fields at various security levels. For each of these fields, we provide 64-bit assembly implementations of all the relevant inversion algorithms for a wide range of Intel processors. We were able to find previous 64-bit implementations of inversion for six of the twenty primes considered in this work. On the Haswell, Skylake and Kabylake processors of Intel, for all the six primes where previous implementations are available, our implementations outperform such previous implementations. The assembly codes that we have developed are publicly available and can be used as a plug-in to replace the inversion routines in existing software for scalar multiplication.
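The exponentiation-based approach to inversion can be illustrated with a short sketch using Fermat's little theorem over a familiar pseudo-Mersenne prime; the paper's contribution lies in the fast multiplication, squaring and reduction routines and the assembly implementations, none of which are shown here.

# Field inversion via Fermat's little theorem: for a prime p and a != 0 mod p,
# a^(p-2) = a^(-1) (mod p).  The fixed exponent is what allows an optimized
# addition chain of multiplications and squarings in real implementations.
p = 2**255 - 19          # pseudo-Mersenne prime used by Curve25519

def invert(a: int) -> int:
    assert a % p != 0
    return pow(a, p - 2, p)

a = 123456789
assert (a * invert(a)) % p == 1
print(hex(invert(a)))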
Maciej Skorski
ePrint Report
The recent progress in key derivation (Barak et al., CRYPTO'11; Dodis and Yu, TCC 2013) introduced the concept of constrained profiles for the attacker's advantage, recognizing that security bounds can be significantly improved (alternatively: lots of randomness can be saved) when the advantage, as a function of the key, is bounded in mean or variance. This paper studies \emph{minimal requirements for keys} to achieve security under such restricted attackers.

We frame the problem as characterizing \emph{pseudorandomness against constrained distinguishers} and show that the minimal assumptions are, respectively, (a) high smooth min-entropy and (b) high smooth collision entropy. This matches the (folklore extension of the) assumptions of previous works.
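For reference, the entropy notions involved are the standard ones:

\[ H_\infty(X) = -\log \max_x \Pr[X = x], \qquad H_2(X) = -\log \sum_x \Pr[X = x]^2, \]

and the $\epsilon$-smooth variants take the best value over all distributions $\epsilon$-close to $X$ in statistical distance:

\[ H_\alpha^{\epsilon}(X) = \max_{Y:\ \mathrm{SD}(X,Y) \le \epsilon} H_\alpha(Y), \qquad \alpha \in \{2, \infty\}. \]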

Besides providing lower bounds, we offer more insight into this key derivation problem and elegant proof techniques of a geometric flavor.