International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 May 2019

Tsz Hon Yuen, Shi-feng Sun, Joseph K. Liu, Man Ho Au, Muhammed F. Esgin, Qingzhao Zhang, Dawu Gu
ePrint Report
In this paper, we propose RingCT3.0, the most capable blockchain ring confidential transaction protocol to date for protecting the privacy of the sender's identity, the recipient's identity, and the confidentiality of the transaction amount. For a typical 2-input transaction with a ring size of 1024, the ring signature size of our RingCT3.0 protocol is 97% smaller than that of the original RingCT1.0 protocol used in Monero. Taking advantage of our compact RingCT3.0 transcript size, privacy-preserving cryptocurrencies can enjoy much lower transaction fees, which will have a significant impact on the crypto-economy. Our implementation results show that our protocol outperforms existing solutions in terms of both efficiency and security.

In addition to the significant improvement in efficiency, our scheme is proven secure in a stronger security model. We remove the trusted setup assumption used in RingCT2.0. Our scheme is anonymous against ring insiders (non-signing users who are included in the ring), and we show that RingCT1.0 is not secure in this strong model.

Our RingCT3.0 protocol relies on our newly designed ring signature scheme as an underlying primitive, which we believe to be the most efficient ring signature scheme to date (in terms of signature size) without trusted setup. Our ring signature scheme is derived from our novel design of an efficient set membership proof for n public keys, with proof size O(log n). It is the first set membership proof without trusted setup for public keys in the base group, rather than in the exponent. These two primitives are of independent interest.

Jiaxin Guan, Mark Zhandry
ePrint Report
The bounded storage model promises unconditional security proofs against computationally unbounded adversaries, so long as the adversary’s space is bounded. In this work, we develop simple new constructions of two-party key agreement, bit commitment, and oblivious transfer in this model. In addition to simplicity, our constructions have several advantages over prior work, including an improved number of rounds and enhanced correctness. Our schemes are based on Raz’s lower bound for learning parities.
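
For readers unfamiliar with the model, the sketch below illustrates the classic flavor of bounded-storage key agreement (in the style of Cachin and Maurer), not the authors' new Raz-based constructions: both parties store small random subsets of a huge public random stream and keep the bits where their index sets intersect.

```python
import secrets

def sample_indices(n_stream, n_store):
    """Each party independently picks which stream positions it will
    remember; its storage is bounded by n_store."""
    return set(secrets.SystemRandom().sample(range(n_stream), n_store))

def agree(n_stream=1_000_000, n_store=2_000):
    alice = sample_indices(n_stream, n_store)
    bob = sample_indices(n_stream, n_store)
    stored_a, stored_b = {}, {}
    # The long random stream is broadcast once; nobody (honest party or
    # adversary) is allowed to store more than a small fraction of it.
    for i in range(n_stream):
        bit = secrets.randbits(1)
        if i in alice:
            stored_a[i] = bit
        if i in bob:
            stored_b[i] = bit
    # After the stream is gone, the index sets are exchanged in the clear;
    # the bits at the intersection (~ n_store**2 / n_stream) form the key.
    common = sorted(alice & bob)
    key_a = [stored_a[i] for i in common]
    key_b = [stored_b[i] for i in common]
    assert key_a == key_b
    return key_a
```

An adversary who could not store most of the stream is missing the intersection bits with high probability, which is what makes unconditional security possible in this model.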

Erik-Oliver Blass, Guevara Noubir
ePrint Report
Logging is a key mechanism in the security of computer systems. Beyond supporting important forward-security properties, it is critical that logging withstand both failures and intentional tampering, to prevent subtle attacks that leave the system in an inconsistent state with inconclusive evidence. We propose new techniques combining forward integrity with crash recovery for secure data storage. Our main contribution is a new coding scheme resolving unique design constraints such as forward integrity and, most importantly, a single-pass, constant number of operations per encoding. Our idea is to add a new log item by XORing it into $k$ forward-securely selected cells of a table. If up to a certain threshold of cells is modified by the adversary, or lost due to a crash, we still guarantee the recovery of all stored log items. We instantiate our scheme as an abstract data structure which allows us to either detect adversarial modifications to log items or treat modifications like data loss in a system crash. The data structure can recover lost log items, thereby effectively reverting adversarial modifications. The key advantage of this setup is its efficiency: we use spectral graph theory techniques to prove that $k$ is constant in the number $n$ of all log items ever stored and small in practice, e.g., $k=5$. Moreover, we prove that to cope with up to $\sqrt{n}$ lost log items, the storage expansion is asymptotically constant in $n$ and small in practice. For $k=5$, the total size of the table is only $12\%$ more than the simple concatenation of all $n$ items.
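
As a rough illustration of the encoding step only (a minimal sketch; the paper's actual coding scheme, forward-secure key evolution, and peeling-based recovery procedure are more involved):

```python
import hashlib

K = 5            # cells per item; the paper proves a constant k suffices
M = 1024         # number of table cells
ITEM_LEN = 32    # fixed log-item length in bytes

table = [bytes(ITEM_LEN)] * M    # all-zero cells

def cells_for(epoch_key, seq):
    """k pseudorandom cell indices for log item number `seq`; a plain hash
    stands in for the scheme's forward-secure selection."""
    out = []
    for j in range(K):
        d = hashlib.sha256(epoch_key + seq.to_bytes(8, "big") + bytes([j])).digest()
        out.append(int.from_bytes(d[:4], "big") % M)
    return out

def append_item(epoch_key, seq, item):
    """Single pass and a constant number of operations: XOR the item into
    its k cells, then ratchet the key so past selections stay hidden."""
    assert len(item) == ITEM_LEN
    for c in cells_for(epoch_key, seq):
        table[c] = bytes(a ^ b for a, b in zip(table[c], item))
    return hashlib.sha256(b"ratchet" + epoch_key).digest()   # next epoch key
```

Decoding runs in the other direction: as long as not too many cells are lost or tampered with, items can be recovered by iteratively peeling cells, much like erasure decoding of sparse codes.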

Felix Wegener, Thorben Moos, Amir Moradi
ePrint Report
In recent years, deep learning has become an attractive ingredient of side-channel analysis (SCA) due to its potential to improve the success probability or enhance the performance of certain frequently executed tasks. One task that is commonly assisted by machine learning techniques is the profiling of a device's leakage behavior in order to carry out a template attack. Very recently, at CHES 2019, deep learning was also applied to non-profiled scenarios, extending its reach within SCA beyond template attacks for the first time. The proposed method, called DDLA, has some tempting advantages over traditional SCA due to merits inherited from (convolutional) neural networks. Most notably, it greatly reduces the need for pre-processing steps when the SCA traces are misaligned or when the leakage is of a multivariate nature. However, as in traditional attack scenarios, the success of this approach depends heavily on the correct choice of a leakage model and the intermediate value to target.

In this work we explore whether deep learning can similarly be used as an instrument to advance another crucial (non-profiled) discipline of SCA which is inherently independent of leakage models and targeted intermediates, namely leakage assessment. In fact, given the simple classification-based nature of common leakage assessment techniques, in particular distinguishing two groups (fixed-vs-random or fixed-vs-fixed), it comes as a surprise that machine learning has not yet been brought into this context. Our contribution is the development of a full leakage assessment methodology based on deep learning which gives the evaluator the freedom not to worry about the location, alignment, or statistical order of the leakages, and which easily covers multivariate and horizontal patterns as well. We test our approach against a number of case studies based on FPGA measurements of the PRESENT block cipher, equipped with state-of-the-art hardware-based countermeasures. Our results clearly show that the proposed methodology and network structure (which remains unchanged between the experiments) outperform the classical detection approaches ($t$-test and $\chi^2$-test) in all considered scenarios.
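
To make the classification framing concrete, here is a minimal sketch of a fixed-vs-random assessment with a generic off-the-shelf MLP; the paper's actual network structure and methodology differ:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def dl_leakage_assessment(traces_fixed, traces_random, folds=5):
    """Cast fixed-vs-random leakage assessment as binary classification:
    if a network separates the two trace groups better than chance on
    held-out data, the device leaks."""
    X = np.vstack([traces_fixed, traces_random]).astype(np.float64)
    y = np.r_[np.zeros(len(traces_fixed)), np.ones(len(traces_random))]
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
    # Cross-validated accuracy well above 0.5 certifies leakage without
    # choosing a leakage model, a point of interest, or a statistical order.
    return cross_val_score(clf, X, y, cv=folds).mean()
```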

Bernardo Magri, Christian Matt, Jesper Buus Nielsen, Daniel Tschudi
ePrint Report
Most existing blockchains rely either on Nakamoto-style consensus, where the chain can fork and produce rollbacks, or on committee-based Byzantine fault tolerant (CBFT) consensus, where no rollbacks are possible. While the latter offers better consistency, the former can be more efficient, tolerate more corruptions, and offer better availability during bad network conditions. To achieve the best of both worlds, we initiate the formal study of finality layers. Such a finality layer can be combined with a Nakamoto-style blockchain and periodically declare blocks as final, preventing rollbacks beyond final blocks.

As conceptual contributions, we identify the following properties to be crucial for a finality layer: finalized blocks form a chain (chain-forming), all parties agree on the finalized blocks (agreement), the last finalized block does not fall too far behind the last block in the underlying blockchain (updated), and all finalized blocks at some point have been on the chain adopted by at least $k$ honest parties ($k$-support). We also put forward an argument why finality layers should be asynchronous or semi-synchronous.

As technical contributions, we propose two variants of a finality layer protocol. We prove both of them secure in the setting with $t < n/3$ Byzantine parties and a semi-synchronous network. The first variant satisfies all of the aforementioned requirements (with $k = 1$) when combined with an arbitrary blockchain that satisfies the usual common-prefix, chain-growth, and chain-quality properties. The other one needs an additional, mild assumption on the underlying blockchain, but is more efficient and satisfies $k = n/3$-support. We finally show that $t < n/3$ is optimal for semi-synchronous finality layers.

Guilhem Castagnos, Dario Catalano, Fabien Laguillaumie, Federico Savasta, Ida Tucker
ePrint Report
ECDSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve, and known solutions often require expensive zero-knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell [Lin17] recently managed to obtain an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier's cryptosystem.

In this paper we generalize Lindell's solution using hash proof systems. The main advantage of our generic method is that it results in a simulation-based security proof without resorting to non-standard interactive assumptions.

Moving to concrete constructions, we show how to instantiate our framework using class groups of imaginary quadratic fields. Our implementations show that the practical impact of dropping such interactive assumptions is minimal. Indeed, while for 128-bit security our scheme is marginally slower than Lindell's, for 256-bit security it turns out to be better in both key generation and signing time. Moreover, in terms of communication cost, our implementation significantly reduces both the number of rounds and the number of transmitted bits in all cases.

Shi Bai, Shaun Miller, Weiqiang Wen
ePrint Report
The learning with errors (LWE) problem (STOC'05) introduced by Regev is one of the fundamental problems in lattice-based cryptography. One standard strategy to solve the LWE problem is to reduce it to a unique SVP (uSVP) problem via Kannan's embedding and then apply lattice reduction to solve the uSVP problem. There are two methods for estimating the cost of solving LWE via this strategy: the first considers the size of the gap in the uSVP problem (Gama-Nguyen, Eurocrypt'08), and the second (Alkim et al., USENIX'16) considers the shortness of the projection of the shortest vector onto the Gram-Schmidt vectors. These two estimates were investigated by Albrecht et al. (Asiacrypt'16), who present a sound analysis and show that lattice reduction experiments fit more consistently with the second estimate. They also observe that in some cases lattice reduction behaves even better than the second estimate, perhaps due to a second intersection of the projected vector with the Gram-Schmidt vectors.

In this work, we revisit the work of Alkim et al. and Albrecht et al. We first report further experiments providing more comparisons and suggest that the second estimate leads to a more accurate prediction in practice. We also present empirical evidence confirming the assumptions used in the second estimate. Furthermore, we examine the gaps in uSVP derived from the embedded lattice and explain why it is preferable to use embedding height equal to 1 for the embedded lattice. This shows that there is a coherent relation between the second estimate and the gaps in uSVP. Finally, it was conjectured by Albrecht et al. that the second intersection does not happen for large parameters. We show that this is indeed the case: there is no second intersection as the block size goes to infinity.
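
For orientation, one standard formulation of Kannan's embedding for an LWE instance $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q)$ builds the lattice generated by the columns of

$$ \mathbf{B}' = \begin{pmatrix} \mathbf{B} & \mathbf{b} \\ \mathbf{0} & t \end{pmatrix}, $$

where $\mathbf{B}$ is a basis of the $q$-ary lattice $\{\mathbf{y} \in \mathbb{Z}^m : \mathbf{y} \equiv \mathbf{A}\mathbf{x} \bmod q \text{ for some } \mathbf{x}\}$ and $t$ is the embedding height. Since $\mathbf{b} - \mathbf{A}\mathbf{s} \equiv \mathbf{e} \bmod q$, this lattice contains the unusually short vector $(\mathbf{e} \mid t)$, so solving uSVP on $\mathbf{B}'$ recovers the error; the observation above is that the choice $t = 1$ is preferable.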

María Naya-Plasencia, André Schrottenloher
ePrint Report
The k-xor problem, or Generalized Birthday Problem, asks, given k lists of bit-strings, for a k-tuple among them XORing to 0. If the lists are unbounded, the best classical (exponential) time complexity has stood unimproved since Wagner's CRYPTO 2002 paper, while many subsequent works have improved its memory complexity, given better time-memory tradeoffs, and provided logarithmic improvements.
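
For context, a minimal sketch of Wagner's classical 4-list algorithm (lists of size $2^{n/3}$, merging on $n/3$ bits per level), which the quantum algorithms in this paper build upon:

```python
import random
from collections import defaultdict

def merge(L1, L2, bits):
    """One level of Wagner's merging tree: all XORs x ^ y of pairs from
    L1 x L2 that vanish on the low `bits` bits."""
    mask = (1 << bits) - 1
    buckets = defaultdict(list)
    for x in L1:
        buckets[x & mask].append(x)
    return [x ^ y for y in L2 for x in buckets[y & mask]]

def four_list_xor(n=30):
    """With four lists of size 2^(n/3), Wagner's tree finds a 4-tuple
    XORing to 0 in about 2^(n/3) time and memory, beating the 2^(n/2)
    two-list birthday attack."""
    size = 1 << (n // 3)
    L = [[random.getrandbits(n) for _ in range(size)] for _ in range(4)]
    L12 = merge(L[0], L[1], n // 3)   # pairs whose XOR kills the low n/3 bits
    L34 = merge(L[2], L[3], n // 3)
    # Entries of L12 and L34 already agree on n/3 zero bits, so roughly one
    # full collision survives among the remaining 2n/3 bits.
    solutions = merge(L12, L34, n)    # every entry here is 0
    return len(solutions)             # a real attack carries the 4 indices along
```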

In this paper, we study quantum algorithms for several variants of the k-xor problem. When quantum oracle access is allowed, we improve over previous results of Grassi et al. for almost all values of k. We define a set of "merging trees" which represent strategies for quantum and classical merging in k-xor algorithms, and prove that our method is optimal among these. We provide, for the first time, quantum speedups when the lists can be queried only classically.

We also extend our study to lists of limited size, up to the case where a single solution exists. We give quantum dissection algorithms that outperform the best known for many k, and apply to the multiple-encryption problem. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given k-xor problem. All our algorithms apply when considering modular additions instead of bitwise XORs.

Jean-Claude Bajard, Julien Eynard, Paulo Martins, Leonel Sousa, Vincent Zucca
ePrint Report
State-of-the-art implementations of homomorphic encryption exploit the Fan and Vercauteren (FV) scheme and the Residue Number System (RNS). While the RNS breaks down large-integer arithmetic into smaller independent channels, its non-positional nature makes operations such as division and rounding hard to implement, and makes the representation of small values inefficient. In this work, we propose the application of the Hybrid Position-Residues Number System representation to the FV scheme. This is a positional representation of large radix in which the digits are represented in RNS. It inherits the benefits of RNS and makes it possible to accelerate the critical division and rounding operations, while also making the representation of smaller values more compact. This directly benefits the decryption and homomorphic multiplication procedures, reducing their asymptotic complexity, in dimension $n$, from $\mathcal{O} (n^2 \log n)$ to $\mathcal{O} (n \log n)$ and from $\mathcal{O}(n^3 \log n)$ to $\mathcal{O} (n^{3})$, respectively. This has also resulted in noticeable speedups when experimentally compared to related RNS implementations.
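
A toy sketch of the hybrid representation itself (illustrative moduli and radix; the paper's parameters and its FV-specific algorithms are of course different):

```python
from math import prod

MODULI = (251, 253, 255, 256)   # pairwise-coprime RNS channels (toy sizes)
BETA = prod(MODULI)             # positional radix = product of the channels

def to_hpr(x, ndigits):
    """Base-BETA digits (positional layer), each digit stored as its
    residues modulo the small channels (RNS layer)."""
    digits = []
    for _ in range(ndigits):
        x, d = divmod(x, BETA)
        digits.append(tuple(d % m for m in MODULI))
    return digits

def digit_from_rns(residues):
    """CRT reconstruction of a single digit from its residues."""
    d = 0
    for r, m in zip(residues, MODULI):
        Mi = BETA // m
        d = (d + r * Mi * pow(Mi, -1, m)) % BETA
    return d

def from_hpr(digits):
    x = 0
    for residues in reversed(digits):
        x = x * BETA + digit_from_rns(residues)
    return x

# Division by BETA**k with rounding is now essentially a digit shift in the
# positional layer, while per-digit arithmetic stays parallel across channels.
```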

Michael Naehrig, Joost Renes
ePrint Report
The isogeny-based protocols SIDH and SIKE have received much attention for being post-quantum key agreement candidates that retain relatively small keys. A recent line of work has proposed and further improved compression of public keys, leading to the inclusion of public-key compression in the SIKE proposal for Round 2 of the NIST Post-Quantum Cryptography Standardization effort. We show how to employ the dual isogeny to significantly increase the performance of compression techniques, reducing their overhead from 160--182% to 77--86% for Alice's key generation and from 98--104% to 59--61% for Bob's across different SIDH parameter sets. For SIKE, we reduce the overhead of (1) key generation from 140--153% to 61--74%, (2) key encapsulation from 67--90% to 38--57%, and (3) decapsulation from 59--65% to 34--39%. This is mostly achieved by speeding up the pairing computations, which have until now been the main bottleneck, but we also improve (deterministic) basis generation.

Ward Beullens, Thorsten Kleinjung, Frederik Vercauteren
ePrint Report
In this paper we report on a new record class group computation for an imaginary quadratic field with a 154-digit discriminant, surpassing the previous record of 130 digits. This class group is central to the CSIDH-512 isogeny-based cryptosystem, and knowing the class group structure and relation lattice implies efficient uniform sampling and a canonical representation of its elements. Both operations were impossible before and allow us to instantiate an isogeny-based signature scheme first sketched by Stolbunov, which we further optimize using multiple public keys and Merkle trees. We also show that including quadratic twists allows us to cut the public key size in half for free. Optimizing for signature size, our implementation takes 390ms to sign/verify and produces signatures of $263$ bytes, at the expense of a large public key. This is 300 times faster and over 3 times smaller than an optimized version of SeaSign for the same parameter set. Optimizing for public key and signature size combined results in a total size of 1468 bytes, which is smaller than any other post-quantum signature scheme at the 128-bit security level.

Jiafan Wang, Sherman S. M. Chow
ePrint Report
Dynamic searchable symmetric encryption (DSSE) allows a client to search or update over an outsourced encrypted database. Range queries are commonly needed (AsiaCrypt'18), but the order-preserving encryption approach is vulnerable to reconstruction attacks (SP'17). Previous range-searchable schemes (SIGMOD'16, ESORICS'18) require an ad-hoc instance of the encrypted database to store the updates and/or suffer from other shortcomings, some brought by the usage of asymmetric primitives.

In this paper, with our encrypted index, which enables queries for a sequence of contiguous keywords, we propose a generic upgrade of any DSSE to support range queries (a.k.a. range DSSE), and a concrete construction which provides a new trade-off, reducing the client storage to "reclaim" the benefits of outsourcing.
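
One common way to realize queries over a sequence of contiguous keywords is a dyadic (binary-tree) cover of the range, so that a range query decomposes into a few single-keyword DSSE queries. A sketch of that standard technique (not necessarily the paper's exact index):

```python
def dyadic_cover(lo, hi, depth):
    """Cover the integer range [lo, hi] inside [0, 2**depth - 1] with
    O(depth) aligned dyadic intervals; each interval is one keyword in the
    encrypted index, so a range query is a handful of keyword queries."""
    cover = []
    def walk(node_lo, node_hi, level):
        if hi < node_lo or node_hi < lo:
            return                            # node disjoint from the query
        if lo <= node_lo and node_hi <= hi:
            cover.append((level, node_lo))    # node fully inside: emit label
            return
        mid = (node_lo + node_hi) // 2
        walk(node_lo, mid, level + 1)
        walk(mid + 1, node_hi, level + 1)
    walk(0, (1 << depth) - 1, 0)
    return cover

# e.g. dyadic_cover(3, 12, 4) covers [3,12] inside [0,15] with the labels
# for [3,3], [4,7], [8,11] and [12,12].
```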

Our schemes achieve forward security, an important property which mitigates file injection attacks. We identify a variant of file injection attack against a recent solution (ESORICS'18). We also extend the definition of backward security to range DSSE and show that our schemes are compatible with a generic transformation for achieving backward security (CCS'17).

We comprehensively analyze the computation and communication overheads including some parts which were ignored in previous schemes, e.g., index-related operations in the client side. Our experiments demonstrate the high efficiency of our schemes.

Christian Majenz, Christian Schaffner, Jeroen van Wier
ePrint Report
Non-malleability is an important security property for public-key encryption (PKE). Its significance is due to the fundamental unachievability of integrity and authenticity guarantees in this setting, rendering it the strongest integrity-like property achievable using only PKE, without digital signatures. In this work, we generalize this notion to the setting of quantum public-key encryption. Overcoming the notorious "recording barrier" known from generalizing other integrity-like security notions to quantum encryption, we generalize one of the equivalent classical definitions, comparison-based non-malleability, and show how it can be fulfilled. In addition, we explore one-time non-malleability notions for symmetric-key encryption from the literature by defining plaintext and ciphertext variants and by characterizing their relation.

Marc Joye
ePrint Report
Due to its shorter key sizes, elliptic curve cryptography (ECC) is becoming increasingly popular. However, if not properly implemented, the resulting cryptosystems may be susceptible to fault attacks. Over the past few years, several techniques for secure implementations have been published. This paper revisits the ring extension method and its adaptation to the elliptic curve setting.
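
As a reminder of the basic idea, here is a minimal sketch of the ring extension countermeasure, illustrated on modular exponentiation; the paper's subject is its adaptation to elliptic curve scalar multiplication:

```python
import secrets

def protected_modexp(x, d, n, rbits=32):
    """Ring-extension (Shamir-style) fault check: do the work once in the
    extended ring Z_{n*r}, redo it cheaply modulo the small r, and compare
    before reducing modulo n.  A random fault almost certainly breaks the
    cheap consistency check."""
    r = secrets.randbits(rbits) | 1        # small odd cofactor (ideally prime)
    y_big = pow(x, d, n * r)               # one computation in Z_{n*r}
    y_small = pow(x % r, d, r)             # cheap shadow computation in Z_r
    if y_big % r != y_small:
        raise RuntimeError("fault detected")  # never release a faulty result
    return y_big % n
```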

ROME, ITALY, 22 June - 25 June 2020
Event Calendar
Event date: 22 June to 25 June 2020
Submission deadline: 9 September 2019
Notification: 20 January 2020

Haodong Jiang, Zhenfeng Zhang, Zhi Ma
ePrint Report
Key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto (FO) transformation (CRYPTO 1999 and Journal of Cryptology 2013), which turn a weakly secure public-key encryption (PKE) scheme into an IND-CCA-secure KEM, were proposed by Hofheinz, Hoevelmanns and Kiltz (TCC 2017) and are widely used among the KEM submissions to the NIST Post-Quantum Cryptography Standardization Project. The security reductions for these variants in the quantum random oracle model (QROM) were given by Hofheinz, Hoevelmanns and Kiltz (TCC 2017) and Jiang et al. (Crypto 2018). However, under standard CPA security assumptions, i.e., OW-CPA and IND-CPA, all these security reductions are far from desirable due to their quadratic security loss.
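
To fix ideas, a minimal sketch of one FO-style KEM variant (derandomize, then re-encrypt to check; the HHK variants differ in details such as implicit vs. explicit rejection and key derivation). Here `enc` and `dec` are hypothetical stand-ins for any OW/IND-CPA scheme's algorithms:

```python
import hashlib, secrets

def G(m):
    return hashlib.sha256(b"G" + m).digest()          # derived encryption coins

def H(*parts):
    return hashlib.sha256(b"H" + b"".join(parts)).digest()

def encaps(pk, enc):
    """FO-style encapsulation: derandomize the CPA-secure PKE with coins
    G(m) and hash (m, c) into the session key."""
    m = secrets.token_bytes(32)
    c = enc(pk, m, G(m))        # enc(pk, m, coins): deterministic given coins
    return c, H(m, c)

def decaps(sk, pk, c, enc, dec):
    """Decrypt, then re-encrypt with the re-derived coins; only ciphertexts
    surviving this check yield a key (explicit rejection shown here)."""
    m = dec(sk, c)
    if m is None or enc(pk, m, G(m)) != c:
        return None
    return H(m, c)
```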

In this paper, for KEM variants of the FO transformation, we show that a typical measurement-based reduction in the QROM from breaking the standard OW-CPA (or IND-CPA) security of the underlying PKE to breaking the IND-CCA security of the resulting KEM will inevitably incur a quadratic loss of security, where "measurement-based" means that the reduction measures a hash query from the adversary and uses the measurement outcome to break the underlying security of the PKE. In particular, all currently known security reductions (TCC 2017 and Crypto 2018) are of this type, and our results suggest an explanation for the lack of progress in improving the reduction tightness in terms of the degree of security loss. We emphasize that our results do not expose any post-quantum security weakness of KEM variants of the FO transformation.

Anamaria Costache, Kim Laine, Rachel Player
ePrint Report
The purpose of this paper is to provide a comprehensive analysis and side-by-side comparison of the noise growth behaviour in the BGV and FV somewhat homomorphic encryption schemes, both heuristically and in their implementations in the libraries HElib and SEAL, respectively. We run extensive experiments in HElib and SEAL to compare the heuristic noise growth to the noise growth in practice. From the experiments, we observe that for both schemes the heuristic bounds are not tight. We attempt to improve the tightness of the bounds in a number of ways, including the definition of new notions of noise, such as the invariant noise for BGV and the scaled inherent noise for FV. This does not significantly tighten the bounds, so we conclude that the current heuristic bounds are the best possible in terms of a theoretical analysis. As an additional contribution, we update the comparison between the two schemes presented by Costache and Smart [22], and find that BGV has a slight advantage over FV. Thus, the conclusions of [22] still hold, although the differences between BGV and FV are less dramatic.
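
For intuition on such noise notions: following SEAL's documentation, the invariant noise $v$ of an FV ciphertext $(c_0, c_1)$ encrypting $m$ is (roughly) the polynomial satisfying

$$ \frac{t}{q}\,\bigl(c_0 + c_1 s\bigr) \;=\; m + v + a\,t $$

for some integer-coefficient polynomial $a$, with decryption succeeding as long as $\lVert v \rVert_\infty < 1/2$. The new notions introduced in this paper play analogous roles for BGV and for a rescaled FV.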

Daniel J. Bernstein, Andreas Hülsing
ePrint Report
There is a well-known gap between second-preimage resistance and preimage resistance for length-preserving hash functions. This paper introduces a simple concept that fills this gap. One consequence of this concept is that tight reductions can remove interactivity for multi-target length-preserving preimage problems, such as the problems that appear in analyzing hash-based signature systems. Previous reduction techniques applied to only a negligible fraction of all length-preserving hash functions, presumably excluding all off-the-shelf hash functions.

Eloi de Cherisey, Sylvain Guilley, Olivier Rioul, Pablo Piantanida
ePrint Report
Using information-theoretic tools, this paper establishes a mathematical link between the probability of success of a side-channel attack and the minimum number of queries needed to reach a given success rate, valid for any possible distinguishing rule and with the best possible knowledge on the attacker's side. This link takes the form of a lower bound on the number of queries that depends strongly on Shannon's mutual information between the traces and the secret key. This leads us to derive upper bounds on the mutual information that are as tight as possible and can be easily calculated. It turns out that, in the case of additive white Gaussian noise, the bound on the probability of success of any attack is directly related to the signal-to-noise ratio. This allows very easy computations and predictions of the success rate in any leakage model.
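
For a flavor of how such bounds arise (a standard Fano-style argument, not necessarily the paper's exact statement): an attack recovering a uniform key $K \in \mathcal{K}$ from $q$ traces with error probability $P_e = 1 - P_s$ must satisfy

$$ q \cdot I(\mathrm{trace};\, K) \;\ge\; \log_2 |\mathcal{K}| - h_2(P_e) - P_e \log_2\bigl(|\mathcal{K}|-1\bigr), $$

where $h_2$ is the binary entropy function; and for additive white Gaussian noise the per-trace mutual information is itself capped by $\frac{1}{2}\log_2(1+\mathrm{SNR})$, which is how the signal-to-noise ratio enters the success-rate predictions.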

Ward Beullens
ePrint Report
This work presents two sigma protocols with helper to prove knowledge of:

- A solution to a system of quadratic polynomials

- A solution to an instance of the Permuted Kernel Problem

We then remove the helper from the protocol with a "cut-and-choose" protocol, and we apply the Fiat-Shamir transform to obtain signature schemes with a security proof in the QROM. We show that the resulting signature schemes, which we call the "MUltivariate quaDratic FIat-SHamir" scheme (MUDFISH) and the "ShUffled Solution to Homogeneous linear SYstem FIat-SHamir" scheme (SUSHSYFISH), are more efficient than existing signatures based on the MQ problem and the Permuted Kernel Problem. We also leverage the ZK proof for PKP to improve the efficiency of Stern-like zero-knowledge proofs for lattice statements.
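
For context, the Fiat-Shamir step at the end of this pipeline looks roughly as follows (a generic sketch over a binary-challenge sigma protocol; the helper removal, the QROM analysis, and the concrete MUDFISH/SUSHSYFISH protocols are where the paper's contributions lie; `commit` and `respond` are hypothetical stand-ins for the underlying prover):

```python
import hashlib

def fiat_shamir_sign(commit, respond, sk, msg, rounds=128):
    """Generic Fiat-Shamir: the verifier's random coin flips are replaced
    by a hash of all commitments together with the message.
    commit(sk) -> (com, state); respond(sk, state, challenge_bit) -> resp."""
    coms, states = zip(*(commit(sk) for _ in range(rounds)))
    h = hashlib.sha256(b"".join(coms) + msg).digest()
    challenges = [(h[i // 8] >> (i % 8)) & 1 for i in range(rounds)]
    resps = [respond(sk, st, b) for st, b in zip(states, challenges)]
    return coms, resps      # the signature: commitments plus responses
```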