International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

03 May 2021

Itai Dinur
ePrint Report
At SODA 2017, Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree $d$ in $n$ variables over finite fields. These algorithms were based on the polynomial method in circuit complexity, a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of $n$.

In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for $d = 2$ and $n^2 \cdot 2^{(1 - 1/(2.7d)) n}$ for any $d \geq 2$.
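As a rough, back-of-the-envelope illustration (a reader's sketch, not part of the paper), the following Python snippet plugs the quoted bound into concrete values of $n$ and compares it with a naive $2^n$ exhaustive-search baseline; the gains it prints are only indicative, since both estimates ignore constant factors.

    # Back-of-the-envelope comparison (not from the paper): evaluate the stated
    # bit-operation bound against a naive exhaustive-search baseline of roughly
    # 2^n candidate assignments.
    import math

    def polynomial_method_bound(n, d):
        """Bit-operation bound n^2 * 2^((1 - 1/(2.7 d)) n) quoted in the abstract."""
        return n**2 * 2 ** ((1 - 1 / (2.7 * d)) * n)

    def exhaustive_search_bound(n):
        """Very coarse exhaustive-search baseline: 2^n candidate assignments."""
        return 2 ** n

    for n in (64, 128, 256):
        alg = polynomial_method_bound(n, d=2)
        brute = exhaustive_search_bound(n)
        print(f"n={n:3d}  bound=2^{math.log2(alg):6.1f}  brute force=2^{n}  "
              f"gain=2^{math.log2(brute / alg):5.1f}")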

We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function.

Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.
Dionysis Zindros
ePrint Report
Macroeconomic policy in a blockchain system concerns the algorithm that decides the payment schedule for miners and thus its money mint rate. It governs the amounts, distributions, beneficiaries and conditions required for money supply payments to participants by the system. While most chains today employ simple policies such as a constant amount per block, several cryptocurrencies have sprung up that put forth more interesting policies. As blockchains become a more popular form of money, these policies are inevitably becoming more complex. A chain with a simple policy will often need to switch over to a different policy. Until now, it was believed that such upgrades require a hard fork -- after all, they change the money supply, a central part of the system, and unupgraded miners cannot validate blocks that deviate from those hard-coded rules. In this paper, we present a mechanism that allows a chain to upgrade from one policy to another through a soft fork. Our proposed mechanism works in today's Ethereum blockchain without any changes and can support a very generic class of monetary policies that satisfy a few basic bounds. Our construction is presented in the form of a smart contract. We showcase the usefulness of our proposal by describing several interesting applications of policy changes. Notably, we put forth a mechanism that makes Non-Interactive Proofs of Proof-of-Work unbribable, a previously open problem.
Surya Addanki, Kevin Garbe, Eli Jaffe, Rafail Ostrovsky, Antigoni Polychroniadou
ePrint Report
This paper introduces Prio+, a privacy-preserving system for the collection of aggregate statistics, with the same model and goals in mind as the original and highly influential Prio paper by Henry Corrigan-Gibbs and Dan Boneh (USENIX 2017). As in the original Prio, each client holds a private data value (e.g. the number of visits to a particular website) and a small set of servers privately compute statistical functions over the set of client values (e.g. the average number of visits). To achieve security against faulty or malicious clients, Prio+ clients use Boolean secret-sharing instead of zero-knowledge proofs to convince servers that their data is of the correct form, and Prio+ servers execute share conversion protocols as needed in order to compute properly over client data. This allows us to ensure that clients' data is properly formatted essentially for free; the work shifts to novel share-conversion protocols between servers, where some care is needed to make them efficient. While our overall approach is a fairly simple observation in retrospect, it turns out that the Prio+ strategy reduces the client's computational burden by up to two orders of magnitude (or more, depending on the statistic) while keeping server costs comparable to Prio. Prio+ permits computation of exactly the same wide range of complex statistics as the original Prio protocol, including high-dimensional linear regression over private values held by clients. We report detailed benchmarks of our Prio+ implementation and compare these to both the original Go implementation of Prio and the Mozilla implementation of Prio. Our Prio+ software is open-source and released under the same license as Prio.
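To make the central observation concrete, here is a toy two-server Python sketch of the idea: a client XOR-shares a private bit, which is well-formed by construction (so no zero-knowledge proof is needed), and the servers convert the Boolean shares into additive shares so they can aggregate. The trusted dealer, the daBit-style conversion and all parameters below are simplifying assumptions for illustration; this is not Prio+'s actual share-conversion protocol.

    # Toy two-server sketch (assumptions: semi-honest servers and a trusted
    # dealer for the correlated randomness; not Prio+'s actual protocol).
    import secrets

    P = 2**61 - 1  # prime modulus for arithmetic shares (illustrative choice)

    def client_share(bit):
        """XOR-share a private bit between two servers. Any pair of bit shares
        reconstructs to 0 or 1, so malformed inputs are impossible by construction."""
        s0 = secrets.randbits(1)
        return s0, bit ^ s0

    def dealer_dabit():
        """Trusted-dealer stand-in: XOR shares and additive shares of one random bit."""
        r = secrets.randbits(1)
        r0 = secrets.randbits(1)
        R0 = secrets.randbelow(P)
        return (r0, r ^ r0), (R0, (r - R0) % P)

    def bool_to_arith(x_shares, dabit):
        """Convert XOR shares of x into additive shares of x mod P, using
        x = m ^ r = m + r - 2*m*r, where m = x ^ r is safe to publish."""
        (x0, x1), ((r0, r1), (R0, R1)) = x_shares, dabit
        m = (x0 ^ r0) ^ (x1 ^ r1)            # public masked value
        X0 = (m + R0 - 2 * m * R0) % P
        X1 = (R1 - 2 * m * R1) % P
        return X0, X1

    # Each client submits one bit; servers aggregate the sum without seeing the bits.
    bits = [1, 0, 1, 1, 0]
    agg0 = agg1 = 0
    for b in bits:
        X0, X1 = bool_to_arith(client_share(b), dealer_dabit())
        agg0, agg1 = (agg0 + X0) % P, (agg1 + X1) % P
    assert (agg0 + agg1) % P == sum(bits)
    print("aggregate:", (agg0 + agg1) % P)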
Zhenzhen Bao, Jian Guo, Danping Shi, Yi Tu
ePrint Report
Since the Meet-in-the-Middle preimage attack against 7-round AES hashing was found by Sasaki in 2011, the development of this research direction has never stopped. In 2019, Bao et al. found that degrees of freedom from the message (or the key of the underlying block cipher) are useful, before Mixed-Integer Linear Programming (MILP) modeling was introduced in 2020 to find the optimal attack configurations. In this paper, we move one step further in this research direction by introducing more techniques, such as guess-and-determine, round independence, and symmetry, into the MILP search model. To demonstrate the power of the enhanced model, we apply it to the popular AES-like hash functions Whirlpool and Grøstl, and to AES hashing modes, and obtain general improvements over the existing best (pseudo-)preimage attacks. In particular, the number of attacked rounds is extended from 6 to 7 for Whirlpool and from 9 to 10 for AES-256 hashing modes. Time complexity improvements are also obtained on variants with fewer rounds, as well as on 6-round Grøstl-256 and 8-round Grøstl-512. Computer experiments on trial versions of the full attack procedure have confirmed the correctness of our results.
Yuyin Yu, Leo Perrin
ePrint Report
We found 5412 new quadratic APN functions on $\mathbb{F}_{2^8}$ with the QAM method, thus bringing the number of known CCZ-inequivalent APN functions on $\mathbb{F}_{2^8}$ to 26525. Unfortunately, none of these new functions is CCZ-equivalent to a permutation. A (to the best of our knowledge) complete list of known quadratic APN functions, including our new ones, has been pushed to sboxU for ease of study by others.

In this paper, we recall how to construct new QAMs from a known one, and present how we used the ortho-derivative method to determine which of our new functions fall into different CCZ-classes. Based on these results and on others over smaller fields, we make two conjectures: that the full list of quadratic APN functions on $\mathbb{F}_{2^8}$ could be obtained using the QAM approach (provided enormous computing power), and that the total number of CCZ-inequivalent APN functions may exceed 50000.
Elena Andreeva, Rishiraj Bhattacharyya, Arnab Roy
ePrint Report
We revisit the classical problem of designing optimally efficient cryptographically secure hash functions. Hash functions are traditionally designed via applying modes of operation on primitives with smaller domains. The results of Shrimpton and Stam (ICALP 2008), Rogaway and Steinberger (CRYPTO 2008), and Mennink and Preneel (CRYPTO 2012) show how to achieve optimally efficient designs of $2n$-to-$n$-bit compression functions from non-compressing primitives with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance. Designing optimally efficient and secure hash functions for larger domains ($> 2n$ bits) is still an open problem.

To enable efficiency analysis and comparison across hash functions built from primitives of different domain sizes, in this work we propose the new \textit{compactness} efficiency notion. It allows us to focus on asymptotically optimally collision-resistant hash functions and normalize their parameters based on Stam's bound from CRYPTO 2008 to obtain maximal efficiency.

We then present two tree-based modes of operation as a design principle for compact, large domain, fixed-input-length hash functions.

1. Our first construction is an Augmented Binary Tree (ABR) mode. The design is a $(2^{\ell}+2^{\ell-1} -1)n$-to-$n$-bit hash function making a total of $(2^{\ell}-1)$ calls to $2n$-to-$n$-bit compression functions, for any $\ell\geq 2$. Our construction is optimally compact with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance in the ideal model. For a tree of height $\ell$, in comparison with a Merkle tree, the ABR mode processes $(2^{\ell-1}-1)$ additional data blocks while making the same number of internal compression function calls.

2. With our second design we focus our attention on the indifferentiability security notion. While the ABR mode achieves collision resistance, it fails to achieve indifferentiability from a random oracle within $2^{n/3}$ queries. ABR$^{+}$ compresses only one data block fewer than ABR with the same number of compression calls, and in addition achieves indifferentiability up to $2^{n/2-\epsilon}$ queries.

Both of our designs are closely related to the ubiquitous Merkle trees and have the potential for real-world applicability where the speed of hashing is of primary interest.
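A small, self-contained Python check of the counting formulas quoted above (only the block/call counts, not the papers' actual constructions) makes the compactness gain over a Merkle tree explicit:

    # Compare how many n-bit message blocks ABR and a binary Merkle tree absorb
    # per 2n-to-n compression call, using only the formulas from the abstract.
    def abr_blocks(height):
        """Message blocks absorbed by the ABR mode of the given tree height (>= 2)."""
        return 2 ** height + 2 ** (height - 1) - 1

    def merkle_blocks(height):
        """Message blocks absorbed by a binary Merkle tree of the same height."""
        return 2 ** height

    def compression_calls(height):
        """Both modes use the same number of 2n-to-n compression calls."""
        return 2 ** height - 1

    for h in (2, 3, 4, 5):
        print(f"height {h}: {compression_calls(h):2d} calls, "
              f"ABR absorbs {abr_blocks(h):2d} blocks vs Merkle {merkle_blocks(h):2d}")

For instance, at height 2 both modes make 3 compression calls, but ABR absorbs 5 blocks versus Merkle's 4.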
Charanjit Singh Jutla, Nathan Manohar
ePrint Report
While it is well known that the sawtooth function has a point-wise convergent Fourier series, the rate of convergence is not the best possible for the application of approximating the mod function in small intervals around multiples of the modulus. We show a different sine series, such that the sine series of order $n$ has error $O(\epsilon^{2n+1})$ for approximating the mod function in $\epsilon$-sized intervals around multiples of the modulus. Moreover, the resulting polynomial, after Taylor series approximation of the sine series, has small coefficients, and the whole polynomial can be computed at a precision that is only slightly larger than $-(2n+1)\log \epsilon$, the precision of approximation being sought. This polynomial can then be used to approximate the mod function to almost arbitrary precision, and hence allows practical CKKS-HE bootstrapping with arbitrary precision.
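To see why trigonometric approximations of the mod function work in small intervals around multiples of the modulus (the basic idea this paper refines with a better-converging series), note that for $x = kq + e$ with $|e| \ll q$ we have $\frac{q}{2\pi}\sin(2\pi x/q) = \frac{q}{2\pi}\sin(2\pi e/q) \approx e$. The snippet below illustrates the standard single-sine approximation used in CKKS bootstrapping; it is a generic illustration with arbitrary toy parameters, not the authors' improved series.

    # Generic illustration (the standard single-sine approximation, not the
    # paper's improved sine series): near multiples of the modulus q,
    # (q / (2*pi)) * sin(2*pi*x / q) approximates the centered remainder.
    import math

    q = 64.0
    for k in (3, -7, 12):          # which multiple of q we sit near
        for e in (0.5, 1.0, 2.0):  # small offset from that multiple
            x = k * q + e
            approx = (q / (2 * math.pi)) * math.sin(2 * math.pi * x / q)
            print(f"x = {x:8.1f}  true offset = {e:4.1f}  sine approx = {approx:7.4f}")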
Thomas Attema, Nicole Gervasoni, Michiel Marcus, Gabriele Spini
ePrint Report
The advent of a full-scale quantum computer will severely impact most currently-used cryptographic systems. The most well-known aspect of this impact lies in the computational-hardness assumptions that underpin the security of most current public-key cryptographic systems: a quantum computer can factor integers and compute discrete logarithms in polynomial time, thereby breaking systems based on these problems. However, simply replacing these problems by others that are (believed to be) impervious even to a quantum computer does not completely solve the issue. Indeed, many security proofs of cryptographic systems are no longer valid in the presence of a quantum-capable attacker; while this does not automatically imply that the affected systems would be broken by a quantum computer, it does raise questions about the exact security guarantees that they can provide. This overview document aims to analyze all aspects of the impact of quantum computers on cryptography, by providing an overview of current quantum-hard computational problems (and cryptographic systems based on them), and by presenting the security proofs that are affected by quantum attackers, detailing the current status of research on the topic and the expected effects on security.
André Chailloux, Johanna Loyer
ePrint Report
Lattice-based cryptography is one of the leading proposals for post-quantum cryptography. The Shortest Vector Problem (SVP) is arguably the most important problem for the cryptanalysis of lattice-based cryptography, and many lattice-based schemes have security claims based on its hardness. The best quantum algorithm for the SVP is due to Laarhoven [Laa16 PhD] and runs in (heuristic) time $2^{0.2653d + o(d)}$. In this article, we present an improvement over Laarhoven's result and give an algorithm that has a (heuristic) running time of $2^{0.2570 d + o(d)}$, where $d$ is the lattice dimension. We also present time-memory trade-offs where we quantify the amount of quantum memory and quantum random access memory used by our algorithm. The core idea is to replace Grover's algorithm used in [Laa16 PhD] in a key part of the sieving algorithm by a quantum random walk in which we add a layer of locality-sensitive filtering.
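For a sense of scale, a quick calculation from the two exponents above (a reader's-eye comparison, not a figure from the paper): the improvement is a factor of $2^{(0.2653 - 0.2570)d + o(d)} = 2^{0.0083 d + o(d)}$, which for a lattice dimension of, say, $d = 400$ amounts to roughly $2^{3.3}$ once the $o(d)$ terms are ignored.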
David Knichel, Amir Moradi, Nicolai Müller, Pascal Sasdrich
ePrint Report
Masking has been recognized as a sound and secure countermeasure for cryptographic implementations, protecting against physical side-channel attacks. Even though many different masking schemes have been presented over time, the design and implementation of protected cryptographic Integrated Circuits (ICs) remains a challenging task. More specifically, correct and efficient implementation usually requires manual interaction accompanied by longstanding experience in hardware design and physical security. As a consequence, the design and implementation of masked hardware often proves to be an error-prone task for engineers and practitioners. To address this, our novel tool for automated generation of masked hardware (AGEMA) allows even inexperienced engineers and hardware designers to create secure and efficient masked cryptographic circuits originating from an unprotected design. More precisely, exploiting the concepts of Probe-Isolating Non-Interference (PINI) for secure composition of masked circuits, our tool provides various processing techniques to transform an unprotected design into a secure one, eventually accelerating and safeguarding the process of masking cryptographic hardware. Ultimately, we evaluate our tool in several case studies, emphasizing different trade-offs for the transformation techniques with respect to common performance metrics, such as latency, area, and randomness.
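As a toy software illustration of the gadget-level masking that such tools compose, here is a standard two-share, first-order masked AND gadget in the style of domain-oriented masking. It is not AGEMA's output, and real PINI-secure hardware gadgets need additional care (registers, refreshes); the sketch only checks functional correctness.

    # Toy simulation of a two-share, first-order masked AND gadget (DOM-style);
    # not code produced by AGEMA, and security is a separate argument.
    import secrets

    def share(bit):
        """Split a bit into two random XOR shares."""
        r = secrets.randbits(1)
        return r, bit ^ r

    def unshare(s0, s1):
        return s0 ^ s1

    def masked_and(a, b):
        """Masked AND on shared bits a=(a0,a1), b=(b0,b1), using one fresh random bit."""
        a0, a1 = a
        b0, b1 = b
        z = secrets.randbits(1)                 # fresh randomness for resharing
        c0 = (a0 & b0) ^ ((a0 & b1) ^ z)        # domain 0
        c1 = (a1 & b1) ^ ((a1 & b0) ^ z)        # domain 1
        return c0, c1

    # Check functional correctness over all inputs.
    for x in (0, 1):
        for y in (0, 1):
            assert unshare(*masked_and(share(x), share(y))) == (x & y)
    print("masked AND matches plain AND on all inputs")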
Gaurav Panwar, Roopa Vishwanathan, Satyajayant Misra
ePrint Report
In this paper, we study efficient and authorized rewriting of transactions already written to a blockchain. Mutable transactions will make up a fraction of all blockchain transactions, but will be a necessity to meet the needs of privacy regulations, such as the General Data Protection Regulation (GDPR). The state-of-the-art rewriting approaches have several shortcomings, such as lack of user anonymity, inefficiency, and absence of revocation mechanisms. We present ReTRACe, an efficient framework for blockchain rewrites. ReTRACe is designed by composing a novel revocable chameleon hash with ephemeral trapdoor scheme, a novel revocable fast attribute-based encryption scheme, and a dynamic group signature scheme. We discuss ReTRACe and its constituent primitives in detail, along with their security analyses, and present experimental results to demonstrate the scalability of ReTRACe.
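For readers unfamiliar with the underlying primitive, the sketch below shows a classic discrete-log chameleon hash (Krawczyk-Rabin style), whose trapdoor-collision property is what rewriting schemes build on. It is a toy illustration with demonstration-sized, insecure parameters, not ReTRACe's revocable, ephemeral-trapdoor construction.

    # Toy discrete-log chameleon hash (Krawczyk-Rabin style); parameters are
    # demonstration-sized and insecure. Illustrates the trapdoor-collision idea only.
    import secrets

    p, q = 2039, 1019          # p = 2q + 1, both prime (toy sizes)
    g = 4                      # generator of the order-q subgroup

    def keygen():
        x = secrets.randbelow(q - 1) + 1       # trapdoor
        h = pow(g, x, p)                       # public key
        return x, h

    def chash(h, m, r):
        """CH(m, r) = g^m * h^r mod p, with m, r in Z_q."""
        return (pow(g, m, p) * pow(h, r, p)) % p

    def collide(x, m, r, m_new):
        """With the trapdoor x, find r_new so that CH(m_new, r_new) = CH(m, r)."""
        return (r + (m - m_new) * pow(x, -1, q)) % q

    x, h = keygen()
    m, r = 123, secrets.randbelow(q)
    m_new = 777
    r_new = collide(x, m, r, m_new)
    assert chash(h, m, r) == chash(h, m_new, r_new)
    print("collision found: same hash for a different message, given the trapdoor")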
Jeonghyuk Lee, Jihye Kim, Hyunok Oh
ePrint Report
As a solution to mitigate key exposure problems in digital signatures, forward security has been proposed. Forward security guarantees the integrity of messages generated in the past despite leakage of the current-period secret key, by evolving the secret key in each time period. However, there is no forward secure signature scheme in which all metrics have constant complexity. Furthermore, existing works do not support multi-user aggregation of signatures. In this paper, we propose a forward secure aggregate signature scheme utilizing recursive zk-SNARKs (zero-knowledge Succinct Non-interactive ARguments of Knowledge), in which all metrics, including size and time, are $O(1)$. The proposed forward secure signature scheme can aggregate signatures generated not only by a single user but also by multiple users. The security of the proposed scheme is formally proven under the zero-knowledge assumption in the random oracle model.
Cong Zhang, Hong-Sheng Zhou
ePrint Report
We investigate digital signature schemes in the indifferentiability framework. We show that the well-known Lamport one-time signature scheme and the tree-based signature scheme can be ``lifted'' to realize an ideal one-time signature and an ideal signature, respectively, without using computational assumptions. We also show, for the first time, that ideal signatures, ideal one-time signatures, and random oracles are equivalent in the framework of indifferentiability.
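As background, here is a minimal sketch of the Lamport one-time signature scheme referenced above (the standard textbook construction; SHA-256 is an illustrative choice of hash, and each key pair must sign only one message). It illustrates the scheme itself, not the indifferentiability lifting studied in the paper.

    # Minimal Lamport one-time signature sketch (textbook construction).
    import hashlib, secrets

    H = lambda b: hashlib.sha256(b).digest()
    N = 256  # bit length of the message digest being signed

    def keygen():
        sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(N)]
        pk = [[H(sk[i][b]) for b in range(2)] for i in range(N)]
        return sk, pk

    def sign(sk, msg):
        digest = H(msg)
        bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(N)]
        return [sk[i][bits[i]] for i in range(N)]   # reveal one secret per bit

    def verify(pk, msg, sig):
        digest = H(msg)
        bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(N)]
        return all(H(sig[i]) == pk[i][bits[i]] for i in range(N))

    sk, pk = keygen()
    sig = sign(sk, b"hello")
    assert verify(pk, b"hello", sig) and not verify(pk, b"other", sig)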
Cyprien Delpech de Saint Guilhem, Eleftheria Makri, Dragos Rotaru, Titouan Tanguy
ePrint Report
Secure multiparty generation of an RSA biprime is a challenging task, which is receiving increasing attention due to the numerous privacy-preserving applications that require it. In this work, we construct a new protocol for the RSA biprime generation task, secure against a malicious adversary who can corrupt any subset of protocol participants. Our protocol is designed for generic MPC, making it both platform-independent and allowing for weaker security models to be assumed (e.g., honest majority), should the application scenario require it. By carefully ``postponing'' the check of possible inconsistencies in the shares provided by malicious adversaries, we achieve noteworthy efficiency improvements. Concretely, we are able to produce additive sharings of the prime candidates from multiplicative sharings via a semi-honest multiplication, without degrading the overall (active) security of our protocol. This is the core of our sieving technique, which increases the probability of our protocol sampling a biprime. Similarly, we perform the first biprimality test, requiring several repetitions, without checking input share consistency, and perform the more costly consistency check only in case of success of the Jacobi symbol based biprimality test. Moreover, we propose a protocol to convert an additive sharing over a ring into an additive sharing over the integers. Besides being a necessary sub-protocol for RSA biprime generation, this conversion protocol is of independent interest. The cost analysis of our protocol demonstrates that our approach improves the current state of the art (Chen et al. -- Crypto 2020) in terms of communication efficiency. Concretely, for the two-party case with malicious security and primes of 2048 bits, our protocol improves communication by a factor of ~37.
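For context, the following is a plain, in-the-clear sketch of the classical Boneh-Franklin Jacobi-symbol biprimality test that such protocols run in a distributed fashion on secret-shared candidates; it assumes $p \equiv q \equiv 3 \pmod 4$ and toy-sized parameters, and is meant only to show what the test checks.

    # Plain (non-MPC) sketch of the Boneh-Franklin Jacobi-symbol biprimality test
    # (assumption: candidates p, q are both congruent to 3 mod 4).
    import secrets

    def jacobi(a, n):
        """Jacobi symbol (a/n) for odd n > 0."""
        a %= n
        result = 1
        while a:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def biprimality_test(N, p, q, rounds=40):
        """Accepts iff N = p*q for primes p = q = 3 (mod 4), with high probability."""
        assert N == p * q and p % 4 == 3 and q % 4 == 3
        e = (N - p - q + 1) // 4           # = (p-1)(q-1)/4
        for _ in range(rounds):
            g = secrets.randbelow(N - 2) + 2
            if jacobi(g, N) != 1:
                continue                   # only bases with Jacobi symbol +1 are used
            if pow(g, e, N) not in (1, N - 1):
                return False
        return True

    # Toy example: 11 and 23 are primes congruent to 3 mod 4; 27 is not prime.
    print(biprimality_test(11 * 23, 11, 23))   # expected: True
    print(biprimality_test(11 * 27, 11, 27))   # expected: False (with high probability)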
Vadim Lyubashevsky, Ngoc Khanh Nguyen, Gregor Seiler
ePrint Report
In a set membership proof, the public information consists of a set of elements and a commitment. The prover then produces a zero-knowledge proof showing that the commitment is indeed to some element from the set. This primitive is closely related to concepts like ring signatures and ``one-out-of-many'' proofs that underlie many anonymity and privacy protocols. The main result of this work is a new succinct lattice-based set membership proof whose size is logarithmic in the size of the set.

We also give a transformation of our set membership proof to a ring signature scheme. The ring signature size is also logarithmic in the size of the public key set and has size $\raapprox$~KB for a set of $2^5$ elements, and $\rdapprox$~KB for a set of size $2^{25}$. At an approximately $128$-bit security level, these outputs are between 1.5X and 7X smaller than the current state of the art succinct ring signatures of Beullens et al. (Asiacrypt 2020) and Esgin et al. (CCS 2019).

We then show that our ring signature, combined with a few other techniques and optimizations, can be turned into a fairly efficient Monero-like confidential transaction system based on the MatRiCT framework of Esgin et al. (CCS 2019). With our new techniques, we are able to reduce the transaction proof size by factors of about 4X - 10X over the aforementioned work. For example, a transaction with two inputs and two outputs, where each input is hidden among $2^{15}$ other accounts, requires approximately $30$KB in our protocol.
Mojtaba Bisheh-Niasar, Reza Azarderakhsh, Mehran Mozaffari-Kermani
ePrint Report
This paper demonstrates an architecture for accelerating polynomial multiplication using the number theoretic transform (NTT). Kyber is one of the finalists in the third round of the NIST post-quantum cryptography standardization process. At the same time, the performance of its NTT execution is a main challenge, requiring large memory and a complex memory access pattern. In this paper, an efficient NTT architecture is presented to improve the respective computation time. We propose several optimization strategies for efficiency improvement, targeting different performance requirements for various applications. Our NTT architecture, including four butterfly cores, occupies only 798 LUTs and 715 FFs on a small Artix-7 FPGA, showing more than 44% improvement compared to the best previous work. We also implement a coprocessor architecture for Kyber KEM benefiting from our high-speed NTT core to accomplish three phases of the key exchange in 9, 12, and 19 $\mu$s, respectively, operating at 200 MHz.
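For readers who want to see the underlying arithmetic rather than the hardware, here is a small software sketch of NTT-based cyclic polynomial multiplication. The parameters ($q = 257$, $n = 8$, generator $g = 3$) and the naive $O(n^2)$ transform are illustrative choices; this is neither the Kyber NTT nor the paper's FPGA design.

    # Toy NTT-based multiplication of polynomials modulo (x^n - 1, q); the
    # transform is written naively for clarity rather than as butterfly hardware.
    q, n, g = 257, 8, 3
    omega = pow(g, (q - 1) // n, q)          # primitive n-th root of unity mod q

    def ntt(a, root):
        """Naive O(n^2) number theoretic transform with the given root of unity."""
        return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q for i in range(n)]

    def intt(a_hat):
        n_inv = pow(n, -1, q)
        back = ntt(a_hat, pow(omega, -1, q))
        return [(x * n_inv) % q for x in back]

    def cyclic_mul(a, b):
        """Multiply a and b modulo x^n - 1 via pointwise products in the NTT domain."""
        a_hat, b_hat = ntt(a, omega), ntt(b, omega)
        return intt([(x * y) % q for x, y in zip(a_hat, b_hat)])

    def schoolbook(a, b):
        c = [0] * n
        for i in range(n):
            for j in range(n):
                c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
        return c

    a = [1, 2, 3, 4, 5, 6, 7, 8]
    b = [8, 7, 6, 5, 4, 3, 2, 1]
    assert cyclic_mul(a, b) == schoolbook(a, b)
    print(cyclic_mul(a, b))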
Wouter Castryck, Ann Dooms, Carlo Emerencia, Alexander Lemmens
ePrint Report
It follows from a result by Friedl, Ivanyos, Magniez, Santha and Sen from 2014 that, for any fixed integer $m > 0$ (thought of as being small), there exists a quantum algorithm for solving the hidden shift problem in an arbitrary finite abelian group $(G, +)$ with time complexity poly$( \log |G|) \cdot 2^{O(\sqrt{\log |mG|})}$. As discussed in the current paper, this can be viewed as a modest statement of Pohlig-Hellman type for hard homogeneous spaces. Our main contribution is a simpler algorithm achieving the same runtime for $m = 2^tp$, with $t$ any non-negative integer and $p$ any prime number, where additionally the memory requirements are mostly in terms of quantum random access classical memory; indeed, the amount of qubits that need to be stored is poly$( \log |G|)$. Our central tool is an extension of Peikert's adaptation of Kuperberg's collimation sieve to arbitrary finite abelian groups. This allows for a reduction, in said time, to the hidden shift problem in the quotient $G/2^tpG$, which can then be tackled in polynomial time, by combining methods by Friedl et al. for $p$-torsion groups and by Bonnetain and Naya-Plasencia for $2^t$-torsion groups.
Pakize Sanal, Emrah Karagoz, Hwajeong Seo, Reza Azarderakhsh, Mehran Mozaffari-Kermani
ePrint Report
Public-key cryptography based on the lattice problem is efficient and believed to be secure in a post-quantum era. In this paper, we introduce carefully optimized implementations of Kyber encryption schemes for 64-bit ARM Cortex-A processors. Our research contribution includes several optimizations for Number Theoretic Transform (NTT), noise sampling, and AES accelerator based symmetric function implementations. The proposed Kyber512 implementation on ARM64 improved previous works by 1.72×, 1.88×, and 2.29× for key generation, encapsulation, and decapsulation, respectively. Moreover, by using AES accelerator in the proposed Kyber512-90s implementation, it is improved by 8.57×, 6.94×, and 8.26× for key generation, encapsulation, and decapsulation, respectively. These results set new speed records for Kyber encryption on 64-bit ARM Cortex-A processors.
Nael Rahman, Vladimir Shpilrain
ePrint Report
We use matrices over bit strings as platforms for Diffie-Hellman-like public key exchange protocols. When multiplying such matrices, we use the Boolean OR operation on bit strings in place of addition and the Boolean AND operation in place of multiplication. As a result, (1) computations with these matrices are very efficient; (2) standard methods of attacking Diffie-Hellman-like protocols are not applicable.
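To make the arithmetic concrete, the sketch below multiplies matrices whose entries are bit strings (Python integers) using OR for addition and AND for multiplication, and runs one natural Diffie-Hellman-like flow over this platform, exchanging powers of a public matrix. The dimensions, bit-string width, exponents and the flow itself are toy illustrative assumptions; the paper's concrete protocol and parameters may differ.

    # Toy (OR, AND)-semiring matrix arithmetic over bit strings, plus one natural
    # Diffie-Hellman-like flow; an illustration of the platform only.
    import secrets

    W, DIM = 16, 3                       # bit-string width and matrix dimension (toy)

    def mat_mul(A, B):
        """C[i][j] = OR over k of (A[i][k] AND B[k][j])."""
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                acc = 0
                for k in range(n):
                    acc |= A[i][k] & B[k][j]
                C[i][j] = acc
        return C

    def mat_pow(M, e):
        """Repeated squaring; exponent >= 1."""
        result = None
        base = [row[:] for row in M]
        while e:
            if e & 1:
                result = base if result is None else mat_mul(result, base)
            base = mat_mul(base, base)
            e >>= 1
        return result

    M = [[secrets.randbits(W) for _ in range(DIM)] for _ in range(DIM)]  # public matrix
    a, b = 23, 41                        # Alice's and Bob's private exponents (toy)

    A_pub, B_pub = mat_pow(M, a), mat_pow(M, b)   # exchanged values
    key_alice = mat_pow(B_pub, a)                 # (M^b)^a = M^(ab)
    key_bob = mat_pow(A_pub, b)                   # (M^a)^b = M^(ab)
    assert key_alice == key_bob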
Andrés Fabrega, Ueli Maurer, Marta Mularczyk
ePrint Report
Updatable encryption (UE) is symmetric encryption which additionally supports key rotation. UE was introduced for scenarios where a user stores encrypted data on a cloud and, in order to mitigate secret key leakage, periodically sends a short update token, which the cloud uses to re-encrypt stored data to a fresh key. A long line of research resulted in a wide variety of security properties UE schemes can provide, including confidentiality, integrity protection, and hiding metadata. Unfortunately, given the complexity and nuances in the definitions, different properties are difficult to compare for non-experts, making it hard to judge which scheme provides the best security-efficiency trade-off for a given application.

In this work, we challenge the approach of defining UE as a primitive with a set of properties. As an alternative, we propose to treat UE as an interactive protocol, whose goal is to implement secure outsourced storage using limited and imperfect resources (such as a small, leakable memory). To facilitate this approach, we introduce a framework that allows one to easily formalize different security guarantees and available resources, making the security-efficiency trade-offs of UE protocols easy to compare.

We believe that our approach opens the way for many constructions of secure storage that are not compatible with the currently defined syntax of UE. Indeed, we propose two new protocols: one for the setting with adversaries who control randomness (an attack vector so far not considered for UE), and one for the setting with adversaries that actively tamper with memory. Both protocols provide stronger confidentiality guarantees than all existing UE schemes.