International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

07 March 2022

Ittai Abraham, Danny Dolev, Ittay Eyal, Joseph Y. Halpern
ePrint Report
Proof-of-work blockchain protocols rely on incentive compatibility. System participants, called miners, generate blocks that form a directed acyclic graph (blockdag). The protocols aim to compensate miners based on their mining power, that is, the fraction of computational resources they control. The protocol designates rewards, striving to make following the prescribed protocol the miners' best response. Nakamoto's Bitcoin protocol achieves this for miners controlling up to almost 1/4 of the total mining power, and the Ethereum protocol does about the same. The state of the art in increasing this bound is Fruitchain, which works with a bound of 1/2. Fruitchain guarantees that miners can increase their revenue by only a negligible amount if they deviate. It is thus an epsilon-Nash equilibrium, for a small epsilon. However, Fruitchain's mechanism allows a rational miner to deviate without penalty; we show that a simple practical deviation guarantees a miner a small increase in expected utility without any risk. This deviation results in a violation of the protocol desiderata. We conclude that, in our context, epsilon-Nash equilibrium is a rather fragile solution concept.

We propose a more robust approach that we call epsilon-sure Nash equilibrium, in which each miner's behavior is almost always a strict best response, and present Colordag, the first blockchain protocol that is an epsilon-sure Nash equilibrium for miners with less than 1/2 of the mining power. To achieve this, Colordag utilizes three techniques. First, it assigns colors to blocks; rewards are computed for each color separately. By choosing sufficiently many colors, we can make sensitivity to network latency negligible. Second, Colordag penalizes forking---intentional bifurcation of the blockdag. Third, Colordag prevents miners from retroactively changing rewards. All this results in an epsilon-sure Nash equilibrium: Even when playing against an extremely strong adversary with perfect knowledge of the future (specifically, when agents will generate blocks and when messages will arrive), correct behavior is a strict best response with high probability.
Olivier Blazy, Sayantan Mukherjee, Huyen Nguyen, Duong Hieu Phan, Damien Stehle
ePrint Report
Broadcast Encryption is a fundamental cryptographic primitive, which gives the ability to send a secure message to any chosen target set among registered users. In this work, we investigate broadcast encryption with anonymous revocation, in which ciphertexts do not reveal any information about which users have been revoked. We provide a scheme whose ciphertext size grows linearly with the number of revoked users. Moreover, our system also achieves traceability in the black-box confirmation model. Technically, our contribution is threefold. First, we develop a generic transformation from linear functional encryption to trace-and-revoke systems for a 1-bit message space. It is inspired by the transformation of Agrawal et al. (CCS'17), with the novelty of achieving anonymity. Our second contribution is to instantiate the underlying linear functional encryptions from standard assumptions. We propose a $\mathsf{DDH}$-based construction which no longer requires discrete logarithm evaluation during decryption and thus significantly improves performance compared to the $\mathsf{DDH}$-based construction of Agrawal et al. In the LWE-based setting, we tried to instantiate our construction by relying on the scheme of Wang et al. (PKC'19), only to find an attack on that scheme. Our third contribution is to extend the 1-bit encryption from the generic transformation to $n$-bit encryption. By introducing matrix multiplication functional encryption, which essentially performs a fixed number of parallel calls to functional encryptions with the same randomness, we can prove the security of the final scheme with a tight reduction that does not depend on $n$, in contrast to employing the hybrid argument.
Marina Krček, Thomas Ordas, Daniele Fronte, Stjepan Picek
ePrint Report
We consider the problem of finding as many faults as possible on a target device during laser fault injection security evaluation. Since the search space is large, efficient search methods are required. Recently, an evolutionary approach using a memetic algorithm was proposed and shown to find more interesting parameter combinations than random search, which is commonly used. Unfortunately, once a variation on the bench or target is introduced, the process must be repeated to find suitable parameter combinations anew.

To negate the effect of variation, we propose a novel method combining the memetic algorithm with a machine learning technique called a decision tree. Our approach improves the memetic algorithm by using prior knowledge of the target, introduced in the initial phase of the memetic algorithm. In our experiments, the decision tree rules enhance the performance of the memetic algorithm by finding more interesting faults on different samples of the same target. Our approach shows more than two orders of magnitude better performance than random search and up to 60% better performance than previous state-of-the-art results with a memetic algorithm. Another advantage of our approach is human-readable rules, allowing the first insights into the explainability of target characterization for laser fault injection.
Ben Smyth, Michael R. Clarkson
ePrint Report
We explore definitions of verifiability by Juels et al. (2010), Cortier et al. (2014), and Kiayias et al. (2015). We discover that voting systems vulnerable to attacks can be proven to satisfy each of those definitions, and conclude that the definitions are unsuitable for the analysis of voting systems. Our results motivate the search for a new definition.
Yu Long Chen, Avijit Dutta, Mridul Nandi
ePrint Report
At CRYPTO 2019, Chen et al. showed a beyond-birthday-bound secure $n$-bit to $n$-bit PRF based on public random permutations. Following this work, Dutta and Nandi proposed a beyond-birthday-bound secure nonce-based MAC $\textsf{nEHtM}_p$ based on a public random permutation. In particular, the authors showed that $\textsf{nEHtM}_p$ achieves tight $2n/3$-bit security ({\em with respect to the state size of the permutation}) in the single-user setting, and their proven bound degrades gracefully with the repetition of nonces. However, we point out that their security proof is not complete (albeit this does not invalidate their security claim). In this paper, we propose a minor variant of the $\textsf{nEHtM}_p$ construction, called $\textsf{nEHtM}^*_p$, and show that it achieves tight $2n/3$-bit security in the multi-user setting. Moreover, the security bound of our construction also degrades gracefully with the repetition of nonces. Finally, we instantiate our construction with the PolyHash function to realize a concrete beyond-birthday-bound secure public-permutation-based MAC, $\textsf{nEHtM}_p^+$, in the multi-user setting.
Nick Frymann, Daniel Gardham, Mark Manulis
ePrint Report
The W3C's WebAuthn standard employs digital signatures to offer phishing protection and unlinkability on the web using authenticators which manage keys on behalf of users. This introduces challenges when the account owner wants to delegate certain rights to a proxy user, such as to access their accounts or perform actions on their behalf, as delegation must not undermine the decentralisation, unlinkability, and attestation properties provided by WebAuthn.

We present two approaches, called remote and direct delegation of WebAuthn credentials, maintaining the standard's properties. Both approaches are compatible with Yubico's recent Asynchronous Remote Key Generation (ARKG) primitive proposed for backing up credentials. For remote delegation, the account owner stores delegation credentials at the relying party on behalf of proxies, whereas the direct variant uses a delegation-by-warrant approach, through which the proxy receives delegation credentials from the account owner and presents them later to the relying party. To realise direct delegation we introduce Proxy Signature with Unlinkable Warrants (PSUW), a new proxy signature scheme that extends WebAuthn's unlinkability property to proxy users and can be constructed generically from ARKG.

We discuss an implementation of both delegation approaches, designed to be compatible with WebAuthn, including extensions required for CTAP, and provide a software-based prototype demonstrating overall feasibility. On the performance side, we observe only a minor increase of a few milliseconds in the signing and verification times for delegated WebAuthn credentials based on ARKG and PSUW primitives. We also discuss additional functionality, such as revocation and permissions management, and mention usability considerations.
Sílvia Casacuberta, Julia Hesse, Anja Lehmann
ePrint Report
In recent years, oblivious pseudorandom functions (OPRFs) have become a ubiquitous primitive used in cryptographic protocols and privacy-preserving technologies. The growing interest in OPRFs, both theoretical and applied, has produced a vast number of different constructions and functionality variations. In this paper, we provide a systematic overview of how to build and use OPRFs. We first categorize existing OPRFs into essentially four families based on their underlying PRF (Naor-Reingold, Dodis-Yampolskiy, Hashed Diffie-Hellman, and generic constructions). This categorization allows us to give a unified presentation of all oblivious evaluation methods in the literature, and to understand which properties OPRFs can (or cannot) have. We further demonstrate the theoretical and practical power of OPRFs by visualizing them in the landscape of cryptographic primitives, and by providing a comprehensive overview of how OPRFs are leveraged for improving the privacy of internet users. Our work systematizes 15 years of research on OPRFs and provides inspiration for new OPRF constructions and applications thereof.
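The oblivious evaluation pattern shared by these constructions can be illustrated with the Hashed Diffie-Hellman family (2HashDH): the client blinds its input, the server exponentiates with its key, and the client unblinds. The Python sketch below uses a deliberately tiny multiplicative group and hypothetical helper names; it illustrates the blind/evaluate/unblind flow only and is in no way a secure instantiation.

```python
# Toy sketch of a Hashed Diffie-Hellman OPRF (2HashDH-style flow).
# Hypothetical toy group: safe prime p = 2039, subgroup of prime order q = 1019.
import hashlib
import secrets

p = 2039
q = (p - 1) // 2  # 1019

def hash_to_group(x: bytes) -> int:
    # Map the input into the order-q subgroup (quadratic residues mod p).
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)  # squaring lands in the subgroup

def finalize(x: bytes, y: int) -> bytes:
    # Second hash: PRF output is H'(x, H(x)^k).
    return hashlib.sha256(x + y.to_bytes(2, "big")).digest()

# Server's PRF key
k = secrets.randbelow(q - 1) + 1

# --- Client: blind input x with a random exponent r ---
x = b"password"
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(x), r, p)   # H(x)^r, hides x from the server

# --- Server: apply its key without learning x ---
evaluated = pow(blinded, k, p)          # H(x)^(r*k)

# --- Client: unblind with r^{-1} mod q ---
y = pow(evaluated, pow(r, -1, q), p)    # H(x)^k

# Obliviousness aside, the result equals direct evaluation with k.
assert y == pow(hash_to_group(x), k, p)
print(finalize(x, y).hex())
```

The blinding exponent $r$ is fresh per query, so the server sees only a uniformly random group element, yet the unblinded result is the deterministic PRF value $H'(x, H(x)^k)$.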
Jakub Breier, Xiaolu Hou
ePrint Report
Fault injection attacks (FIA) are a class of active physical attacks, mostly used for malicious purposes such as extraction of cryptographic keys, privilege escalation, and attacks on neural network implementations. There are many techniques that can be used to cause faults in integrated circuits, many of them coming from the area of failure analysis. In this paper we tackle the topic of practicality of FIA. We analyze the most commonly used techniques that can be found in the literature, such as voltage/clock glitching, electromagnetic pulses, lasers, and Rowhammer attacks. To summarize, FIA can be mounted on the most commonly used architectures from ARM, Intel, and AMD, by utilizing injection devices that often cost below the thousand-dollar mark. Therefore, we believe these attacks can be considered practical in many scenarios, especially when the attacker can physically access the target device.
Irem Keskinkurt Paksoy, Murat Cenk
ePrint Report
The Number Theoretic Transform (NTT), Toom-Cook, and Karatsuba are the most commonly used algorithms for implementing lattice-based finalists of the NIST PQC competition. In this paper, we propose Toeplitz matrix-vector product (TMVP) based algorithms for multiplication for all parameter sets of NTRU. We implement the proposed algorithms on ARM Cortex-M4. The results show that TMVP-based multiplication algorithms using the four-way TMVP formula are more efficient for NTRU. Our algorithms outperform the Toom-Cook method by up to 25.3%, and the NTT method by up to 19.8%. Moreover, our algorithms require less stack space than the others in most cases. We also observe the impact of these improvements on the overall performance of NTRU. We speed up the encryption, decryption, encapsulation, and decapsulation by up to 13.7%, 17.5%, 3.5%, and 14.1%, respectively, compared to the state-of-the-art implementation.
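The connection the TMVP approach exploits is that multiplication in $\mathbb{Z}_q[x]/(x^n-1)$, the ring used in NTRU, is exactly a matrix-vector product with a circulant (a special Toeplitz) matrix. The Python sketch below checks this equivalence with toy, hypothetical parameters; it is a plain illustration, not the paper's optimized four-way TMVP formula or Cortex-M4 implementation.

```python
# Polynomial multiplication in Z_q[x]/(x^n - 1) as a circulant
# Toeplitz matrix-vector product. Toy parameters, for illustration only.
q, n = 2048, 8

def poly_mul_mod(a, b):
    # Schoolbook multiplication, reduced mod x^n - 1 and mod q.
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

def circulant_tmvp(a, b):
    # Row i of the circulant matrix is a cyclic rotation of a;
    # the product with vector b gives the same coefficients.
    return [sum(a[(i - j) % n] * b[j] for j in range(n)) % q
            for i in range(n)]

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
assert poly_mul_mod(a, b) == circulant_tmvp(a, b)
```

Once the multiplication is phrased as a TMVP, Toeplitz decomposition formulas (e.g. splitting one size-$n$ TMVP into smaller ones) can trade multiplications for additions, which is the source of the reported speedups.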
Yanhong Fan, Muzhou Li, Chao Niu, Zhenyu Lu, Meiqin Wang
ePrint Report
SKINNY-AEAD is one of the second-round candidates of the Lightweight Cryptography Standardization project held by NIST. SKINNY-AEAD M1 is the primary member of the six SKINNY-AEAD schemes, while SKINNY-AEAD M3 is another member with a smaller tag. In the design document, only security analyses of their underlying primitive SKINNY-128-384 are provided. Moreover, to the best of our knowledge, there are no third-party analyses of SKINNY-AEAD M1/M3. Therefore, this paper focuses on constructing the first third-party security analyses of them under the nonce-respecting scenario. By taking the encryption mode of SKINNY-AEAD into consideration and exploiting several properties of SKINNY, we can deduce some necessary constraints on the input and tweakey differences of related-tweakey impossible differential distinguishers. Under these constraints, we can find distinguishers suitable for mounting powerful tweakey recovery attacks. With the help of automatic searching algorithms based on STP, we find some 14-round distinguishers. Based on one of these distinguishers, we mount a 20-round and an 18-round tweakey recovery attack on SKINNY-AEAD M1/M3, respectively. To the best of our knowledge, all these attacks are the best ones so far.
Nir Bitansky, Zvika Brakerski, Yael Tauman Kalai
ePrint Report
Is it possible to convert classical reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a non-constructive post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage.

We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted.

Along the way, we make several additional contributions:

1. We put forth a framework for reductions (or general interaction) with stateful solvers for a computational problem, that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting.

2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically ``restored'' post-measurement. This shows that the novel rewinding technique of Chiesa et al. (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
Yi Deng, Shunli Ma, Xinxuan Zhang, Hailong Wang, Xuyang Song, Xiang Xie
ePrint Report
Threshold signatures allow $n$ parties to share the ability to issue digital signatures so that any coalition of size at least $t+1$ can sign, whereas groups of $t$ or fewer players cannot. The currently known class-group-based threshold ECDSA constructions are either inefficient (requiring parallel repetition of the underlying zero-knowledge proof with small challenge space) or require a rather non-standard low-order assumption. In this paper, we present efficient threshold ECDSA protocols from encryption schemes based on class groups that neither assume the low-order assumption nor parallel-repeat the underlying zero-knowledge proof, yielding a significant efficiency improvement in key generation over previous constructions.

Along the way we introduce a new notion of promise $\Sigma$-protocol that satisfies only a weaker soundness called promise extractability. An accepting promise $\Sigma$-proof for statements related to class-group-based encryptions does not establish the truth of the statement, but provides security guarantees (promise extractability) that are sufficient for our applications. We also show how to simulate homomorphic operations on a (possibly invalid) class-group-based encryption whose correctness has been proven via our promise $\Sigma$-protocol. We believe that these techniques are of independent interest and applicable to other scenarios where efficient zero-knowledge proofs for statements related to class groups are required.
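The $t$-out-of-$n$ access structure underlying any threshold signature scheme can be sketched with plain Shamir secret sharing: any $t+1$ shares of the signing key determine it, while $t$ or fewer reveal nothing. The Python below works over a toy prime field and illustrates the access structure only, not the paper's class-group-based protocol.

```python
# Shamir t-of-n sharing of a signing key over a toy prime field.
import secrets

P = 2**61 - 1  # a Mersenne prime, used here as an illustrative field modulus

def share(secret, t, n):
    # Random degree-t polynomial f with f(0) = secret; shares are (x, f(x)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

sk = secrets.randbelow(P)
shares = share(sk, t=2, n=5)
assert reconstruct(shares[:3]) == sk   # any t+1 = 3 shares suffice
assert reconstruct(shares[2:]) == sk   # a different coalition works too
```

In an actual threshold ECDSA protocol the key is never reconstructed in one place; the parties instead run an interactive signing protocol on their shares, which is where the zero-knowledge machinery discussed above comes in.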
Vasyl Ustimenko
ePrint Report
We describe new explicit constructions of infinite families of finite small-world graphs of large girth whose well-defined projective limit is an infinite tree. Applications of these objects to constructions of LDPC codes and cryptographic algorithms are briefly surveyed. We define families of homogeneous algebraic graphs of large girth over a commutative ring K. For each commutative integral domain K with |K|>2, we introduce a family of bipartite homogeneous algebraic graphs of large girth over K, formed by graphs with sets of points and lines isomorphic to K^n, n>1, and cycle indicator ≥ 2n+2, such that their projective limit is well defined and isomorphic to an infinite forest.
Alexander Poremba
ePrint Report
Quantum information has the property that measurement is an inherently destructive process. This feature is most apparent in the principle of complementarity, which states that mutually incompatible observables cannot be measured at the same time. Recent work by Broadbent and Islam (TCC 2020) builds on this aspect of quantum mechanics to realize a cryptographic notion called certified deletion. While this remarkable notion enables a classical verifier to be convinced that a (private-key) quantum ciphertext has been deleted by an untrusted party, it offers no additional layer of functionality.

In this work, we augment the proof-of-deletion paradigm with fully homomorphic encryption (FHE). This results in a new and powerful cryptographic notion called fully homomorphic encryption with certified deletion -- an interactive protocol which enables an untrusted quantum server to compute on encrypted data and, if requested, to simultaneously prove data deletion to a client. Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors (LWE) distribution in the form of a quantum state was deleted. We introduce an encoding based on Gaussian coset states which is highly generic and suggests that essentially any LWE-based cryptographic primitive admits a classically-verifiable quantum proof of deletion.

As an application of our protocol, we construct a Dual-Regev public-key encryption scheme with certified deletion, which we then extend towards a (leveled) FHE scheme of the same type. In terms of security, we distinguish between two types of attack scenarios: a semi-honest adversary that follows the protocol exactly, and a fully malicious adversary that is allowed to deviate arbitrarily from the protocol. In the former case, we achieve indistinguishable ciphertexts, even if the secret key is later revealed after deletion has taken place. In the latter case, we provide entropic uncertainty relations for Gaussian cosets which limit the adversary's ability to guess the delegated ciphertext once deletion has taken place. Our results enable a form of everlasting cryptography and give rise to new privacy-preserving quantum cloud applications, such as private machine learning on encrypted data with certified data deletion.
Saikrishna Badrinarayanan, Ranjit Kumaresan, Mihai Christodorescu, Vinjith Nagaraja, Karan Patel, Srinivasan Raghuraman, Peter Rindal, Wei Sun, Minghua Xu
ePrint Report
Motivated by the recent advances in practical secure computation, we design and implement a framework for scaling solutions for the problem of private set intersection (PSI) into the realm of big data. A protocol for PSI enables two parties, each holding a set of elements, to jointly compute the intersection of these sets without revealing the elements that are not in the intersection. Following a long line of research, recent protocols for PSI have only $\approx 5\times$ computation and communication overhead over an insecure set intersection. However, this performance is typically demonstrated for set sizes on the order of ten million. In this work, we aim to scale these protocols to efficiently handle set sizes of one billion elements or more.

We achieve this via a careful application of a binning approach that enables parallelizing any arbitrary PSI protocol. Building on this idea, we designed and implemented a framework that takes a pair of PSI executables (i.e., one for each of the two parties) that typically work for million-sized sets, and then scales them to billion-sized sets (and beyond). For example, our framework can perform a join of billion-sized sets in 83 minutes compared to 2000 minutes of Pinkas et al. (ACM TPS 2018), an improvement of $25\times$. Furthermore, we present an end-to-end Spark application where two enterprises, each possessing private databases, can perform a restricted class of database join operations (specifically, join operations with only an on clause which is a conjunction of equality checks involving attributes from both parties, followed by a where clause which can be split into conjunctive clauses where each conjunction is a function of a single table) without revealing any data that is not part of the output.
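The binning idea can be sketched in a few lines: both parties hash their elements into bins using the same public hash function, so matching elements always land in the same bin, and each bin pair can then be handled by an independent PSI instance in parallel. In this hypothetical Python sketch the inner PSI is replaced by a plain set intersection for readability; a real deployment would invoke the two PSI executables per bin.

```python
# Binning to parallelize an arbitrary PSI protocol (toy sketch).
import hashlib

NUM_BINS = 4  # hypothetical; tuned to cluster size in a real deployment

def bin_of(elem: bytes) -> int:
    # Public hash: both parties place an element in the same bin.
    return int.from_bytes(hashlib.sha256(elem).digest()[:4], "big") % NUM_BINS

def to_bins(items):
    bins = [set() for _ in range(NUM_BINS)]
    for x in items:
        bins[bin_of(x)].add(x)
    return bins

def binned_psi(set_a, set_b):
    bins_a, bins_b = to_bins(set_a), to_bins(set_b)
    # Each (bins_a[i], bins_b[i]) pair could be fed to any two-party
    # PSI protocol independently; here we intersect in the clear.
    out = set()
    for a_i, b_i in zip(bins_a, bins_b):
        out |= a_i & b_i
    return out

A = {b"alice", b"bob", b"carol", b"dave"}
B = {b"bob", b"dave", b"erin"}
assert binned_psi(A, B) == A & B
```

Correctness follows because the hash is deterministic and public: an element in both sets falls into the same bin on both sides, so no intersection element is lost by splitting the problem.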
Ivan Damgård, Divya Ravi, Luisa Siniscalchi, Sophia Yakoubov
ePrint Report
In this paper we consider two-round secure computation protocols which use different communication channels in different rounds: namely, protocols where broadcast is available in neither round, both rounds, only the first round, or only the second round. The prior works of Cohen, Garay and Zikas (Eurocrypt 2020) and Damgård, Magri, Ravi, Siniscalchi and Yakoubov (Crypto 2021) give tight characterizations of which security guarantees are achievable for various thresholds in each of the communication structures.

In this paper, we determine what is possible in the honest majority setting without a PKI, closing a question left open by Damgård et al. We show that without a PKI, having an honest majority does not make it possible to achieve stronger security guarantees compared to the dishonest majority setting. However, if two thirds of the parties are guaranteed to be honest, identifiable abort is additionally achievable using broadcast only in the second round.

We use fundamentally different techniques from the previous works in order to avoid relying on private communication in the first round when a PKI is not available, since assuming such private channels without the availability of public encryption keys is unrealistic. We also show that, somewhat surprisingly, the availability of private channels in the first round does not enable stronger security guarantees unless the corruption threshold is one. In that case, prior work has shown that with private channels in the first round, guaranteed output delivery is always achievable; we show that without these channels, fairness is unachievable even with broadcast in both rounds, and unanimous abort is unachievable without broadcast in the second round.
Michael Amar, Amit Kama, Kang Wang, Yossi Oren
ePrint Report
The cloud-based Internet of Things (IoT) creates opportunities for more direct integration of the physical world and computer-based systems, allowing advanced applications based on sensing, analyzing and controlling the physical world. IoT deployments, however, are at a particular risk of counterfeiting, through which an adversary can corrupt the entire ecosystem. Therefore, entity authentication of edge devices is considered an essential part of the security of IoT systems.

A recent paper of Farha et al. suggested an entity authentication scheme suitable for low-resource IoT edge devices, which relies on SRAM-based physically unclonable functions (PUFs). In this paper we analyze this scheme. We show that, while it claims to offer strong PUF functionality, the scheme creates only a weak PUF: an active attacker can completely read out the secret PUF response of the edge device after a very small number of queries, after which the device can be counterfeited easily. After analyzing the scheme, we propose an alternative construction for an authentication method based on SRAM PUFs which better protects the secret SRAM startup state.
Vadim Tsypyschev, Iliya Morgasov
ePrint Report
This article investigates the security of the cipher feedback (CFB) mode of operation with regular external serial re-keying, aimed at constructing a lightweight pseudo-random sequence generator. For this purpose, we introduce a new mode of operation called multi-key CFB (MCFB) and obtain provable-security estimates for this new mode in the LOR-CPA model. Besides that, we obtain a counterexample to the well-known result of Abdalla and Bellare about the security of encryption schemes with external re-keying.
Anna Lysyanskaya, Leah Namisa Rosenbloom
ePrint Report
Numerous cryptographic applications require efficient non-interactive zero-knowledge proofs of knowledge (NIZK PoK) as a building block. Typically they rely on the Fiat-Shamir heuristic to do so, as security in the random-oracle model is considered good enough in practice. However, there is a troubling disconnect between the stand-alone security of such a protocol and its security as part of a larger, more complex system where several protocols may be running at the same time. Provable security in the universal composition (UC) model of Canetti is the best guarantee that nothing will go wrong when a system is part of a larger whole. In this paper, we show how to achieve efficient UC-secure NIZK PoK in the global random-oracle model of Canetti, Jain, and Scafuro.
Joachim Neu, Ertem Nusret Tas, David Tse
ePrint Report
We present two attacks targeting the Proof-of-Stake (PoS) Ethereum consensus protocol. The first attack suggests a fundamental conceptual incompatibility between PoS and the Greedy Heaviest-Observed Sub-Tree (GHOST) fork choice paradigm employed by PoS Ethereum. In a nutshell, PoS allows an adversary with a vanishing amount of stake to produce an unlimited number of equivocating blocks. While most equivocating blocks will be orphaned, such orphaned `uncle blocks' still influence fork choice under the GHOST paradigm, bestowing upon the adversary devastating control over the canonical chain. While the Latest Message Driven (LMD) aspect of current PoS Ethereum prevents a straightforward application of this attack, our second attack shows how LMD specifically can be exploited to obtain a new variant of the balancing attack that overcomes a recent protocol addition that was intended to mitigate balancing-type attacks. Thus, in its current form, PoS Ethereum without and with LMD is vulnerable to our first and second attack, respectively.
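The leverage that orphaned uncle blocks give the adversary can be seen directly from the GHOST rule, which walks from genesis into the child whose subtree carries the most weight. In the hypothetical toy block tree below, three equivocating uncles outweigh a longer honest branch; this is a minimal sketch of the fork-choice rule only, not of PoS Ethereum's full (LMD-)GHOST implementation.

```python
# GHOST fork choice on a toy block tree. Block names are illustrative.
children = {
    "genesis": ["A", "B"],
    "A": ["A1"],               # honest branch: genesis -> A -> A1
    "B": ["U1", "U2", "U3"],   # adversarial equivocations (uncles)
    "A1": [], "U1": [], "U2": [], "U3": [],
}

def subtree_weight(block):
    # Every block in the subtree counts, including orphaned uncles.
    return 1 + sum(subtree_weight(c) for c in children[block])

def ghost_head(root="genesis"):
    # Descend greedily into the heaviest subtree until reaching a leaf.
    block = root
    while children[block]:
        block = max(children[block], key=subtree_weight)
    return block

# B's subtree weighs 4 (B + three uncles) versus A's 2 (A + A1), so
# GHOST follows the adversary's branch despite its shorter chain.
assert ghost_head() == "U1"
```

Since the uncles U1-U3 will all be orphaned yet still tipped the comparison at genesis, cheaply produced equivocations translate directly into fork-choice influence, which is the conceptual incompatibility the first attack builds on.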