IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

16 February 2024

Dung Bui, Eliana Carozza, Geoffroy Couteau, Dahmun Goudarzi, Antoine Joux
ePrint Report
We revisit the construction of signature schemes using the MPC-in-the-head paradigm, focusing in particular on constructions from the regular syndrome decoding assumption, a well-known variant of the syndrome decoding assumption. We obtain two main contributions:

– We observe that previous signatures in the MPC-in-the-head paradigm must rely on a salted version of the GGM puncturable pseudorandom function (PPRF) to avoid collision attacks (the plain GGM construction is sketched after this list). We design a new efficient PPRF construction that is provably secure in the multi-instance setting. The security analysis of our PPRF, in the ideal cipher model, is quite involved and forms a core technical contribution of our work. While previous constructions had to rely on a hash function, our construction uses only a fixed-key block cipher and is considerably more efficient as a result. Our improved PPRF can be used to speed up many MPC-in-the-head signatures, and we illustrate this on two schemes: the recent SDitH (submitted to NIST), and a new signature scheme that we introduce.

– We introduce a new signature scheme from the regular syndrome decoding assumption, based on a new protocol for the MPC-in-the-head paradigm, which significantly reduces communication compared to previous works. Our scheme is conceptually simple, though proving its security requires a delicate and nontrivial combinatorial analysis.
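For context, the GGM PPRF evaluates the leaves of a binary tree in which each node's two children are derived from a length-doubling PRG, and puncturing at a point reveals the seeds of the sibling subtrees along that point's path. Below is a minimal Python sketch using SHA-256 as the PRG; the paper's construction instead builds on a fixed-key block cipher and adds salting, both of which are omitted here.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG: one 16-byte seed -> two 16-byte child seeds."""
    h = hashlib.sha256(seed).digest()
    return h[:16], h[16:]

def ggm_eval(seed: bytes, x: int, depth: int) -> bytes:
    """Evaluate the GGM PRF at leaf x, an integer in [0, 2^depth)."""
    for i in reversed(range(depth)):
        left, right = prg(seed)
        seed = right if (x >> i) & 1 else left
    return seed

def ggm_puncture(seed: bytes, x: int, depth: int) -> list[bytes]:
    """Punctured key: the co-path seeds, enough to evaluate everywhere but x."""
    copath = []
    for i in reversed(range(depth)):
        left, right = prg(seed)
        if (x >> i) & 1:
            copath.append(left)    # sibling subtree off the path to x
            seed = right
        else:
            copath.append(right)
            seed = left
    return copath

def ggm_eval_punctured(copath: list[bytes], x: int, y: int, depth: int) -> bytes:
    """Evaluate at any leaf y != x using only the punctured key."""
    for i in reversed(range(depth)):
        if ((y >> i) & 1) != ((x >> i) & 1):
            # y leaves x's path here; the co-path seed roots y's subtree.
            return ggm_eval(copath[depth - 1 - i], y & ((1 << i) - 1), i)
    raise ValueError("cannot evaluate at the punctured point")

# The punctured key evaluates correctly everywhere except at x.
key, x, depth = b"\x00" * 16, 5, 4
pk = ggm_puncture(key, x, depth)
assert all(ggm_eval_punctured(pk, x, y, depth) == ggm_eval(key, y, depth)
           for y in range(2 ** depth) if y != x)
```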
Diana Ghinea, Chen-Da Liu-Zhang, Roger Wattenhofer
ePrint Report
Byzantine Agreement (BA) allows a set of $n$ parties to agree on a value even when up to $t$ of the parties involved are corrupted. While previous works have shown that, for $\ell$-bit inputs, BA can be achieved with the optimal communication complexity $\mathcal{O}(\ell n)$ for sufficiently large $\ell$, BA only ensures that honest parties agree on a meaningful output when they hold the same input, rendering the primitive inadequate for many real-world applications.

This gave rise to the notion of Convex Agreement (CA), introduced by Vaidya and Garg [PODC'13], which requires the honest parties' outputs to be in the convex hull of the honest inputs. Unfortunately, all existing CA protocols incur a communication complexity of at least $\Omega(\ell n^2)$. In this work, we introduce the first CA protocol with the optimal communication of $\mathcal{O}(\ell n)$ bits for inputs in $\mathbb{Z}$ of size $\ell = \Omega(\kappa \cdot n^2 \log n)$, where $\kappa$ is the security parameter.
Katharina Koschatko, Reinhard Lüftenegger, Christian Rechberger
ePrint Report
Gröbner basis cryptanalysis of hash functions and ciphers, and their underlying permutations, has seen renewed interest recently. Anemoi (Crypto'23) is a permutation-based hash function that is arithmetization-friendly, i.e., efficient for a variety of arithmetizations used in zero-knowledge proofs. In this paper, exploring both theoretical bounds and experimental validation, we present new complexity estimates for Gröbner basis attacks on the Anemoi permutation over prime fields. We cast our findings in what we call the six worlds of Gröbner basis cryptanalysis. As an example, keeping the design's security arguments unchanged, we conclude that at least $23/45$ rounds (instead of $17/33$) would need to be used for $128/256$-bit security, before adding a security margin.
Nir Bitansky, Sapir Freizeit
ePrint Report
Additive randomized encodings (ARE), introduced by Halevi, Ishai, Kushilevitz, and Rabin (CRYPTO 2023), reduce the computation of a $k$-party function $f(x_1, \ldots, x_k)$ to locally computing encodings $\hat{x}_i$ of each input $x_i$ and then adding them together over some Abelian group into an output encoding $\hat{y} = \sum_i \hat{x}_i$, which reveals nothing but the result. In robust ARE (RARE), the sum of any subset of the $\hat{x}_i$ reveals only the residual function obtained by restricting the corresponding inputs. The appeal of (R)ARE comes from the simplicity of the online part of the computation, which involves only addition; this yields, for instance, non-interactive multi-party computation in the shuffle model, where messages from different parties are anonymously shuffled. Halevi, Ishai, Kushilevitz, and Rabin constructed ARE from standard assumptions and RARE in the ideal obfuscation model, leaving open the question of whether RARE can be constructed in the plain model.

We construct RARE in the plain model from indistinguishability obfuscation, which is necessary, and a new primitive that we call pseudo-non-linear codes. We provide two constructions of this primitive, assuming either Learning with Errors or Decisional Diffie-Hellman. A bonus feature of our construction is that it is online succinct. Specifically, the encodings $\hat{x}_i$ can be decomposed into offline parts $\hat{z}_i$ that can be sent directly to the evaluator and short online parts $\hat{g}_i$ that are added together.
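To make the syntax concrete, here is a toy sketch for the special case where $f$ is itself addition over $\mathbb{Z}_q$ (an illustration of the ARE interface only, not the construction in the paper): correlated randomness summing to zero masks each input, and the sum of the encodings reveals exactly $f(x_1, \ldots, x_k)$ and nothing more.

```python
import secrets

q = 2**61 - 1   # a public modulus; the Abelian group is the integers mod q
k = 4           # number of parties

def setup(k: int) -> list[int]:
    """Correlated randomness with r_1 + ... + r_k = 0 (mod q)."""
    r = [secrets.randbelow(q) for _ in range(k - 1)]
    r.append(-sum(r) % q)
    return r

def encode(x_i: int, r_i: int) -> int:
    """Party i's local encoding of its input."""
    return (x_i + r_i) % q

# The online phase involves only addition over the group.
inputs = [13, 7, 42, 100]
r = setup(k)
encodings = [encode(x, ri) for x, ri in zip(inputs, r)]
y_hat = sum(encodings) % q
assert y_hat == sum(inputs) % q   # decodes to f(x_1, ..., x_k)
```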
Mathias Hall-Andersen, Mark Simkin, Benedikt Wagner
ePrint Report
As blockchains like Ethereum continue to grow, clients with limited resources can no longer store the entire chain. Light nodes that want to use the blockchain, without verifying that it is in a good state overall, can just download the block headers without the corresponding block contents. As those light nodes may eventually need some of the block contents, they would like to ensure that the contents are, in principle, available.

Data availability sampling, introduced by Al-Bassam et al., is a process that allows light nodes to check the availability of data without downloading it. In a recent effort, Hall-Andersen, Simkin, and Wagner have introduced formal definitions and analyzed several constructions. While their work thoroughly lays the formal foundations for data availability sampling, the constructions are either prohibitively expensive, use a trusted setup, or have a download complexity for light clients that scales with the square root of the data size.

In this work, we make a significant step forward by proposing an efficient data availability sampling scheme without a trusted setup and with only polylogarithmic overhead. To this end, we find a novel connection with interactive oracle proofs of proximity (IOPPs). Specifically, we prove that any IOPP meeting an additional consistency criterion can be turned into an erasure code commitment, and then, leveraging a compiler due to Hall-Andersen, Simkin, and Wagner, into a data availability sampling scheme. This new connection enables data availability sampling to benefit from future results on IOPPs. We then show that the widely used FRI IOPP satisfies our consistency criterion and demonstrate that the resulting data availability sampling scheme outperforms the state of the art asymptotically and concretely in multiple parameters.
Simon Tollec, Vedad Hadžić, Pascal Nasahl, Mihail Asavoae, Roderick Bloem, Damien Couroussé, Karine Heydemann, Mathieu Jan, Stefan Mangard
ePrint Report
To assess the robustness of CPU-based systems against fault injection attacks, it is necessary to analyze the consequences of the fault propagation resulting from the intricate interaction between the software and the processor. However, current formal methodologies that combine both hardware and software aspects experience scalability issues, primarily due to the use of bounded verification techniques. This work formalizes the notion of $k$-fault resistant partitioning as an inductive solution to this fault propagation problem when assessing redundancy-based hardware countermeasures to fault injections. Proven security guarantees can then reduce the remaining hardware attack surface to consider in a combined analysis with the software, enabling a full co-verification methodology. As a result, we formally verify the robustness of the hardware lockstep countermeasure of the OpenTitan secure element to single bit-flip injections. Besides that, we demonstrate that previously intractable problems, such as analyzing the robustness of OpenTitan running a secure boot process, can now be solved by a co-verification methodology that leverages a $k$-fault resistant partitioning. We also report a potentially exploitable vulnerability in the register file arising in two other software use cases. Finally, we provide a security fix for the register file, verify its robustness, and integrate it into the OpenTitan project.
Adam Blatchley Hansen, Jesper Buus Nielsen, Mark Simkin
ePrint Report
We study blockchain-based provably anonymous payment systems between light clients. Such clients interact with the blockchain through full nodes, who can see what the light clients read and write. The goal of our work is to enable light clients to perform anonymous payments, while maintaining privacy even against the full nodes through which they interact with the blockchain.

We formalize the problem in the universal composability model and present a provably secure solution to it. In comparison to existing works, we are the first to simultaneously provide strong anonymity guarantees, provable security, and anonymity with respect to the full nodes. Along the way, we make several contributions that may be of independent interest.

We define and construct efficient compressible randomness beacons, which produce unpredictable values in regular intervals and allow for storing all published values in a short digest. We define and construct anonymous-coin friendly encryption schemes and show how they can be used within anonymous payment systems. We define and construct strongly oblivious read-once maps, which can be seen as special data structures that need to satisfy a stronger notion of obliviousness than what is usually considered. Finally, we present a new approach, compatible with light clients, for mitigating double-spending attacks in anonymous cryptocurrencies.
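To illustrate just the compressibility aspect of such beacons, the sketch below folds every published value into a constant-size hash-chain digest that anyone can recheck against a claimed history. This is a minimal illustration of the interface, not the construction from the paper.

```python
import hashlib
import secrets

def beacon_value() -> bytes:
    """Stand-in for one unpredictable beacon output per interval."""
    return secrets.token_bytes(32)

def extend(digest: bytes, value: bytes) -> bytes:
    """Fold a newly published value into the constant-size digest."""
    return hashlib.sha256(digest + value).digest()

# The beacon publishes values; clients store only the 32-byte digest.
digest, history = b"\x00" * 32, []
for _ in range(1000):
    v = beacon_value()
    history.append(v)
    digest = extend(digest, v)

# Given the full history, anyone can recheck it against the short digest.
check = b"\x00" * 32
for v in history:
    check = extend(check, v)
assert check == digest
```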
Xiaoyu Ji, Junru Li, Yifan Song
ePrint Report
Secure multiparty computation (MPC) allows a set of $n$ parties to jointly compute a function on their private inputs. In this work, we focus on information-theoretic MPC in the \emph{asynchronous network} setting with optimal resilience ($t < n/3$). A key bottleneck of known asynchronous MPC (AMPC) protocols is the communication cost of the underlying asynchronous complete secret sharing (ACSS) protocol, for which no construction achieving $O(n\kappa)$ bits per sharing was known, where $\kappa$ is the security parameter.
Our work closes this gap by presenting the first ACSS protocol that achieves $O(n\kappa)$ bits per sharing. When combined with the compiler from ACSS to AMPC by Choudhury and Patra [IEEE Trans. Inf. Theory '17], we obtain an AMPC with $O(n^2\kappa)$ bits per multiplication gate, improving the previously best-known result by a factor of $n^2$. Moreover, with a concurrent work that improves the compiler by Choudhury and Patra by a factor of $n$, we obtain the first AMPC with $O(n\kappa)$ bits per multiplication gate.
Alessandro Budroni, Jesús-Javier Chi-Domínguez, Giuseppe D'Alconzo, Antonio J. Di Scala, Mukul Kulkarni
ePrint Report
The Linear Code Equivalence (LCE) Problem has received increased attention in recent years due to its applicability in constructing efficient digital signatures. Notably, the LESS signature scheme based on LCE is under consideration for the NIST post-quantum standardization process, along with the MEDS signature scheme that relies on an extension of LCE to the rank metric, namely the Matrix Code Equivalence (MCE) Problem. Building upon these developments, a family of signatures with additional properties, including linkable ring, group, and threshold signatures, has been proposed. These novel constructions introduce relaxed versions of LCE (and MCE), wherein multiple samples share the same secret equivalence. Despite their significance, these variations have often lacked a thorough security analysis, being assumed to be as hard as their original counterparts. Addressing this gap, our work delves into the sample complexity of LCE and MCE --- precisely, the number of samples sufficient for efficient recovery of the shared secret equivalence. Our findings reveal, for instance, that one should not use the same secret twice in the LCE setting, since this enables a polynomial time (and memory) algorithm that retrieves the secret. Consequently, our results unveil the insecurity of two advanced signatures based on variants of the LCE Problem.
Vipul Goyal, Chen-Da Liu-Zhang, Yifan Song
ePrint Report
Secure multi-party computation (MPC) allows a set of $n$ parties to jointly compute a function over their private inputs. The seminal works of Ben-Or, Canetti and Goldreich [STOC '93] and Ben-Or, Kelmer and Rabin [PODC '94] settled the feasibility of MPC over asynchronous networks. Despite the significant line of work devoted to improving the communication complexity, current protocols with information-theoretic security and optimal resilience $t < n/3$ communicate $\Omega(n^4 C)$ field elements, where $C$ is the size of the circuit.
In this work we make progress towards closing this gap. We provide a novel MPC protocol that makes black-box use of an asynchronous complete secret-sharing (ACSS) protocol, where the cost per multiplication reduces to the cost of distributing a constant number of sharings via ACSS, improving a linear factor over the state of the art by Choudhury and Patra [IEEE Trans. Inf. Theory '17].

Instantiating ACSS with the protocol by Choudhury and Patra [J. Crypto '23], we achieve an MPC protocol with $\mathcal{O}(n^3C)$ communication. Moreover, with a recent concurrent work achieving ACSS with linear cost per sharing, we achieve an MPC protocol with $\mathcal{O}(nC)$ communication.
Yifan Song, Xiaxi Ye
ePrint Report
In this work, we study the communication complexity of perfectly secure MPC protocols with guaranteed output delivery (GOD) against $t=(n-1)/3$ corruptions. The previously best-known result in this setting is due to Goyal, Liu, and Song (CRYPTO 2019), which achieves $O(n)$ communication per gate, where $n$ is the number of parties.

On the other hand, in the honest majority setting, a recent trend in designing efficient MPC protocols is to rely on packed Shamir sharings to speed up the online phase. In particular, the work by Escudero et al. (CCS 2022) gives the first semi-honest protocol that achieves a constant communication overhead per gate across all parties in the online phase while maintaining overall $O(n)$ communication per gate. We thus ask the following question: ``Is it possible to construct a perfectly secure MPC protocol with GOD such that the online communication per gate is $O(1)$ while maintaining overall $O(n)$ communication per gate?''

In this work, we give an affirmative answer by providing an MPC protocol with communication complexity $O(|C|+\mathsf{Depth}\cdot n+n^5)$ elements for the online phase, and $O(|C|\cdot n+\mathsf{Depth}\cdot n^2 + n^4)$ elements for the preprocessing phase, where $|C|$ is the circuit size and $\mathsf{Depth}$ is the circuit depth.
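For background on the main tool, packed Shamir sharing stores a vector of $k$ secrets in a single polynomial of degree $t+k-1$, so one sharing of $n$ field elements carries $k$ values at once; this amortization is what makes $O(1)$ online communication per gate conceivable. The following sketch shows only the data structure, with illustrative parameters, not the protocol from the paper.

```python
import random

p = 2**61 - 1  # a prime field F_p

def interpolate(points, x):
    """Evaluate at x the unique polynomial through the given points (mod p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def packed_share(secrets_vec, n, t):
    """One degree-(t+k-1) polynomial f with f(-j) = s_j; party i holds f(i)."""
    pts = [(-j, s) for j, s in enumerate(secrets_vec, start=1)]
    pts += [(n + j, random.randrange(p)) for j in range(1, t + 1)]  # blinding
    return [interpolate(pts, i) for i in range(1, n + 1)]

def packed_reconstruct(shares, idxs, k):
    pts = list(zip(idxs, shares))
    return [interpolate(pts, -j) for j in range(1, k + 1)]

n, t, k = 8, 2, 3                 # any t + k = 5 shares determine f
secrets_vec = [11, 22, 33]
shares = packed_share(secrets_vec, n, t)
assert packed_reconstruct(shares[:t + k], list(range(1, t + k + 1)), k) == secrets_vec
# Any t shares reveal nothing about the k secrets, and adding two packed
# sharings of the same degree adds the underlying secret vectors slot-wise.
```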
Kaisei Kajita, Go Ohtake, Tsuyoshi Takagi
ePrint Report
Adaptor signatures have attracted attention as a tool to address scalability and interoperability issues in blockchain applications, for example atomic swaps for exchanging different cryptocurrencies. Adaptor signatures can be constructed by extending common digital signature schemes so that they both authenticate a message and disclose a secret witness to a specific party. At Asiacrypt 2021, Aumayr et al. formulated the two-party adaptor signature as an independent cryptographic primitive. In this study, we extend their adaptor signature formulation to an $N$-party adaptor signature scheme, present a generic construction of it, and define the security notions it must satisfy. We then present a concrete construction based on Schnorr signatures and discuss its security properties.
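For readers unfamiliar with the primitive, below is a minimal two-party Schnorr adaptor signature sketch over a toy Schnorr group (the group parameters are illustrative only; this is not the $N$-party construction from the paper). The pre-signature verifies against a statement $T = g^t$, completing it requires the witness $t$, and the witness can be extracted from any pre-/full-signature pair.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with p, q prime; g = 4 has order q mod p.
p, q, g = 1283, 641, 4   # illustrative parameters with no real security

def H(*parts) -> int:
    return int.from_bytes(
        hashlib.sha256(b"|".join(str(x).encode() for x in parts)).digest(),
        "big") % q

def pre_sign(x, P, T, msg):
    """Pre-signature w.r.t. the statement T = g^t."""
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)
    return R, (r + H(R * T % p, P, msg) * x) % q

def pre_verify(P, T, msg, R, s_pre):
    return pow(g, s_pre, p) == R * pow(P, H(R * T % p, P, msg), p) % p

def adapt(s_pre, t):
    return (s_pre + t) % q              # full signature is (R*T mod p, s)

def verify(P, msg, R_full, s):          # plain Schnorr verification
    return pow(g, s, p) == R_full * pow(P, H(R_full, P, msg), p) % p

def extract(s, s_pre):
    return (s - s_pre) % q              # publishing s reveals the witness t

x = secrets.randbelow(q - 1) + 1        # signing key
P = pow(g, x, p)
t = secrets.randbelow(q - 1) + 1        # secret witness
T = pow(g, t, p)
R, s_pre = pre_sign(x, P, T, "swap-tx")
assert pre_verify(P, T, "swap-tx", R, s_pre)
s = adapt(s_pre, t)
assert verify(P, "swap-tx", R * T % p, s)
assert extract(s, s_pre) == t
```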
Nobuyuki Sugio
ePrint Report
Large language models (LLMs), exemplified by the advanced AI tool ChatGPT in 2023, have demonstrated remarkable capabilities in generating sentences, images, and program code, driven by training on extensive datasets. With over 100 million users worldwide, ChatGPT stands out as a leader among LLMs. Previous studies have shown its proficiency in generating program source code for the symmetric-key block ciphers AES, CHAM, and ASCON. This study explores the implementation of cryptanalytic program source code for a lightweight block cipher using ChatGPT, examining its potential and limitations in the field of cryptography.
Hiroki Okada, Tsuyoshi Takagi
ePrint Report
Threshold public key encryption (ThPKE) is PKE that can be decrypted by collecting "partial decryptions" from $t$ ($\leq N$) out of $N$ parties. ThPKE based on the learning with errors problem (LWE) is particularly important because it can be extended to threshold fully homomorphic encryption (ThFHE). ThPKE and ThFHE are fundamental tools for constructing multiparty computation (MPC) protocols: in 2023, NIST initiated a project (NIST IR 8214C) to establish guidelines for implementing threshold cryptosystems. Because MPC often requires simulation-security (SS), ThPKE schemes that satisfy SS (SS-ThPKE) are also important. Recently, Micciancio and Suhl (ePrint 2023/1728) presented an efficient SS-ThPKE scheme based on LWE with a polynomial modulus. However, the scheme requires a nonstandard problem called "known-norm LWE" for the security proof, because the norm $\|e\|$ of the error of the public key is leaked from the partial decryptions. This leads to the following two challenges: 1) the construction based on LWE incurs a security loss of approximately 13 bits for 128-bit security; 2) no construction based on (standard) Ring-LWE has been presented. In this paper, we address both of these challenges: we propose an efficient SS-ThPKE scheme whose security is reduced directly from standard (Ring-)LWE with a polynomial modulus. The core technique of our construction is what we call "error sharing": we distribute shares of a small error $\zeta$ via secret sharing, and use them to prevent leakage of $\|e\|$ from partial decryptions.
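To convey the flavor of "error sharing", the toy sketch below shows only the masking mechanism, not the scheme itself: each party holds an additive share of a fresh small error $\zeta$ and adds it to its partial decryption, so the combined result is shifted by $\zeta$ while no individual output exposes the bare noise.

```python
import secrets

q = 2**32 - 5   # toy modulus, illustrative only
N = 5           # number of parties

def share_error(zeta: int) -> list[int]:
    """Additively share zeta over Z_q: any N-1 shares are uniform."""
    shares = [secrets.randbelow(q) for _ in range(N - 1)]
    shares.append((zeta - sum(shares)) % q)
    return shares

# Stand-ins for the parties' raw partial decryptions <s_i, c> + e_i,
# whose noise terms would otherwise leak information about ||e||.
raw_parts = [100 * (i + 1) for i in range(N)]

zeta = 3                            # a fresh small error
zeta_shares = share_error(zeta)
partial_decs = [(d + z) % q for d, z in zip(raw_parts, zeta_shares)]

# Combining all partial decryptions yields the plain result shifted by zeta.
assert sum(partial_decs) % q == (sum(raw_parts) + zeta) % q
```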
Sönke Jendral
ePrint Report
CRYSTALS-Dilithium is a post-quantum secure digital signature algorithm currently being standardised by NIST. As a result, devices making use of CRYSTALS-Dilithium will soon become generally available and be deployed in various environments. It is thus important to assess the resistance of CRYSTALS-Dilithium implementations to physical attacks. In this paper, we present a fault injection attack on a CRYSTALS-Dilithium implementation in hedged mode on an ARM Cortex-M4. Voltage glitching is performed to skip the computation of a seed during the generation of the signature. We identified settings that consistently skip the desired function without crashing the device. After the successful fault injection, the resulting signature allows for the extraction of the secret key vector. Our attack succeeds with probability 0.582 in a single trace. We also propose countermeasures against the presented attack.
Hao Chung, Tim Roughgarden, Elaine Shi
ePrint Report
Users bid in a transaction fee mechanism (TFM) to get their transactions included and confirmed by a blockchain protocol. Roughgarden (EC'21) initiated the formal treatment of TFMs and proposed three requirements: user incentive compatibility (UIC), miner incentive compatibility (MIC), and a form of collusion-resilience called OCA-proofness. Ethereum's EIP-1559 mechanism satisfies all three properties simultaneously when there is no contention between transactions, but loses the UIC property when there are too many eligible transactions to fit in a single block. Chung and Shi (SODA'23) considered an alternative notion of collusion-resilience, called $c$-side-contract-proofness ($c$-SCP), and showed that, when there is contention between transactions, no TFM can satisfy UIC, MIC, and $c$-SCP for any $c \geq 1$. OCA-proofness asserts that the users and a miner should not be able to "steal from the protocol" and is intuitively weaker than the $c$-SCP condition, which stipulates that a coalition of a miner and a subset of users should not be able to profit through strategic deviations (whether at the expense of the protocol or of the users outside the coalition).

Our main result is the first proof that, when there is contention between transactions, no (possibly randomized) direct-revelation TFM satisfies UIC, MIC, and OCA-proofness. This result resolves the main open question in Roughgarden (EC'21). We also suggest several relaxations of the basic model that allow our impossibility result to be circumvented.
Hart Montgomery, Sikhar Patranabis
ePrint Report
We show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show a new black-box separation result, while also achieving a simpler and more general version of an existing black-box separation result. Concretely, we obtain the following results:

- Two-Party Key Exchange. We show that any two-party noninteractive key exchange protocol is equivalent to the existence of an abelian monoid equipped with a natural hardness property, namely (distributional) unpredictability (see the sketch after this list). More generally, we show that any $k$-round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid with certain commutator-like properties. We then use a generic version of this primitive to show a simpler and more general version of Rudich's (Crypto '91) black-box separation of $k$-round and $(k+1)$-round key exchange.

- Two-Party Computation. We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between $k$-round semi-honest secure two-party computation and $(k+1)$-round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between $k$-round and $(k+1)$-round maliciously secure two-party computation protocols.
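To ground the abstraction, the sketch below casts classical Diffie-Hellman as noninteractive key exchange from a commutative monoid action: the monoid $(\mathbb{Z}_q, \cdot)$ acts on a toy Schnorr group by exponentiation, and commutativity yields key agreement. The parameters are illustrative; the paper's point is that some such structure is implied by any key exchange protocol, not that this instantiation is new.

```python
import secrets

# The abelian monoid (Z_q, *) acts on the group <g> by a . h = h^a mod p.
# Commutativity gives agreement: (a*b) . g = a . (b . g) = b . (a . g).
p, q, g = 1283, 641, 4   # toy parameters with no real security

def act(a: int, h: int) -> int:
    return pow(h, a, p)

a = secrets.randbelow(q - 1) + 1   # Alice's secret monoid element
b = secrets.randbelow(q - 1) + 1   # Bob's secret monoid element
msg_a = act(a, g)                  # the single message each party sends
msg_b = act(b, g)

# Unpredictability of the action is what keeps the shared key secret.
assert act(a, msg_b) == act(b, msg_a)
```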

We believe that modeling cryptographic primitives as mathematical objects (and our approach of using such modeling for black-box separations) may have many other applications in understanding what sort of assumptions and mathematical structure are necessary for certain cryptographic primitives.
Miranda Christ, Sam Gunn
ePrint Report
We construct pseudorandom error-correcting codes (or simply pseudorandom codes), which are error-correcting codes with the property that any polynomial number of codewords are pseudorandom to any computationally bounded adversary. Efficient decoding of corrupted codewords is possible with the help of a decoding key.

We build pseudorandom codes that are robust to substitution and deletion errors, where pseudorandomness rests on standard cryptographic assumptions. Specifically, pseudorandomness is based on either $2^{O(\sqrt{n})}$-hardness of LPN, or polynomial hardness of LPN and the planted XOR problem at low density.

As our primary application of pseudorandom codes, we present an undetectable watermarking scheme for outputs of language models that is robust to cropping and a constant rate of random substitutions and deletions. The watermark is undetectable in the sense that any number of samples of watermarked text are computationally indistinguishable from text output by the original model. This is the first undetectable watermarking scheme that can tolerate a constant rate of errors.

Our second application is to steganography, where a secret message is hidden in innocent-looking content. We present a constant-rate stateless steganography scheme with robustness to a constant rate of substitutions. Ours is the first stateless steganography scheme with provable steganographic security and any robustness to errors.
Manuel Barbosa, Kai Gellert, Julia Hesse, Stanislaw Jarecki
ePrint Report
In the past three decades, an impressive body of knowledge has been built around secure and private password authentication. In particular, secure password-authenticated key exchange (PAKE) protocols require only minimal overhead over a classical Diffie-Hellman key exchange. PAKEs are also known to fulfill strong composable security guarantees that capture many password-specific concerns such as password correlations or password mistyping, to name only a few. However, to enjoy both round-optimality and strong security, applications of PAKE protocols must provide unique session and participant identifiers. If such identifiers are not readily available, they must be agreed upon at the cost of additional communication flows, a fact which has been met with incomprehension among practitioners and has hindered the adoption of provably secure password authentication in practice.

In this work, we resolve this issue by proposing a new paradigm for truly password-only yet securely composable PAKE, called bare PAKE. We formally prove that two prominent PAKE protocols, namely CPace and EKE, can be cast as bare PAKEs and hence do not require pre-agreement of anything other than a password. Our bare PAKE modeling further allows us to investigate a novel "reusability" property of PAKEs, i.e., whether $n^2$ pairwise keys can be exchanged from only $n$ messages, just as the Diffie-Hellman non-interactive key exchange can do in a public-key setting.

As a side contribution, this add-on property of bare PAKEs leads us to observe that some previous PAKE constructions relied on unnecessarily strong, "reusable" building blocks. By showing that "non-reusable" tools suffice for standard PAKE, we open a new path towards round-optimal post-quantum secure password-authenticated key exchange.
Vladimir Shpilrain, Bianca Sosnovski
ePrint Report
Cayley hash functions are based on the simple idea of using a pair of semigroup elements, $A$ and $B$, to hash the bits 0 and 1, respectively, and then hashing an arbitrary bit string in the natural way, using multiplication of elements in the semigroup. The main advantage of Cayley hash functions compared to, say, hash functions in the SHA family is that when an already hashed document is amended, one does not have to hash the whole amended document all over again; rather, one hashes just the amended part and then multiplies the result by the hash of the original document. Some authors have argued that this may be a security hazard, specifically that this property may facilitate finding a second preimage by splitting a long bit string into shorter pieces. In this paper, we offer a way to get rid of this alleged disadvantage while keeping the advantages. We call this method "Cayley hashing with cookies", using terminology borrowed from the theory of random walks in a random environment. For the platform semigroup, we use $2\times 2$ matrices over $\mathbb{F}_p$.
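As a minimal illustration of the idea (with arbitrary example generators, not the parameters from the paper, and without the "cookies" mechanism), the sketch below hashes bits by multiplying $2\times 2$ matrices over $\mathbb{F}_p$; the amendment property is just associativity of matrix multiplication.

```python
p = 2**31 - 1   # toy prime; illustrative parameters only

def matmul(M, N):
    """2x2 matrix product over F_p."""
    return [
        [(M[0][0]*N[0][0] + M[0][1]*N[1][0]) % p,
         (M[0][0]*N[0][1] + M[0][1]*N[1][1]) % p],
        [(M[1][0]*N[0][0] + M[1][1]*N[1][0]) % p,
         (M[1][0]*N[0][1] + M[1][1]*N[1][1]) % p],
    ]

# Hypothetical generators; a real instantiation must choose them carefully
# to resist known attacks on Cayley hash functions.
A = [[1, 2], [0, 1]]   # hashes the bit 0
B = [[1, 0], [2, 1]]   # hashes the bit 1

def cayley_hash(bits: str):
    h = [[1, 0], [0, 1]]   # identity matrix
    for bit in bits:
        h = matmul(h, B if bit == "1" else A)
    return h

# Amending a hashed document only requires hashing the amendment:
doc, amendment = "0110100", "10111"
assert cayley_hash(doc + amendment) == matmul(cayley_hash(doc),
                                              cayley_hash(amendment))
```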