International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

16 February 2024

Hao Chung, Tim Roughgarden, Elaine Shi
ePrint Report
Users bid in a transaction fee mechanism (TFM) to get their transactions included and confirmed by a blockchain protocol. Roughgarden (EC'21) initiated the formal treatment of TFMs and proposed three requirements: user incentive compatibility (UIC), miner incentive compatibility (MIC), and a form of collusion-resilience called OCA-proofness. Ethereum's EIP-1559 mechanism satisfies all three properties simultaneously when there is no contention between transactions, but loses the UIC property when there are too many eligible transactions to fit in a single block. Chung and Shi (SODA'23) considered an alternative notion of collusion-resilience, called c-side-contract-proofness (c-SCP), and showed that, when there is contention between transactions, no TFM can satisfy UIC, MIC, and c-SCP for any $c \geq 1$. OCA-proofness asserts that the users and a miner should not be able to "steal from the protocol" and is intuitively weaker than the c-SCP condition, which stipulates that a coalition of a miner and a subset of users should not be able to profit through strategic deviations (whether at the expense of the protocol or of the users outside the coalition).

Our main result is the first proof that, when there is contention between transactions, no (possibly randomized) direct-revelation TFM satisfies UIC, MIC, and OCA-proofness. This result resolves the main open question in Roughgarden (EC'21). We also suggest several relaxations of the basic model that allow our impossibility result to be circumvented.
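For readers less familiar with the setting, the sketch below illustrates an EIP-1559-style mechanism in the simplest possible terms: each transaction bids a fee, the protocol fixes a base fee that is burned, and the miner keeps only the excess of the included bids. The function and parameter names are illustrative assumptions, not the paper's formal model.

# Toy EIP-1559-style transaction fee mechanism (illustration only).
def run_block(bids, base_fee, capacity):
    eligible = [b for b in bids if b >= base_fee]          # bids below the base fee are excluded
    included = sorted(eligible, reverse=True)[:capacity]   # miner greedily fills the block
    burned = base_fee * len(included)                      # the base fee is burned by the protocol
    miner_revenue = sum(b - base_fee for b in included)    # miner keeps only the tips
    return included, burned, miner_revenue

# Without contention (all eligible transactions fit in the block) truthful bidding is optimal;
# more eligible bids than capacity is exactly the contended regime studied above.
print(run_block([10, 12, 7, 15, 9], base_fee=8, capacity=2))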
Hart Montgomery, Sikhar Patranabis
ePrint Report
We show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show a new black-box separation result, while also achieving a simpler and more general version of an existing black-box separation result. Concretely, we obtain the following results:

- Two-Party Key Exchange. We show that any two-party noninteractive key exchange protocol is equivalent to the existence of an abelian monoid equipped with a natural hardness property, namely (distributional) unpredictability (a toy Diffie-Hellman illustration of this view appears below). More generally, we show that any $k$-round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid with certain commutator-like properties. We then use a generic version of this primitive to show a simpler and more general version of Rudich's (Crypto '91) black-box separation of $k$-round and $(k+1)$-round key exchange.

- Two-Party Computation. We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between $k$-round semi-honest secure two-party computation and $(k+1)$-round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between $k$-round and $(k+1)$-round maliciously secure two-party computation protocols.

We believe that modeling cryptographic primitives as mathematical objects (and our approach of using such modeling for black-box separations) may have many other potential applications and uses in understanding what sort of assumptions and mathematical structure are necessary for certain cryptographic primitives.
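As a point of reference for the first result above (and not the paper's formal definition), classical Diffie-Hellman can already be read as a commutative monoid action: exponents form an abelian monoid under multiplication that acts on group elements by exponentiation. A minimal sketch over a toy group, with all parameters chosen purely for illustration:

# Diffie-Hellman viewed as a commutative monoid action (toy, insecure parameters).
P = 4294967291   # small prime modulus, an illustrative assumption only
G = 5            # fixed base element

def act(exponent, element):
    # The monoid of exponents acts on group elements by exponentiation mod P.
    return pow(element, exponent, P)

a, b = 123456789, 987654321      # Alice's and Bob's secret monoid elements
A, B = act(a, G), act(b, G)      # the two exchanged messages
assert act(a, B) == act(b, A)    # commutativity of the action yields a shared key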
Miranda Christ, Sam Gunn
ePrint Report
We construct pseudorandom error-correcting codes (or simply pseudorandom codes), which are error-correcting codes with the property that any polynomial number of codewords are pseudorandom to any computationally-bounded adversary. Efficient decoding of corrupted codewords is possible with the help of a decoding key.

We build pseudorandom codes that are robust to substitution and deletion errors, where pseudorandomness rests on standard cryptographic assumptions. Specifically, pseudorandomness is based on either $2^{O(\sqrt{n})}$-hardness of LPN, or polynomial hardness of LPN and the planted XOR problem at low density.

As our primary application of pseudorandom codes, we present an undetectable watermarking scheme for outputs of language models that is robust to cropping and a constant rate of random substitutions and deletions. The watermark is undetectable in the sense that any number of samples of watermarked text are computationally indistinguishable from text output by the original model. This is the first undetectable watermarking scheme that can tolerate a constant rate of errors.

Our second application is to steganography, where a secret message is hidden in innocent-looking content. We present a constant-rate stateless steganography scheme with robustness to a constant rate of substitutions. Ours is the first stateless steganography scheme with provable steganographic security and any robustness to errors.
Manuel Barbosa, Kai Gellert, Julia Hesse, Stanislaw Jarecki
ePrint Report
In the past three decades, an impressive body of knowledge has been built around secure and private password authentication. In particular, secure password-authenticated key exchange (PAKE) protocols require only minimal overhead over a classical Diffie-Hellman key exchange. PAKEs are also known to fulfill strong composable security guarantees that capture many password-specific concerns such as password correlations or password mistyping, to name only a few. However, to enjoy both round-optimality and strong security, applications of PAKE protocols must provide unique session and participant identifiers. If such identifiers are not readily available, they must be agreed upon at the cost of additional communication flows, a fact which has been met with incomprehension among practitioners, and which hindered the adoption of provably secure password authentication in practice.

In this work, we resolve this issue by proposing a new paradigm for truly password-only yet securely composable PAKE, called bare PAKE. We formally prove that two prominent PAKE protocols, namely CPace and EKE, can be cast as bare PAKEs and hence do not require pre-agreement of anything else than a password.

Our bare PAKE modeling further allows us to investigate a novel "reusability" property of PAKEs, i.e., whether $n^2$ pairwise keys can be exchanged from only $n$ messages, just as the Diffie-Hellman non-interactive key exchange can do in a public-key setting. As a side contribution, this add-on property of bare PAKEs leads us to observe that some previous PAKE constructions relied on unnecessarily strong, "reusable" building blocks. By showing that "non-reusable" tools suffice for standard PAKE, we open a new path towards round-optimal post-quantum secure password-authenticated key exchange.
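As background for the CPace example, its "password-only" character can be sketched as a Diffie-Hellman exchange whose generator is derived from the password alone, so nothing beyond the password needs to be pre-agreed. The toy modulus and hash-to-group map below are illustrative assumptions, not the actual CPace specification.

# Toy CPace-like flow (sketch only; real CPace uses elliptic-curve groups, a
# proper hash-to-group map, and extra hashing to derive the session key).
import hashlib, secrets

P = 2**127 - 1   # toy prime modulus, an assumption for illustration

def hash_to_group(password: bytes) -> int:
    # Naive stand-in for a hash-to-group map.
    return int.from_bytes(hashlib.sha256(b"cpace" + password).digest(), "big") % P

def flow(password: bytes):
    G = hash_to_group(password)                # generator derived from the password only
    x = secrets.randbelow(P - 2) + 1
    y = secrets.randbelow(P - 2) + 1
    X, Y = pow(G, x, P), pow(G, y, P)          # the only two messages exchanged
    return pow(Y, x, P), pow(X, y, P)          # both parties' raw session keys

k_a, k_b = flow(b"correct horse battery staple")
assert k_a == k_b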
Vladimir Shpilrain, Bianca Sosnovski
ePrint Report
Cayley hash functions are based on a simple idea of using a pair of semigroup elements, $A$ and $B$, to hash the 0 and 1 bit, respectively, and then to hash an arbitrary bit string in the natural way, by using multiplication of elements in the semigroup. The main advantage of Cayley hash functions compared to, say, hash functions in the SHA family is that when an already hashed document is amended, one does not have to hash the whole amended document all over again, but rather hash just the amended part and then multiply the result by the hash of the original document. Some authors argued that this may be a security hazard, specifically that this property may facilitate finding a second preimage by splitting a long bit string into shorter pieces. In this paper, we offer a way to get rid of this alleged disadvantage and keep the advantages at the same time. We call this method "Cayley hashing with cookies", using terminology borrowed from the theory of random walks in a random environment. For the platform semigroup, we use $2\times 2$ matrices over $F_p$.
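A minimal sketch of the basic Cayley-hash idea with $2\times 2$ matrices modulo a prime, including the incremental property described above (toy parameters chosen for illustration; the paper's "cookies" modification is not shown):

# Toy Cayley hash: bit 0 -> matrix A, bit 1 -> matrix B, hash = product of the
# corresponding matrices over F_p. Parameters are illustrative assumptions only.
p = 2**31 - 1
A = ((1, 2), (0, 1))   # hashes the 0 bit
B = ((1, 0), (2, 1))   # hashes the 1 bit

def mat_mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def cayley_hash(bits):
    H = ((1, 0), (0, 1))                       # identity matrix
    for b in bits:
        H = mat_mul(H, A if b == "0" else B)
    return H

# Incremental property: hash(s1 || s2) = hash(s1) * hash(s2), so an appended part
# can be hashed on its own and multiplied onto the stored hash of the original.
s1, s2 = "010011", "1101"
assert cayley_hash(s1 + s2) == mat_mul(cayley_hash(s1), cayley_hash(s2))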
Hyeonbum Lee, Jae Hong Seo
ePrint Report
Nova is a new type of recursive proof system that uses a folding scheme as its core building block. This brilliant idea of folding relations can significantly reduce recursion overhead. In this paper, we study some issues related to Nova's soundness proof, which relies on the soundness of the folding scheme in a recursive manner.
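For context, a single folding step in Nova (recalled here as background; notation may differ slightly from the original paper) combines two relaxed R1CS instance-witness pairs $(E_1, u_1, W_1, \mathsf{x}_1)$ and $(E_2, u_2, W_2, \mathsf{x}_2)$, each satisfying $Az \circ Bz = u \cdot Cz + E$, into a single pair: the prover sends the cross term $T = Az_1 \circ Bz_2 + Az_2 \circ Bz_1 - u_1 \cdot Cz_2 - u_2 \cdot Cz_1$, the verifier samples a challenge $r$, and both sides set $E \leftarrow E_1 + r \cdot T + r^2 \cdot E_2$, $u \leftarrow u_1 + r \cdot u_2$, $W \leftarrow W_1 + r \cdot W_2$, $\mathsf{x} \leftarrow \mathsf{x}_1 + r \cdot \mathsf{x}_2$. Extracting a witness from a folded instance requires rewinding over several challenges, which is one source of the recursive extraction overhead discussed next.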

First, its proof strategy, due to its recursive nature, inevitably expands the running time of the recursive extractor polynomially at each additional recursive step. This constrains Nova's soundness model to logarithmically many recursive steps, so the soundness proof in this limited model does not guarantee soundness for a number of rounds that is linear in the security parameter, e.g., 128 rounds for 128-bit security. On the other hand, there are no known attacks on arbitrary-depth recursion of Nova, leaving a gap between theoretical security guarantees and real-world attacks. We aim to bridge this gap in two opposite directions. In a negative direction, we present a recursive proof system that is unforgeable in the log-round model but forgeable if used for linearly many rounds. This shows that a soundness proof in the log-round model may say nothing about real-world uses that require linearly many rounds. In a positive direction, we show that when Nova uses a specific group-based folding scheme, its knowledge soundness over polynomially many rounds can be proven in the algebraic group model with our refinements. To the best of our knowledge, this is the first result to establish Nova's soundness for polynomially many rounds.

Second, the folding scheme is made non-interactive via the Fiat-Shamir transformation and then arithmetized into R1CS. Therefore, the soundness of Nova using the non-interactive folding scheme essentially relies on a heuristic random oracle instantiation in the standard model. In our new soundness proof for Nova in the algebraic group model, we replace this heuristic with a cryptographic hash function with a special property. We view this hash function as an independent object of interest and expect it to help further understand the soundness of Nova.
Eugene Frimpong, Alexandros Bakas, Camille Foucault, Antonis Michalas
ePrint Report
Functional Encryption (FE) is a cutting-edge cryptographic technique that enables a user with a specific functional decryption key to determine a certain function of encrypted data without gaining access to the underlying data. Given its potential and the fact that FE is still a relatively new field, we set out to investigate how it could be applied to resource-constrained environments. This work presents what we believe to be the first lightweight FE scheme explicitly designed for resource-constrained devices. We also propose a use case protocol that demonstrates how our scheme can secure an Internet of Things (IoT) architecture where relevant devices collect data and securely deliver them to a storage server, where an analyst can request access to the encrypted data. Finally, we conduct thorough experiments on two commercially available resource-constrained devices to provide compelling evidence of our approach's practicality and efficiency. Although the results of our evaluations show that there is room for improvement in the proposed scheme, this work represents one of the first attempts to apply FE to the IoT setting that can directly impact people's daily lives and the everyday operations of organizations.
Seongtaek Chee, Kyung Chul Jeong, Tanja Lange, Nari Lee, Alex Pellegrini, Hansol Ryu
ePrint Report
We analyse Layered ROLLO-I, a code-based cryptosystem submitted to the Korean post-quantum cryptography competition, of which four versions have been proposed. We show that the first two versions do not provide the claimed security against rank decoding attacks and give reductions to small instances of ROLLO-I for which such attacks are even more effective. Finally, we provide two efficient message recovery attacks, affecting every security level of the first three versions of Layered ROLLO-I and security levels 128 and 192 of the fourth version.
Changrui Mu, Shafik Nassar, Ron D. Rothblum, Prashant Nalini Vasudevan
ePrint Report
A zero-knowledge proof enables a prover to convince a verifier that $x \in S$, without revealing anything beyond this fact. By running a zero-knowledge proof $k$ times, it is possible to prove (still in zero-knowledge) that $k$ separate instances $x_1,\dots,x_k$ are all in $S$. However, this increases the communication by a factor of $k$. Can one do better? In other words, is (non-trivial) zero-knowledge batch verification for $S$ possible?

Recent works by Kaslasi et al. (TCC 2020, Eurocrypt 2021) show that any problem possessing a non-interactive statistical zero-knowledge proof (NISZK) has a non-trivial statistical zero-knowledge batch verification protocol. Their results had two major limitations: (1) to batch verify $k$ inputs of size $n$ each, the communication in their batch protocol is roughly $\textrm{poly}(n,\log{k})+O(k)$, which is better than the naive cost of $k \cdot \textrm{poly}(n)$ but still scales linearly with $k$, and, (2) the batch protocol requires $\Omega(k)$ rounds of interaction.

In this work, we remove both of these limitations by showing that any problem in NISZK has a non-interactive statistical zero-knowledge batch verification protocol with communication $\textrm{poly}(n,\log{k})$.
Ren Taguchi, Atsushi Takayasu
ePrint Report
Thus far, several papers have estimated the concrete quantum resources of Shor's algorithm for solving the binary elliptic curve discrete logarithm problem. In particular, the complexity of computing quantum inversions over a binary field $\mathbb{F}_{2^n}$ is dominant when running the algorithm, where $n$ is the degree of the binary elliptic curve. There are two major methods for quantum inversion, i.e., the quantum GCD-based inversion and the quantum FLT-based inversion. Among them, the latter is known to require more qubits; however, it is valuable since it requires far fewer Toffoli gates and less depth. When $n = 571$, Kim-Hong's quantum GCD-based inversion algorithm (Quantum Information Processing 2023) and Taguchi-Takayasu's quantum FLT-based inversion algorithm (CT-RSA 2023) require 3,473 qubits and 8,566 qubits, respectively. In contrast, for the same $n = 571$, the latter algorithm requires only 2.3% of the Toffoli gates and 84% of the depth of the former. In this paper, we modify Taguchi-Takayasu's quantum FLT-based inversion algorithm to reduce the required qubits. While Taguchi-Takayasu's FLT-based inversion algorithm takes an addition chain for $n-1$ as input and stores a sequence whose length is the same as that of the chain, our proposed algorithm employs an uncomputation step and stores a shorter one. As a result, our proposed algorithm requires only 3,998 qubits for $n = 571$, which is only 15% more than Kim-Hong's GCD-based inversion algorithm. Furthermore, our proposed algorithm preserves the advantage of FLT-based inversion since it requires only 3.7% of the Toffoli gates and 77% of the depth of Kim-Hong's GCD-based inversion algorithm for $n = 571$.
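For orientation, the classical counterpart of FLT-based inversion simply computes $a^{-1} = a^{2^n-2}$ in $\mathbb{F}_{2^n}$ by square-and-multiply (optimized variants walk an addition chain for $n-1$). The toy field below ($n = 8$ with the AES polynomial) is an illustrative assumption, unrelated to the curve sizes above.

# Classical FLT inversion in a toy binary field GF(2^8) (illustration only).
M = 8
IRRED = 0x11B   # x^8 + x^4 + x^3 + x + 1, the AES reduction polynomial

def gf_mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= IRRED    # reduce modulo the irreducible polynomial
    return r

def gf_inv_flt(a):
    # a^(2^M - 2) via square-and-multiply; an addition chain would use fewer multiplications.
    result, base, exp = 1, a, (1 << M) - 2
    while exp:
        if exp & 1:
            result = gf_mul(result, base)
        base = gf_mul(base, base)
        exp >>= 1
    return result

assert gf_mul(0x53, gf_inv_flt(0x53)) == 1   # 0x53 and 0xCA are inverses in the AES field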
Surya Mathialagan, Spencer Peters, Vinod Vaikuntanathan
ePrint Report
We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm. Our main results are as follows.

(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.

(2) A generic transformation that lifts any "Sahai-Waters-like" (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.

(3) A generic transformation that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.

Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (sub-exponentially) falsifiable assumptions.
Keita Emura, Shingo Sato, Atsushi Takayasu
ePrint Report
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption, where only users who have a homomorphic evaluation key can perform a homomorphic evaluation. KHPKE satisfies CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies CCA1 security against users who do. Thus far, several KHPKE schemes have been proposed under the standard Diffie-Hellman-type assumptions, and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices. As a natural extension, there is an identity-based variant of KHPKE; however, its security is based on a $q$-type assumption and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes due to the complex design of the known KFHE schemes. In this paper, we obtain two results for constructing the attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since its design is conceptually much simpler than that of known KFHE schemes, we simply replace its building blocks with attribute-based ones and obtain the proposed ABKFHE scheme. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes, including the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Yilei Chen, Xinyu Mao
ePrint Report
Universal computational extractors (UCEs), introduced by Bellare, Hoang, and Keelveedhi [BHK13], can securely replace random oracles in various applications, including KDM-secure encryption, deterministic encryption, RSA-OAEP, etc. Despite their usefulness, constructing UCEs in the standard model is challenging. The only known positive result is given by Brzuska and Mittelbach [BM14], who construct UCE with strongly computationally unpredictable one-query sources assuming indistinguishability obfuscation (iO) and the existence of point obfuscators with auxiliary input (AIPO); they also construct UCE with $q$-query sources assuming iO and composable AIPO. On the other hand, Brzuska, Farshim, and Mittelbach [BFM14] show that the most potent version of UCE does not exist, assuming the existence of iO.

In this paper, we construct UCE with strongly computationally unpredictable one-query sources from lattice assumptions based on the GGH15 encodings [GGH15], without using iO. Security is proven under the following assumptions: (1) LWE with subexponential hardness; (2) evasive LWE, a new assumption proposed by Wee [Wee22]; (3) the existence of AIPO in NC1. Our UCE directly implies a universal hardcore function that outputs a polynomial number of bits, giving the first lattice-based universal hardcore function without using iO. We also put forth a new primitive called obliviously programmable function as an intermediate abstraction; it makes our analysis more modular and could be of independent interest.
Nir Bitansky, Nathan Geier
ePrint Report
In an (α,β)-weak non-interactive zero knowledge (NIZK), the soundness error is at most α and the zero-knowledge error is at most β. Goyal, Jain, and Sahai (CRYPTO 2019) show that if α+β<1 for some constants α,β, then (α,β)-weak NIZK can be turned into fully-secure NIZK, assuming sub-exponentially-secure public-key encryption. We revisit the problem of NIZK amplification:

– We amplify NIZK arguments assuming only polynomially-secure public-key encryption, for any constants α+β<1.

– We amplify NIZK proofs assuming only one-way functions, for any constants α+β<1.

– When the soundness error α is negligible to begin with, we can also amplify NIZK arguments assuming only one-way functions.

Our results are based on the hidden-bits paradigm, and can be viewed as a reduction from NIZK amplification to the better understood problem of pseudorandomness amplification.
Sri Aravinda Krishnan Thyagarajan, Ke Wu, Pratik Soni
ePrint Report
Cleve's celebrated result (STOC'86) showed that a strongly fair multi-party coin-toss is impossible in the presence of majority-sized coalitions. Recently, however, a fascinating line of work studied a relaxed fairness notion called \emph{game-theoretic fairness}, which guarantees that no coalition should be incentivized to deviate from the prescribed protocol. A sequence of works has explored the feasibility of game-theoretic fairness for \emph{two-sided} coin-toss, and indeed demonstrated feasibility in the dishonest majority setting under standard cryptographic assumptions. In fact, the recent work of Wu, Asharov, and Shi (EUROCRYPT'22) completely characterized the regime where game-theoretic fairness is feasible. However, this line of work is largely restricted to two-sided coin-toss, and more precisely on a \emph{uniform} coin-toss (i.e., Bernoulli with parameter $1/2$). The only exceptions are the works on game-theoretically fair leader election, which can be viewed as a special case of uniform $n$-sided coin-toss where $n$ is the number of parties.

In this work, we \emph{initiate} the comprehensive study of game-theoretic fairness for multi-party \emph{sampling from general distributions}. In particular, for the case of $m$-sided \emph{uniform} coin-toss we give a nearly complete characterization of the regime in which game-theoretic fairness is feasible. Interestingly, contrary to standard fairness notions in cryptography, the composition of game-theoretically fair two-sided coin-toss protocols does not necessarily yield game-theoretically fair multi-sided coins. To circumvent this, we introduce new techniques compatible with game-theoretic fairness. In particular, we give the following results:

- We give a protocol from standard cryptographic assumptions that achieves game-theoretic fairness for uniform $m$-sided coin-toss against adversarial coalitions comprising half or more of the parties.

- To complement our protocol, we give a general impossibility result that establishes the optimality of our protocol for a broad range of parameters, modulo an additive constant. Even in the worst case, the gap between our protocol and our impossibility result is only a small constant multiplicative factor.

- We also present a game-theoretically fair protocol for \emph{any} efficiently sampleable $m$-outcome distribution in the dishonest majority setting. For instance, even for the case of $m=2$ (i.e., two-sided coin-toss), our result implies a game-theoretically fair protocol for an \emph{arbitrary} Bernoulli coin. In contrast, the work of Wu, Asharov, and Shi only focused on a Bernoulli coin with parameter $1/2$.
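For background only (this is the classic two-party baseline, not the paper's construction), a two-sided coin toss is typically realized by commit-then-reveal; the results above concern many parties, many outcomes, and game-theoretic rather than strong fairness. The hash-based commitment below is an illustrative assumption.

# Classic commit-then-reveal two-party coin toss (background sketch only).
import hashlib, secrets

def commit(bit: int):
    r = secrets.token_bytes(16)
    return hashlib.sha256(bytes([bit]) + r).hexdigest(), (bit, r)

a_bit = secrets.randbelow(2)
com, opening = commit(a_bit)            # Alice commits first
b_bit = secrets.randbelow(2)            # Bob then announces his bit in the clear
bit, r = opening                        # Alice opens her commitment
assert hashlib.sha256(bytes([bit]) + r).hexdigest() == com
coin = bit ^ b_bit                      # the outcome is the XOR of the two bits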
Clémence Chevignard, Pierre-Alain Fouque, André Schrottenloher
ePrint Report
This paper focuses on the optimization of the number of logical qubits in Shor's quantum factoring algorithm. As in previous works, we target the implementation of the modular exponentiation, which is the most costly component of the algorithm, both in qubits and operations.

In this paper, we show that using only $o(n)$ work qubits, one can obtain the first bit of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to obtain a quantum factoring algorithm requiring only $n/2 + o(n)$ qubits in the case of an $n$-bit RSA modulus, while current envisioned implementations require about $2n$ qubits.

Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. As it is completely classical, we have implemented and tested it. Among possible trade-offs, we can reach a gate count of $\mathcal{O}(n^3)$ for a depth of $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). Preliminary logical resource estimates suggest that this circuit could be engineered to use less than 1700 qubits and $2^{36}$ Toffoli gates, and would require 60 independent runs to factor an RSA-2048 instance.
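Since the classical component above relies on a Residue Number System (RNS), a brief reminder of the idea may help: an integer is represented by its residues modulo pairwise-coprime moduli, arithmetic is performed independently in each residue channel, and the value is recovered via the Chinese Remainder Theorem. The moduli below are illustrative assumptions, not those used in the paper.

# Toy Residue Number System (RNS) arithmetic (illustration only).
from math import prod

MODULI = (251, 241, 239, 233)                  # pairwise coprime
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(res):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

x, y = 123456, 7890
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M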
Dimitris Mouris, Christopher Patton, Hannah Davis, Pratik Sarkar, Nektarios Georgios Tsoutsos
ePrint Report
Insight into user experience and behavior is critical to the success of large software systems and web services. Yet gaining such insights, while preserving user privacy, is a significant challenge. Recent advancements in multi-party computation have made it practical to compute verifiable aggregates over secret shared data. One important use case for these protocols is heavy hitters, where the servers compute the most popular inputs held by the users without learning the inputs themselves. The Poplar protocol (IEEE S&P 2021) focuses on this use case, but cannot support other aggregation tasks. Another such protocol, Prio (NSDI 2017), supports a wider variety of statistics but is unsuitable for heavy hitters.

We introduce Mastic, a flexible protocol for private and verifiable general-purpose statistics based on function secret sharing and zero-knowledge proofs on secret-shared data. Mastic is the first to solve the more general problem of weighted heavy hitters, enabling new use cases not supported by Prio or Poplar. In addition, Mastic allows grouping general-purpose metrics by user attributes, such as their geographic location or browser version, without sacrificing privacy or incurring high performance costs, which is a major improvement over Prio. We demonstrate Mastic's benefits with two real-world applications, private network error logging and browser telemetry, and compare our protocol with Prio and Poplar on a wide area network. Overall, we report over an order of magnitude performance improvement over Poplar for heavy hitters and a $1.5$-$2\times$ improvement over Prio for attribute-based metrics.
John Preuß Mattsson
ePrint Report
One-way key chains or ratchets play a vital role in numerous important security protocols such as TLS 1.3, QUIC, Signal, MLS, EDHOC, and OSCORE. Despite the crucial role they play, very little is known about their security properties. This paper categorizes and examines different key chain constructions, offering a comprehensive overview of their security. Our analysis reveals notable distinctions among the number of collisions occurring within chains, between chains, and between a chain and a random set. Notably, the type of key chain used in protocols such as TLS 1.3 and Signal exhibits a significant number of weak keys, an unexpectedly high rate of key collisions surpassing birthday attack expectations, and a predictable shrinking key space susceptible to novel Time-Memory Trade-Off (TMTO) attacks with complexity $\le N^{1/3}$, which is well within the capabilities of current supercomputers and distributed systems. Consequently, the security level provided by e.g., TLS 1.3 is significantly lower than anticipated based on key sizes. To address these concerns, we analyze the aforementioned protocols and provide numerous concrete recommendations for enhancing their security, as well as guidance for future security protocol design.
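For concreteness, the simplest form of one-way key chain discussed above just hashes the previous key to obtain the next one; truncating the output to the protocol's key length is what makes the reachable key space shrink as the chain is iterated. A minimal sketch, with SHA-256 standing in for the protocols' actual key-derivation functions:

# One-way key chain k_{i+1} = H(k_i) (illustration only; real protocols use
# labeled HKDF-style derivation rather than raw hashing).
import hashlib, secrets

def key_chain(seed: bytes, length: int, key_bytes: int = 16):
    keys, k = [], seed
    for _ in range(length):
        k = hashlib.sha256(k).digest()[:key_bytes]   # truncation shrinks the image
        keys.append(k)
    return keys

chain = key_chain(secrets.token_bytes(16), length=5)
# Iterating a function on a finite set loses entropy over time: once two chains
# collide on any key, they coincide forever, which is the effect analyzed above.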
Pierre Pébereau
ePrint Report
In this work, we study the singular locus of the varieties defined by the public keys of UOV and VOX, two multivariate quadratic signature schemes submitted to the additional NIST call for signature schemes. Singular points do not exist for generic quadratic systems, which enables us to introduce two new algebraic attacks against UOV-based schemes. We show that they can be seen as an algebraic variant of the Kipnis-Shamir attack, which can be obtained in our framework as an enumerative approach to solving a bihomogeneous modeling of the computation of singular points. This allows us to highlight some heuristics implicitly relied upon by the Kipnis-Shamir attack.

We give new attacks for UOV$^{\hat +}$ and VOX targeting singular points of the public key equations. Our attacks lower the security of the schemes, both asymptotically and in the number of gates, showing in particular that the parameter sets proposed for these schemes do not meet the NIST security requirements. More precisely, we show that the security of UOV$^{\hat +}$ was overestimated by factors of $2^{22}, 2^{36}, 2^{59}$ for security levels $I, III, V$, respectively. We conclude the attack on VOX by showing that an attacker can perform a full key recovery from one vector obtained in the previous attacks.
Mustafa Khairallah, Srinivasan Yadhunathan, Shivam Bhasin
ePrint Report
In this paper, we propose a leakage-resilient pseudo-random number generator (PRNG) design that leverages the rekeying techniques of the PSV-Enc encryption scheme and the superposition property of the Superposition-Tweak-Key (STK) framework. The random seed of the PRNG is divided into two parts; one part is used as an ephemeral key that changes every two calls to a tweakable block cipher (TBC), and the other part is used as a static long-term key. Using the superposition property, we show that it is possible to eliminate observable leakage by masking only the static key. Thus, our proposal itself can be seen as a superposition of masking and rekeying. We show that our observations can be used to design an unpredictable-with-leakage PRNG as long as the static key is protected and the ephemeral key cannot be attacked with 2 traces. Our construction enjoys better theoretical security arguments than PSV-Enc: a better time-data trade-off and better leakage assumptions, using the recently popularized notion of unpredictability with leakage. We verify our proposal by performing Test Vector Leakage Assessment (TVLA) on an STK-based TBC (Deoxys) operated with a fixed key and a dynamic random tweak. Our results show that while the protection of the static key is non-trivial, it only requires $\approx 10\%$ overhead for first-order protection in the most conservative setting, unlike traditional masking, which may require significant overheads of $300\%$ or more.
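As a rough point of reference (a generic PSV-style rekeying pattern, not the paper's STK-based superposition design), a rekeying PRNG refreshes its working key on every call so that each key is only ever applied to a couple of fixed inputs, which is what limits the side-channel exposure per key. Hashing stands in for the tweakable block cipher below, and all constants are assumptions for illustration.

# Generic rekeying PRNG sketch (illustration only).
import hashlib

def prf(key: bytes, block: bytes) -> bytes:
    return hashlib.sha256(key + block).digest()

def rekeying_prng(seed: bytes, n_blocks: int):
    k = seed
    for _ in range(n_blocks):
        out = prf(k, b"\x01")   # output block: the key is used on one fixed input
        k = prf(k, b"\x00")     # next ephemeral key: a second fixed input, old key discarded
        yield out

stream = b"".join(rekeying_prng(b"\x00" * 32, 4))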