International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 May 2024

Gennady Khalimov, Yevgen Kotukh, Maksym Kolisnyk, Svitlana Khalimova, Oleksandr Sievierinov
ePrint Report
This paper presents a directional encryption cryptosystem based on logarithmic signatures connected via a system of linear equations, which we call LINE. A logarithmic signature serves as the foundational cryptographic primitive of the algorithm, with distinct cryptographic attributes including nonlinearity, noncommutativity, one-wayness, and factorizability by key. The confidentiality of the cryptosystem rests on the presence of an incomplete system of equations and the substantial ambiguity of the matrix transformations used in the algorithm. Classical cryptanalysis is constrained by the strength of the secret matrix transformation and the indeterminacy of solutions to the system of linear equations featuring logarithmic signatures; such cryptanalysis is exhaustive in nature and invariably exhibits exponential complexity. Because the algorithm performs no group computations, group properties associated with the periodicity of group elements cannot be exploited, which reduces quantum cryptanalysis to Grover's search algorithm. LINE, being based on an incomplete system of linear equations, supports the security levels 1 through 5 stipulated by NIST and thus presents a promising candidate for the construction of post-quantum cryptosystems.
Victor Shoup
ePrint Report
The Asynchronous Common Subset (ACS) problem is a fundamental problem in distributed computing. Very recently, Das et al. (2024) developed a new ACS protocol with several desirable properties: (i) it provides optimal resilience, tolerating up to $t < n/3$ corrupt parties out of $n$ parties in total; (ii) it does not rely on a trusted setup; (iii) it utilizes only "lightweight" cryptography, which can be instantiated using just a hash function; and (iv) it has expected round complexity $O(1)$ and expected communication complexity $O(\kappa n^3)$, where $\kappa$ is the output length of the hash function. The purpose of this paper is to give a detailed, self-contained exposition and analysis of this protocol from the point of view of modern theoretical cryptography, fleshing out a number of details of the definitions and proofs, providing a complete security analysis based on concrete security assumptions on the hash function (i.e., without relying on random oracles), and developing all of the underlying theory in the universal composability framework.
Richard Wassmer
ePrint Report
This paper's purpose is to give a new method of analyzing Beale Cipher 1 and Cipher 3 and to show that there is no key which will decipher them into sentences. Previous research has largely used statistical methods either to decipher them or to prove they have no solution; some of these methods show a high probability, but not certainty, that they are unsolvable. Both ciphers remain unsolved. The methods used in this paper are not statistical ones based on thousands of samples. The evidence given here shows a high correlation between the locations of certain numbers in the ciphers and locations in the written text that accompanied these ciphers in the 1885 pamphlet called "The Beale Papers". This evidence is corroborated by a long monotonically increasing Gillogly string in Cipher 1 when it is translated with the Declaration of Independence given in the pamphlet. The writer of The Beale Papers was anonymous, and words in the pamphlet's three written letters are compared with locations of numbers in the ciphers to show who the writer was. Emphasis is on numbers which are controllable by the encipherer; letter-location sums are used when they are the most plausible ones found. The evidence supports the statement that Cipher 1 and Cipher 3 are unintelligible, and that they were designed to have no intelligible sentences because they were part of a complex game made by the anonymous writer of The Beale Papers.
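The book-cipher mechanism at issue, in which each number indexes a word of a key text and the plaintext is read off from the words' first letters, can be sketched as follows. The key text and cipher numbers below are invented for illustration and are not drawn from the Beale ciphers:

```python
def decode_book_cipher(numbers, key_text):
    """Decode a book cipher: each number selects a word (1-indexed)
    of the key text; the plaintext is that word's first letter."""
    words = key_text.split()
    return "".join(words[n - 1][0] for n in numbers)

# Hypothetical key text and cipher numbers, for illustration only.
key = "when in the course of human events it becomes necessary"
cipher = [7, 4, 6, 5]
print(decode_book_cipher(cipher, key))  # → "echo"
```

A Gillogly string arises when the same translation applied to Cipher 1 yields long alphabetically increasing runs of letters rather than sentences.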
Jesko Dujmovic, Mohammad Hajiabadi
ePrint Report
Private information retrieval (PIR) is a fundamental cryptographic primitive that allows a user to fetch a database entry without revealing to the server which database entry it learns. PIR becomes non-trivial if the server communication is less than the database size. We show that building (even) very weak forms of single-server PIR protocols, without pre-processing, requires the number of public-key operations to scale linearly in the database size. This holds irrespective of the number of symmetric-key operations performed by the parties. We then use this bound to examine the related problem of communication-efficient oblivious transfer (OT) extension. Oblivious transfer is a crucial building block in secure multi-party computation (MPC). In most MPC protocols, OT invocations are the main bottleneck in terms of computation and communication. OT extension techniques allow one to minimize the number of public-key operations in MPC protocols. One drawback of all existing OT extension protocols is their communication overhead. In particular, the sender's communication is roughly double what is information-theoretically optimal. We show that OT extension with close to optimal sender communication is impossible, illustrating that the communication overhead is inherent. Our techniques go much further: we can show many lower bounds on communication-efficient MPC. For example, we prove that to build high-rate string OT from generic groups, the sender needs to perform linearly many group operations.
Pierre Briaud
ePrint Report
Recently, [eprint/2024/250] and [eprint/2024/347] proposed two algebraic attacks on the $\mathsf{Anemoi}$ permutation [Crypto '23]. In this note, we construct a Gröbner basis for the ideal generated by the naive modeling of the $\mathsf{CICO}$ problem associated to $\mathsf{Anemoi}$, in odd and in even characteristics, for one and several branches. We also infer the degree of the ideal from this Gröbner basis, while previous works relied on upper bounds.

06 May 2024

Lukas Aumayr, Zeta Avarikioti, Matteo Maffei, Giulia Scaffino, Dionysis Zindros
ePrint Report
Designing light clients for Proof-of-Work blockchains has been a foundational problem since Nakamoto's SPV construction in the Bitcoin paper. Over the years, communication was reduced from O(C) down to O(polylog(C)), where C is the system's lifetime. We present Blink, the first provably secure O(1) light client that does not require a trusted setup.
Alex Charlès, Aleksei Udovenko
ePrint Report
This work proposes a new white-box attack technique called filtering, which can be combined with any other trace-based attack method. The idea is to filter the traces based on the value of an intermediate variable in the implementation, aiming to fix a share of a sensitive value and degrade the security of an involved masking scheme.

Coupled with LDA (filtered LDA, FLDA), it leads to an attack defeating the state-of-the-art SEL masking scheme (CHES 2021) of arbitrary degree and number of linear shares with quartic complexity in the window size. In comparison, the current best attacks have exponential complexities in the degree (higher degree decoding analysis, HDDA), in the number of linear shares (higher-order differential computation analysis, HODCA), or the window size (white-box learning parity with noise, WBLPN).

The attack exploits the key idea of the SEL scheme: an efficient parallel combination of the nonlinear and linear masking schemes. We conclude that a proper composition of masking schemes is essential for security.

In addition, we propose several optimizations for linear algebraic attacks: redundant node removal (RNR), optimized parity check matrix usage, and chosen-plaintext filtering (CPF), significantly improving the performance of security evaluation of white-box implementations.
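To illustrate the filtering idea on a toy example (a first-order Boolean masking of a single bit, which is a simplification and not the SEL scheme targeted above): each share is individually uniform, but keeping only the traces in which one intermediate share takes a fixed value collapses the masking and exposes the sensitive bit to a trace-based attack.

```python
import random

random.seed(42)
secret_bit = 1

# Simulated traces of a first-order Boolean masking: the sensitive
# bit s is split into shares (a, b) with s = a XOR b.
traces = []
for _ in range(1000):
    b = random.getrandbits(1)
    a = secret_bit ^ b
    traces.append({"a": a, "b": b})

# Unfiltered, share 'a' is uniform and carries no information about s.
mean_a = sum(t["a"] for t in traces) / len(traces)
print(0.4 < mean_a < 0.6)  # True: 'a' looks like a fair coin

# Filtering on the intermediate value b == 0 fixes one share:
# in every kept trace, a == s, so a trace-based attack now succeeds.
kept = [t for t in traces if t["b"] == 0]
print(all(t["a"] == secret_bit for t in kept))  # True
```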
Alex Charlès, Aleksei Udovenko
ePrint Report
In white-box cryptography, early protection techniques have fallen to the automated Differential Computation Analysis attack (DCA), leading to new countermeasures and attacks. A standard side-channel countermeasure, Ishai-Sahai-Wagner's masking scheme (ISW, CRYPTO 2003), prevents Differential Computation Analysis but was shown to be vulnerable in the white-box context to the Linear Decoding Analysis attack (LDA). However, recent quadratic and cubic masking schemes by Biryukov-Udovenko (ASIACRYPT 2018) and Seker-Eisenbarth-Liskiewicz (CHES 2021) prevent LDA and force attackers to use its higher-degree generalizations, which have much higher complexity.

In this work, we study the relationship between the security of these and related schemes and the Learning Parity with Noise (LPN) problem, and propose a new automated attack by applying an LPN-solving algorithm to white-box implementations. The attack effectively exploits strong linear approximations of the masking scheme and thus can be seen as a combination of the DCA and LDA techniques. Unlike previous attacks, the complexity of this algorithm depends on the approximation error, thereby allowing new practical attacks on masking schemes that previously resisted automated analysis. We demonstrate this theoretically and experimentally, exposing multiple cases where the LPN-based method significantly outperforms the LDA and DCA methods, including their higher-order variants.

This work applies the LPN problem beyond its usual post-quantum cryptography boundary, strengthening its relevance to the cryptographic community, and expands the range of automated attacks by presenting a new direction for breaking masking schemes in the white-box model.
Elijah Pelofske, Vincent Urias, Lorie M. Liebrock
ePrint Report
Generative pre-trained transformers (GPTs) are a type of large language model that is unusually adept at producing novel, coherent natural language. Notably, these technologies have also been extended to computer programming languages with great success. However, GPT model outputs in general are stochastic and not always correct. For programming languages, the exact specification of the computer code, syntactically and algorithmically, is strictly required in order to ensure the security of computing systems and applications. Therefore, using GPT models to generate computer code poses an important security risk, while at the same time allowing for potential innovation in how computer code is generated. In this study, the ability of GPT models to generate novel and correct versions, and notably very insecure versions, of implementations of the cryptographic hash function SHA-1 is examined. The GPT models Llama-2-70b-chat-hf, Mistral-7B-Instruct-v0.1, and zephyr-7b-alpha are used. The GPT models are prompted to re-write each function using a modified version of the localGPT framework and langchain to provide word embedding context of the full source code and header files to the model, resulting in over $130,000$ function re-write GPT output text blocks (that are potentially correct source code), approximately $40,000$ of which were able to be parsed as C code and subsequently compiled. The generated code is analyzed for being compilable, correctness of the algorithm, memory leaks, compiler optimization stability, and character distance to the reference implementation. Remarkably, several generated function variants pose a high implementation-security risk: they are correct for some test vectors but incorrect for others. Additionally, many function implementations did not correctly implement the reference SHA-1 algorithm, but produced hashes that have some of the basic characteristics of hash functions.
Many of the function re-writes contained serious flaws such as memory leaks, integer overflows, out-of-bounds accesses, use of uninitialised values, and compiler optimization instability. Compiler optimization settings and SHA-256 hash checksums of the compiled binaries are used to cluster implementations that are equivalent but may not have identical syntax. Using this clustering, over $100,000$ distinct, novel, and correct versions of the SHA-1 codebase were generated in which each component C function differs from the original reference code.
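The per-vector correctness classification described above can be sketched as follows, using Python's hashlib as the reference implementation; the flawed candidate function is a hypothetical stand-in for a GPT-generated re-write:

```python
import hashlib

# Standard SHA-1 test inputs (RFC 3174 style).
TEST_VECTORS = [
    b"",
    b"abc",
    b"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
]

def reference_sha1(msg: bytes) -> str:
    return hashlib.sha1(msg).hexdigest()

def candidate_sha1(msg: bytes) -> str:
    # Hypothetical flawed re-write: correct only for the empty input.
    if msg == b"":
        return "da39a3ee5e6b4b0d3255bfef95601890afd80709"
    return "0" * 40

def check_candidate(candidate) -> dict:
    """Classify a candidate by which test vectors it matches."""
    return {v: candidate(v) == reference_sha1(v) for v in TEST_VECTORS}

results = check_candidate(candidate_sha1)
print(sum(results.values()), "of", len(results), "vectors match")  # 1 of 3
```

A candidate like this one, correct on some vectors and wrong on others, is exactly the partial-correctness failure mode the study flags as a security risk.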
Hoeteck Wee, David J. Wu
ePrint Report
A functional commitment allows a user to commit to an input $\mathbf{x}$ and later open the commitment to $\mathbf{y} = f(\mathbf{x})$ for an arbitrary function $f$. The size of the commitment and of the opening should be sublinear in $|\mathbf{x}|$ and $|f|$.

In this work, we give the first pairing-based functional commitment for arbitrary circuits where the size of the commitment and the size of the opening consist of a constant number of group elements. Security relies on the standard bilateral $k$-$\mathsf{Lin}$ assumption. This is the first scheme with this level of succinctness from falsifiable bilinear map assumptions (previous approaches required SNARKs for $\mathsf{NP}$). This is also the first functional commitment scheme for general circuits with $\mathsf{poly}(\lambda)$-size commitments and openings from any assumption that makes fully black-box use of cryptographic primitives and algorithms. As an immediate consequence, we also obtain a succinct non-interactive argument for arithmetic circuits (i.e., a SNARG for $\mathsf{P}/\mathsf{poly}$) with a universal setup and where the proofs consist of a constant number of group elements. In particular, the CRS in our SNARG only depends on the size of the arithmetic circuit $|C|$ rather than the circuit $C$ itself; the same CRS can be used to verify computations with respect to different circuits. Our construction relies on a new notion of projective chainable commitments which may be of independent interest.
Nicholas Brandt
ePrint Report
Understanding the computational hardness of Kolmogorov complexity is a central open question in complexity theory. An important notion is Levin's version of Kolmogorov complexity, Kt, and its decisional variant, MKtP, due to its connections to universal search, derandomization, and one-way functions, among others. The question of whether MKtP can be computed in polynomial time is particularly interesting because it is not subject to known technical barriers, such as algebrization or natural proofs, that would otherwise explain the lack of a proof that MKtP is not in P.

We take a significant step towards proving that MKtP is not in P by developing an algorithmic approach for showing unconditionally that MKtP cannot be decided in deterministic linear time in the worst case, i.e., that MKtP is not in DTIME[O(n)]. This allows us to partially affirm a conjecture by Ren and Santhanam [STACS:RS22] about a non-halting variant of Kt complexity. Additionally, we give conditional lower bounds for MKtP that tolerate either more runtime or one-sided error.
Ian Malloy
ePrint Report
Introduced as a new protocol implemented in “Chrome Canary” for the Google Chrome browser, “New Hope” is engineered as a post-quantum key exchange for the TLS 1.2 protocol. The exchange is built on revised lattice-based cryptography. New Hope incorporates the key-encapsulation mechanism of Peikert, which is itself a modified Ring-LWE scheme. The search space used to introduce the closest-vector problem is generated by the intersection of a tesseract and a hexadecachoron, i.e., the ℓ∞-ball and the ℓ1-ball respectively. This intersection results in the 24-cell, the Voronoi cell of the lattice D̃4. With respect to the density of this Voronoi cell, the mitigation against backdoor attacks proposed by the authors of New Hope may not withstand such attacks if they are enabled by a quantum computer capable of implementing Grover's search algorithm.
Nicolas Alhaddad, Leonid Reyzin, Mayank Varia
ePrint Report
Asynchronous Verifiable Information Dispersal (AVID) allows a dealer to disperse a message $M$ across a collection of server replicas consistently and efficiently, such that any future client can reliably retrieve the message $M$ if some servers fail. Since AVID was introduced by Cachin and Tessaro in 2005, several works improved the asymptotic communication complexity of AVID protocols. However, recent gains in communication complexity have come at the expense of sub-optimal storage, which is the dominant cost in long-term archiving. Moreover, recent works do not provide a mechanism to detect errors until the retrieval stage, which may result in completely wasted long-term storage if the dealer is malicious.

In this work, we contribute a new AVID construction that achieves optimal storage and guaranteed output delivery, without sacrificing communication complexity during dispersal or retrieval. First, we introduce a technique that bootstraps from dispersal of a message with sub-optimal storage to one with optimal storage. Second, we define and construct an AVID protocol that is robust, meaning that all server replicas are guaranteed at dispersal time that their fragments will contribute toward retrieval of a valid message. Third, we add the new possibility that some server replicas may lose their fragment in between dispersal and retrieval (as is likely in the long-term archiving scenario). This allows us to rely on fewer available replicas for retrieval than are required for dispersal.
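The storage-optimality point can be illustrated with a toy (2, 3) erasure code; this is a simplification, as real AVID constructions use Reed-Solomon codes together with cryptographic commitments. Each replica stores only half the message rather than a full copy, yet any two of the three fragments suffice for retrieval:

```python
def disperse(msg: bytes):
    """Split msg into halves A and B plus an XOR parity fragment."""
    if len(msg) % 2:
        msg += b"\x00"  # toy padding to even length
    half = len(msg) // 2
    a, b = msg[:half], msg[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {"A": a, "B": b, "P": parity}

def retrieve(frags: dict) -> bytes:
    """Reconstruct the message from any two of the three fragments."""
    if "A" in frags and "B" in frags:
        a, b = frags["A"], frags["B"]
    elif "A" in frags:
        a = frags["A"]
        b = bytes(x ^ y for x, y in zip(a, frags["P"]))
    else:
        b = frags["B"]
        a = bytes(x ^ y for x, y in zip(b, frags["P"]))
    return a + b

frags = disperse(b"archive!")
del frags["B"]  # simulate one replica losing its fragment
print(retrieve(frags))  # b'archive!'
```

Each replica stores half the message (plus parity on one replica) instead of a full replica, which is the storage saving that real erasure-coded dispersal generalizes to n servers.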
Lucien K. L. Ng, Panagiotis Chatzigiannis, Duc V. Le, Mohsen Minaei, Ranjit Kumaresan, Mahdi Zamani
ePrint Report
In recent years, many blockchain systems have progressively transitioned to proof-of-stake (PoS) consensus algorithms. These algorithms are not only more energy efficient than proof-of-work but are also well-studied and widely accepted within the community. However, PoS systems are susceptible to a particularly powerful "long-range" attack, where an adversary can corrupt the validator set retroactively and present forked versions of the blockchain. These versions would still be acceptable to clients, thereby creating the potential for double-spending. Several methods and research efforts have proposed countermeasures against such attacks. Still, they often necessitate modifications to the underlying blockchain, introduce heavy assumptions such as centralized entities, or prove inefficient for securely bootstrapping light clients. In this work, we propose a method of defending against these attacks with the aid of external servers running our protocol. Our method does not require any soft or hard forks on the underlying blockchain and operates under reasonable assumptions, specifically the requirement of at least one honest server. Central to our approach is a new primitive called "Insertable Proof of Sequential Work" (InPoSW). Traditional PoSW ensures that a server performs computational tasks that cannot be parallelized and require a minimum execution time, effectively timestamping the input data. InPoSW additionally allows the prover to "insert" new data into an ongoing InPoSW instance. This primitive can be of independent interest for other timestamping applications. Compared to naively adopting prior PoSW schemes for InPoSW, our construction achieves a >22× storage reduction on the server side and a >17900× communication cost reduction for each verification.
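The timestamping idea behind PoSW, and the effect of inserting data mid-stream, can be sketched with an iterated hash chain; this is a conceptual toy, not the paper's InPoSW construction, which must also be efficiently verifiable:

```python
import hashlib

def extend_chain(state: bytes, steps: int, data: bytes = b"") -> bytes:
    """Advance a hash chain, optionally absorbing new data first.
    Each iteration depends on the previous output, so the work is
    inherently sequential (a toy stand-in for proof of sequential work)."""
    state = hashlib.sha256(state + data).digest()
    for _ in range(steps - 1):
        state = hashlib.sha256(state).digest()
    return state

state = extend_chain(b"genesis", 1000)           # sequential work
state = extend_chain(state, 1000, b"new block")  # "insert" new data mid-chain
print(state.hex()[:16])
```

Because the final state depends on every inserted datum and on the full sequential computation, the inserted data is effectively timestamped relative to the chain's progress.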
Zhengjun Cao, Lihua Liu
ePrint Report
We show that the Seyhan-Akleylek key exchange protocol [J. Supercomput., 2023, 79:17859-17896] cannot resist offline dictionary attacks and impersonation attacks, contrary to what is claimed.
Wutichai Chongchitmate, Steve Lu, Rafail Ostrovsky
ePrint Report
Private Set Intersection (PSI) is a protocol where two parties with individually held confidential sets want to jointly learn (or secret-share) the intersection of these two sets and reveal nothing else to each other. In this paper, we introduce a natural extension of this notion to approximate matching. Specifically, given a distance metric between elements, an approximate PSI (Approx-PSI) allows running PSI where ``close'' elements match. Assuming that elements are either ``close'' or sufficiently ``far apart'', we present an Approx-PSI protocol for Hamming distance that dramatically improves the overall efficiency compared to all existing approximate-PSI solutions. In particular, we achieve a near-linear $\tilde{O}(n)$ communication complexity, an improvement over the previously best-known $\tilde{O}(n^2)$. We also show Approx-PSI protocols for Edit distance (also known as Levenshtein distance), Euclidean distance, and angular distance by deploying results on low-distortion embeddings into Hamming distance. The latter two results further imply secure Approx-PSI for other metrics such as the cosine similarity metric. Our Approx-PSI for Hamming distance is up to 20x faster and communicates 30% less than the best known protocols when (1) matching small binary vectors, (2) matching with a large threshold, or (3) matching large input sets. We demonstrate that the protocol can be used to match similar images through spread spectrum of the images.
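The matching criterion, Hamming distance at most a threshold, can be stated concretely; the sketch below computes the approximate intersection in the clear, whereas the actual Approx-PSI protocol reveals only the matching pairs:

```python
def hamming(x: str, y: str) -> int:
    """Hamming distance between two equal-length binary strings."""
    return sum(a != b for a, b in zip(x, y))

def approx_intersection(set_a, set_b, threshold):
    """Pairs whose Hamming distance is at most the threshold."""
    return [(a, b) for a in set_a for b in set_b
            if hamming(a, b) <= threshold]

A = ["101100", "000111"]
B = ["101000", "111111"]
print(approx_intersection(A, B, threshold=1))  # [('101100', '101000')]
```

The "close or far apart" assumption in the abstract corresponds to a gap between the threshold and the distance of every non-matching pair.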
Aloni Cohen, David Bruce Cousins, Nicholas Genise, Erik Kline, Yuriy Polyakov, Saraswathy RV
ePrint Report
We construct an efficient proxy re-encryption (PRE) scheme secure against honest re-encryption attacks (HRA-secure) with precise concrete security estimates. To obtain these estimates, we introduce the tight, fine-grained noise-flooding techniques of Li et al. (CRYPTO'22) to RLWE-based (homomorphic) PRE schemes, along with a mixed statistical-computational approach to the HRA security analysis. Our solution also supports homomorphic operations on the ciphertexts. Such homomorphism allows for advanced applications, e.g., encrypted computation of network statistics across networks and over unlimited hops, in the case of full homomorphism, i.e., with bootstrapping.

We implement our PRE scheme in the OpenFHE software library and apply it to a problem of secure multi-hop data distribution in the context of 5G virtual network slices. We also experimentally evaluate the performance of our scheme, demonstrating that the implementation is practical.

In addition, we compare our PRE method with other lattice-based PRE schemes and other approaches to achieving HRA security. These achieve HRA security, but not in a tight, practical scheme such as ours. Further, we present an attack on the PRE scheme of Davidson et al. (ACISP'19), which was claimed to achieve HRA security without noise flooding.
Ojaswi Acharya, Foteini Baldimtsi, Samuel Dov Gordon, Daniel McVicker, Aayush Yadav
ePrint Report
We propose a new notion of vector commitment schemes with proofs of (non-)membership that we call universal vector commitments. We show how to build them directly from (i) Merkle commitments, and (ii) a universal accumulator and a plain vector commitment scheme. We also present a generic construction for universal accumulators over large domains from any vector commitment scheme, using cuckoo hashing. Leveraging the aforementioned generic constructions, we show that universal vector commitment schemes are implied by plain vector commitments and cuckoo hashing.
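The first building block mentioned, a Merkle commitment opened at individual positions, can be sketched for the membership side; this minimal sketch assumes a power-of-two number of leaves and omits the non-membership proofs that the universal notion adds:

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def commit(leaves):
    """Merkle-commit to a vector; returns the root and all tree levels."""
    level = [H(b"leaf:" + x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels[-1][0], levels

def open_at(levels, idx):
    """Authentication path proving the leaf at position idx."""
    path = []
    for level in levels[:-1]:
        path.append((level[idx ^ 1], idx & 1))  # (sibling, am-I-right-child)
        idx //= 2
    return path

def verify(root, value, path):
    node = H(b"leaf:" + value)
    for sibling, is_right in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
root, levels = commit(leaves)
proof = open_at(levels, 2)
print(verify(root, b"c", proof))  # True
```

The commitment is the single root hash, and each opening is a logarithmic-size path, which is the succinctness property the paper's generic constructions build on.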
Martin Feussner, Igor Semaev
ePrint Report
This work introduces DEFI - an efficient hash-and-sign digital signature scheme based on isotropic quadratic forms over a commutative ring of characteristic 0. The form is public, but the construction is a trapdoor that depends on the scheme's private key. For polynomial rings over integers and rings of integers of algebraic number fields, the cryptanalysis is reducible to solving a quadratic Diophantine equation over the ring or, equivalently, to solving a system of quadratic Diophantine equations over rational integers. It is still an open problem whether quantum computers will have any advantage in solving Diophantine problems.
Douglas Stebila, Spencer Wilson
ePrint Report
WebAuthn is a passwordless authentication protocol which allows users to authenticate to online services using public-key cryptography. Users prove their identity by signing a challenge with a private key, which is stored on a device such as a cell phone or a USB security token. This approach avoids many of the common security problems with password-based authentication.

WebAuthn's reliance on proof-of-possession leads to a usability issue, however: a user who loses access to their authenticator device either loses access to their accounts or is required to fall back on a weaker authentication mechanism. To solve this problem, Yubico has proposed a protocol which allows a user to link two tokens in such a way that one (the primary authenticator) can generate public keys on behalf of the other (the backup authenticator). With this solution, users authenticate with a single token, only relying on their backup token if necessary for account recovery. However, Yubico's protocol relies on the hardness of the discrete logarithm problem for its security and hence is vulnerable to an attacker with a powerful enough quantum computer.

We present a WebAuthn recovery protocol which can be instantiated with quantum-safe primitives. We also critique the security model used in previous analysis of Yubico's protocol and propose a new framework which we use to evaluate the security of both the group-based and the quantum-safe protocol. This leads us to uncover a weakness in Yubico's proposal which escaped detection in prior work but was revealed by our model. In our security analysis, we require the cryptographic primitives underlying the protocols to satisfy a number of novel security properties such as KEM unlinkability, which we formalize. We prove that well-known quantum-safe algorithms, including CRYSTALS-Kyber, satisfy the properties required for analysis of our quantum-safe protocol.