International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

24 May 2024

Jiangxia Ge, Heming Liao, Rui Xue
ePrint Report
The One-Way to Hiding (O2H) theorem, first given by Unruh (J ACM 2015) and then restated by Ambainis et al. (CRYPTO 2019), is a crucial technique for solving the reprogramming problem in the quantum random oracle model (QROM). It provides an upper bound $d\cdot\sqrt{\epsilon}$ for the distinguisher's advantage, where $d$ is the query depth and $\epsilon$ denotes the advantage of a one-wayness attacker. Later, in order to obtain a tighter upper bound, Kuchta et al. (EUROCRYPT 2020) proposed the Measure-Rewind-Measure (MRM) technique and then proved the Measure-Rewind-Measure O2H (MRM-O2H) theorem, which provides the upper bound $d\cdot\epsilon$. They also proposed an open question: Can we combine their MRM technique with Ambainis et al.'s semi-classical oracle technique (CRYPTO 2019) or Zhandry's compressed oracle technique (CRYPTO 2019) to prove a new O2H theorem with an upper bound even tighter than $d\cdot\epsilon$?

In this paper, we give an affirmative answer to the above question. We propose a new technique named Measure-Rewind-Extract (MRE) by combining the MRM technique with the semi-classical oracle technique. By using the MRE technique, we prove the Measure-Rewind-Extract O2H (MRE-O2H) theorem, which provides the upper bound $\sqrt{d}\cdot\epsilon$.
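For orientation, the three bounds can be placed side by side, writing $d$ for the query depth and $\epsilon$ for the one-wayness advantage as above, and assuming $d \ge 1$ and $\epsilon \le 1$:

\[
\underbrace{d \cdot \sqrt{\epsilon}}_{\text{O2H}}
\;\ge\;
\underbrace{d \cdot \epsilon}_{\text{MRM-O2H}}
\;\ge\;
\underbrace{\sqrt{d} \cdot \epsilon}_{\text{MRE-O2H (this work)}},
\]

so the MRE-O2H bound is the tightest of the three in this regime.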

As an important application of our MRE-O2H theorem, for the $FO^{\cancel{\bot}}$, $FO_m^{\cancel{\bot}}$, $FO^{\bot}$ and $FO_m^\bot$ proposed by Hofheinz et al. (TCC 2017), i.e., the key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto transformation, we prove the following results in the QROM:

Their IND-CCA security can be reduced to the IND-CPA security of the underlying public key encryption (PKE) scheme without the square-root advantage loss. In particular, compared with the IND-CCA proof of $FO^{\cancel{\bot}}$ given by Kuchta et al. (EUROCRYPT 2020), ours removes the injectivity assumption and has a tighter security bound.

Under the assumption that the underlying PKE scheme is unique randomness recoverable, we prove for the first time that their IND-CCA security can be reduced to the OW-CPA security of the underlying PKE scheme without the square-root advantage loss.
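As background for readers unfamiliar with these transforms, the FO KEM variants share a re-encryption structure that the following Python sketch illustrates under stated assumptions: a generic derandomisable PKE is passed in as `pke_enc`/`pke_dec` (placeholder names introduced here), SHA-256 stands in for the random oracles $G$ and $H$, and the implicit-rejection flavour ($FO^{\cancel{\bot}}$-style) is shown. This is a schematic illustration only, not the paper's proof or any concrete standardised scheme.

import hashlib, os

def G(m):                        # random oracle deriving encryption randomness
    return hashlib.sha256(b"G" + m).digest()

def H(m, c):                     # random oracle deriving the session key
    return hashlib.sha256(b"H" + m + c).digest()

def encaps(pk, pke_enc):
    m = os.urandom(32)           # uniformly random message
    c = pke_enc(pk, m, G(m))     # derandomised encryption: coins fixed to G(m)
    return c, H(m, c)            # (ciphertext, session key)

def decaps(sk, pk, c, pke_enc, pke_dec, s):
    m = pke_dec(sk, c)           # may return None / garbage on malformed input
    if m is not None and pke_enc(pk, m, G(m)) == c:
        return H(m, c)           # re-encryption check passed
    return H(s, c)               # implicit rejection using the secret seed s

The explicit-rejection variants ($FO^{\bot}$, $FO_m^\bot$) would instead return a failure symbol in the last line.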
Changrui Mu, Prashant Nalini Vasudevan
ePrint Report
In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89].

We investigate the properties and power of such instance-hiding proofs, and show the following: 1. Any language with an IHIP is contained in AM/poly and coAM/poly. 2. If an average-case hard language has an IHIP, then One-Way Functions exist. 3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof. 4. IHIPs are closed under composition with any efficiently computable function.

We further study a stronger version of IHIP (that we call Strong IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above: 5. Any language with a Strong IHIP is contained in AM and coAM. 6. If a _worst-case_ hard language has a Strong IHIP, then One-Way Functions exist.
Gal Horowitz, Eyal Ronen, Yuval Yarom
ePrint Report
Over the last two decades, microarchitectural side channels have been the focus of a large body of research on the development of new attack techniques, exploiting them to attack various classes of targets and designing mitigations. One line of work focuses on increasing the speed of the attacks, achieving higher levels of temporal resolution that can allow attackers to learn finer-grained information. The most recent addition to this line of work is Prime+Scope [CCS '21], which only requires a single access to the L1 cache to confirm the absence of victim activity in a cache set. While significantly faster than prior attacks, Prime+Scope is still an order of magnitude slower than a cache access. In this work, we set out to close this gap.

We draw on techniques from research into microarchitectural weird gates, software constructs that exploit transient execution to perform arbitrary computation on cache state. We design the Spec-o-Scope gate, a new weird gate that performs 10 cache probes in quick succession, which forms the basis for our eponymous attack. Our Spec-o-Scope attack achieves an order of magnitude improvement in temporal resolution compared to the previous state-of-the-art of Prime+Scope, reducing the measurement time from ~70 cycles to only 5 --- only one cycle more than an L1 cache access. We experimentally verify that our attack can detect timing differences with a 5-cycle resolution. Finally, using our Spec-o-Scope attack, we are able to show the first microarchitectural side-channel attack on an unmodified AES S-box-based implementation, which uses generic CPU features and does not require manipulation of the operating system's scheduler.
Yackolley Amoussou-Guenou, Lionel Beltrando, Maurice Herlihy, Maria Potop-Butucaru
ePrint Report
Byzantine Reliable Broadcast is one of the most popular communication primitives in distributed systems. Byzantine reliable broadcast ensures that processes agree to deliver a message from an initiator even if some processes (perhaps including the initiator) are Byzantine. In asynchronous settings it has been known since the prominent work of Bracha [Bracha87] that Byzantine reliable broadcast can be implemented deterministically if $n \geq 3t+1$, where $t$ is an upper bound on the number of Byzantine processes. Here, we study Byzantine Reliable Broadcast when processes are equipped with trusted execution environments (TEEs), special software or hardware designed to prevent equivocation. Our contribution is twofold. First, we show that, despite common belief, when each process is equipped with a TEE, Bracha's algorithm still needs $n \geq 3t+1$. Second, we present a novel algorithm that uses a single TEE (at the initiator) and implements asynchronous Byzantine Reliable Broadcast with $n \geq 2t+1$.
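For context on the baseline being discussed, the classical Bracha broadcast (no TEEs, $n \geq 3t+1$) runs ECHO and READY phases with quorum thresholds. The Python sketch below shows only that threshold logic; networking is abstracted into a `send_to_all` callback (a name introduced here for illustration), and it omits authentication as well as the paper's TEE-based variants.

class BrachaBroadcast:
    """Per-process handler for Bracha-style reliable broadcast, n >= 3t+1."""
    def __init__(self, n, t, send_to_all):
        self.n, self.t = n, t
        self.send_to_all = send_to_all       # callback broadcasting (tag, value)
        self.echoes = {}                     # value -> set of processes that sent ECHO
        self.readies = {}                    # value -> set of processes that sent READY
        self.sent_ready = False
        self.delivered = None

    def on_send(self, value):                # initial message from the initiator
        self.send_to_all(("ECHO", value))

    def on_echo(self, sender, value):
        self.echoes.setdefault(value, set()).add(sender)
        if len(self.echoes[value]) > (self.n + self.t) // 2 and not self.sent_ready:
            self.sent_ready = True
            self.send_to_all(("READY", value))

    def on_ready(self, sender, value):
        self.readies.setdefault(value, set()).add(sender)
        if len(self.readies[value]) >= self.t + 1 and not self.sent_ready:
            self.sent_ready = True           # READY amplification from t+1 READYs
            self.send_to_all(("READY", value))
        if len(self.readies[value]) >= 2 * self.t + 1 and self.delivered is None:
            self.delivered = value           # deliver after 2t+1 READY messages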

22 May 2024

Max Duparc, Tako Boris Fouotsa
ePrint Report
We introduce SQIPrime, a post-quantum digital signature scheme based on the Deuring correspondence and Kani's Lemma. Compared to its predecessors SQISign and especially SQISignHD, SQIPrime further expands the use of high-dimensional isogenies, already in use in the verification of SQISignHD, to both key generation and commitment. In doing so, it no longer relies on smooth degree isogenies (of dimension 1). SQIPrime operates with a prime number of the form $p = 2^\alpha f-1$, as opposed to SQISignHD that uses SIDH primes.

The most intriguing novelty in SQIPrime is the use of non-smooth degree isogenies as the challenge isogeny. Indeed, in the SQISign family of identification schemes, the challenge isogeny is computed by the verifier, who is not well-equipped to compute an isogeny of large non-smooth degree. To overcome this obstacle, the verifier samples the kernel of the challenge isogeny, and the task of computing this isogeny is accomplished by the prover. The response is modified in such a way that the verifier can check that the challenge isogeny was correctly computed by the prover, on top of verifying the usual response in the SQISign family.

We describe two variants of SQIPrime: SQIPrime4D which uses dimension 4 isogenies to represent the response isogeny, and SQIPrime2D which solely uses dimension 2 isogenies to represent the response isogeny and hence is more efficient compared to SQIPrime4D and to SQISignHD.
Oriol Farràs, Miquel Guiot
ePrint Report
A secret sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets.

In a weighted threshold access structure, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of their weights is at least the threshold value. For these access structures, the share size of the best known secret sharing schemes is either linear in the weights or quasipolynomial in the number of parties, which leads to long shares, in general.
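To make the "linear in the weights" baseline concrete: the folklore approach virtualises each party into weight-many participants of a plain Shamir scheme, so a party's share vector grows linearly with its weight. The toy Python sketch below (insecure randomness, reconstruction omitted, names chosen here for illustration) shows that baseline; it is not the approximation technique developed in this work.

import random

P = 2**127 - 1                                   # prime modulus for the toy field

def shamir_share(secret, threshold, num_shares):
    coeffs = [secret % P] + [random.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def weighted_threshold_share(secret, threshold, weights):
    # Each party receives weight-many shares of a (threshold)-out-of-(total weight) Shamir scheme.
    shares = shamir_share(secret, threshold, sum(weights))
    out, pos = [], 0
    for w in weights:
        out.append(shares[pos:pos + w])
        pos += w
    return out

# Example: weights [3, 1, 1] and threshold 4: party 0 alone (weight 3) learns nothing,
# while parties 0 and 1 together (weight 4) hold enough shares to reconstruct.
parties = weighted_threshold_share(secret=12345, threshold=4, weights=[3, 1, 1])
assert sum(len(s) for s in parties) == 5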

In certain settings, a way to circumvent this efficiency problem is to approximate the access structure by another one that admits more efficient schemes. This work is dedicated to the open problem posed by this strategy: Finding secret sharing schemes with a good tradeoff between the efficiency and the accuracy of the approximation.

We present a method to approximate weighted threshold access structures by others that admit schemes with small shares. This method is based on the techniques for the approximation of the Chow parameters developed by De et al. [Journal of the ACM, 2014]. Our method provides secret sharing schemes with share size $n^{1+o(1)}$, where $n$ is the number of parties, and whose access structure is close to the original one. Namely, in this approximation the condition of being authorized or not is preserved for almost all subsets of parties.

In addition, applying the recent results on computational secret sharing schemes by Applebaum et al. [STOC, 2023] we show that there exist computational secret sharing schemes whose security is based on the RSA assumption and whose share size is polylogarithmic in the number of parties.
Kohei Nakagawa, Hiroshi Onuki
ePrint Report
Isogeny-based cryptography comprises cryptographic schemes whose security relies on the hardness of a mathematical problem called the isogeny problem, and it is attracting attention as one of the candidates for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization competition. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for NIST PQC standardization. Recently, many new schemes have been proposed that use high-dimensional isogenies. Among them, the signature scheme SQIsignHD has an even shorter signature size than SQIsign. However, it requires 4-dimensional isogeny computations for signature verification.

In this paper, we propose a new signature scheme, SQIsign2D-East, which requires only two-dimensional isogeny computations for verification, thus reducing the computational cost of verification. First, we generalize an algorithm called RandIsogImg, which computes a random isogeny of non-smooth degree. Then, using this generalized RandIsogImg, we construct the new signature scheme SQIsign2D-East.
Andreea B. Alexandru, Julian Loss, Charalampos Papamanthou, Giorgos Tsimos, Benedikt Wagner
ePrint Report
Byzantine broadcast is one of the fundamental problems in distributed computing. Many of its practical applications, from multiparty computation to consensus mechanisms for blockchains, require increasingly weaker trust assumptions, as well as scalability for an ever-growing number of users $n$. This rules out existing solutions which run in a linear number of rounds in $n$ or rely on trusted setup requirements. In this paper, we propose the first sublinear-round and trustless Byzantine broadcast protocol for the dishonest majority setting. Unlike previous sublinear-round protocols, our protocol assumes neither the existence of a trusted dealer who honestly issues keys and correlated random strings to the parties nor random oracles. Instead, we present a solution whose setup is limited to an unstructured uniform reference string and a plain public key infrastructure (a.k.a. bulletin-board PKI). Our broadcast protocol builds on top of a moderated gradecast protocol which parties can use to reach weak agreement on shared random strings. Using these strings, we can then run in an unbiased fashion a committee-based Byzantine protocol, similar to that of Chan et al. (PKC 2020), which terminates in a sublinear number of rounds. To this end, we propose a novel construction for committee election, which does not rely either on random oracles or on a trusted setup, and uses NIZKs and time-lock puzzles. Our protocol is resilient against an adaptive adversary who corrupts any constant fraction of parties.
Damiano Abram, Lawrence Roy, Mark Simkin
ePrint Report
The assumption that certain computations inherently require some sequential time has established itself as a powerful tool for cryptography. It allows for security and liveness guarantees in distributed protocols that are impossible to achieve with classical hardness assumptions. Unfortunately, many constructions from the realm of time-based cryptography are based on new and poorly understood hardness assumptions, which tend not to stand the test of time (cf. Leurent et al. 2023, Peikert & Tang 2023).

In this work, we make progress on several fronts. We formally define the concept of a delay function and present a construction thereof from minimal assumptions. We show that these functions, in combination with classical cryptographic objects that satisfy certain efficiency criteria, would allow for constructing delay encryption, which is otherwise only known to exist based on a new hardness assumption about isogenies. We formally define randomness beacons as they are used in the context of blockchains, and we show that (linearly homomorphic) time-lock puzzles allow for efficiently constructing them. Finally, we resolve an open question of Malavolta and Thyagarajan by constructing fully homomorphic time-lock puzzles from linearly homomorphic time-lock puzzles and multi-key fully homomorphic encryption, whereas their previous construction required indistinguishability obfuscation.

Our work puts time-based cryptography on a firmer theoretical footing, provides new constructions from simpler assumptions, and opens new avenues for constructing delay encryption.
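As a concrete reminder of the kind of sequentiality assumption at stake, the classical Rivest-Shamir-Wagner time-lock puzzle hides a value behind $T$ sequential squarings modulo an RSA number; the holder of the factorisation can take a shortcut through $\varphi(N)$. The Python sketch below is a toy illustration of that puzzle only (it assumes the sympy library for prime generation and uses toy parameters); it is not any of the linearly or fully homomorphic constructions discussed above.

from sympy import randprime          # assumed available for prime generation

def setup(bits=512):
    p = randprime(2**(bits - 1), 2**bits)
    q = randprime(2**(bits - 1), 2**bits)
    return p * q, (p - 1) * (q - 1)  # modulus N and trapdoor phi(N)

def generate(N, phi, T, secret):
    e = pow(2, T, phi)               # shortcut: reduce the exponent 2^T mod phi(N)
    mask = pow(2, e, N)              # equals 2^(2^T) mod N
    return (secret + mask) % N       # toy blinding of the secret (illustration only)

def solve(N, T, puzzle):
    mask = 2 % N
    for _ in range(T):               # without the trapdoor: T sequential squarings
        mask = mask * mask % N
    return (puzzle - mask) % N

N, phi = setup()
puzzle = generate(N, phi, T=1000, secret=42)
assert solve(N, T=1000, puzzle=puzzle) == 42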
Denis Firsov, Benjamin Livshits
ePrint Report
Verifying the verifier in the context of zero-knowledge proofs is an essential part of ensuring the long-term integrity of the zero-knowledge ecosystem. This is vital for both zero-knowledge rollups and also other industrial applications of ZK. In addition to further minimizing the required trust and reducing the trusted computing base (TCB), having a verified verifier opens the door to decentralized proof generation by potentially untrusted parties. We outline a research program and justify the need for more work at the intersection of ZK and formal verification research.

21 May 2024

Announcement
We are proud to announce the winners of the 2024 IACR Test-of-Time Award for Eurocrypt. The IACR Test-of-Time Award honors papers published at the 3 IACR flagship conferences 15 years ago which have had a lasting impact on the field. We are announcing the winners for each conference separately.

The Test-of-Time award for Eurocrypt 2009 is awarded to the following paper:

A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks, by François-Xavier Standaert, Tal G. Malkin and Moti Yung.
For introducing a structured approach for the evaluation of side-channel attacks and countermeasures, and for inspiring further connections between the theory of leakage-resilient cryptography and the practice of defending implementations against side-channel attacks.

For more information, see https://www.iacr.org/testoftime.

Congratulations to the winners!
University of Amsterdam, The Netherlands
Job Posting
Are you committed to research-based education? Do you want to teach the next generation of security and network engineers, and perform cutting-edge research in IT security and/or computer networks? The Multiscale Networked Systems (MNS) and Complex Cyber Infrastructure (CCI) research groups of the University of Amsterdam are looking for an Assistant Professor in distributed systems and software engineering, with a strong profile in research and teaching, and a strong commitment to academic education. See link for full description

Closing date for applications:

Contact: dr. Zhiming Zhao

More information: https://vacatures.uva.nl/UvA/job/Assistant-Professor-in-Security-and-Network-Engineering-%2850-teaching%29/794986702/

University of Birmingham, Birmingham, United Kingdom
Job Posting

The School of Computer Science is looking to strengthen the existing Security and Privacy Research group. The available positions are linked to the chair in Applied Cryptography (held by Elisabeth Oswald): we are thus looking for applicants with a specific interest in applied cryptography in the context of hardware and embedded systems security (e.g. pre-silicon leakage and fault analysis, secure embedded software development, statistical side channel and fault evaluation techniques, machine and deep learning for side channels).

Further information about the duties that come with the role, as well as salary information, and information about the School, are available via the link provided. For further questions please contact Prof. Elisabeth Oswald in the first instance. The closing date for applications is June 16th 2024.

Closing date for applications:

Contact: Elisabeth Oswald m.e.oswald@bham.ac.uk

More information: https://edzz.fa.em3.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_6001/job/4653/?utm_medium=jobshare

Copper.co
Job Posting
The Key Management team at Copper is tasked with developing the core components of Copper’s custodial solution. This entails the construction of tools and services essential for the secure storage and transfer of digital assets. A focal point of the team’s responsibilities includes developing and integrating MPC-based cryptographic signature algorithms.

Key Responsibilities of the role

  • You will conduct cryptography literature reviews, select relevant papers, and produce a specification and pseudocode from the paper(s). If the situation demands it, you will produce original peer-reviewed research in close collaboration with academia.
  • You will implement cryptography specifications into secure, production-level code.
  • You will integrate cryptography primitives into Copper’s internal MPC framework.
  • You will participate in the design and implementation of Copper’s next generation MPC framework.
  • You will interact with auditors and academia, as well as cryptographers on the client side.
  • You will provide cryptography support for other Copper development teams.

    Your experience, skills and knowledge

    This role is right for you if:

  • You have a strong foundation in modern/provable cryptography.
  • You consider yourself an applied cryptographer who doesn’t shy away from implementing your constructions.
  • You are interested in a unique intersection between academia and industry wherein you will be working with both theory and real-world implementations.

    Essential

  • Research experience in modern cryptography (preferably in MPC, threshold cryptography, zero-knowledge proof systems, etc.) and its tools, e.g. proving your constructions secure in the UC paradigm.
  • Engineering experience in implementation of modern cryptography primitives.
  • Fluency in a systems language like C, C++, Rust, or Go.
  • Familiarity with algorithms and aptitude in cryptographic system design.

    Desirable

  • PhD in Cryptography with a research track record.
  • Familiarity/Experience with distributed systems.
  • Prior knowledge/experience with MPC or crypto custody.

Closing date for applications:

Contact: Clara Luna (clara.luna@copper.co)

More information: https://www.linkedin.com/jobs/view/3922104775


20 May 2024

Youngjin Bae, Jung Hee Cheon, Jaehyung Kim, Damien Stehlé
ePrint Report
The Cheon-Kim-Kim-Song (CKKS) fully homomorphic encryption scheme is designed to efficiently perform computations on real numbers in an encrypted state. Recently, Drucker et al. [J. Cryptol.] proposed an efficient strategy to use CKKS in a black-box manner to perform computations on binary data.

In this work, we introduce several CKKS bootstrapping algorithms designed specifically for ciphertexts encoding binary data. Crucially, the new CKKS bootstrapping algorithms make it possible to bootstrap ciphertexts containing the binary data in the most significant bits. First, this allows us to decrease the moduli used in bootstrapping, saving a larger share of the modulus budget for non-bootstrapping operations.

In particular, we obtain full-slot bootstrapping in ring degree $2^{14}$ for the first time. Second, the ciphertext format is compatible with the one used in the DM/CGGI fully homomorphic encryption schemes. Interestingly, we may combine our CKKS bootstrapping algorithms for bits with the fast ring packing technique from Bae et al. [CRYPTO'23]. This leads to a new bootstrapping algorithm for DM/CGGI that outperforms the state-of-the-art approaches when the number of bootstraps to be performed simultaneously is in the low hundreds.
Ziyi Guan, Artur Riazanov, Weiqiang Yuan
ePrint Report
A verifiable delay function (VDF) is a cryptographic primitive that takes a long time to compute, but produces a unique output that is efficiently and publicly verifiable.
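For reference, the standard syntax from the VDF literature (generic notation, not specific to this paper) consists of three algorithms,

\[
\mathsf{Setup}(1^\lambda, T) \to pp, \qquad
\mathsf{Eval}(pp, x) \to (y, \pi) \ \text{(requires $T$ sequential steps)}, \qquad
\mathsf{Verify}(pp, x, y, \pi) \to \{0,1\} \ \text{(time } \mathrm{poly}(\lambda, \log T)\text{)},
\]

where completeness asks that honestly computed $(y, \pi)$ verifies, and uniqueness asks that no adversary (efficient, in the computational variant) can make $\mathsf{Verify}$ accept some $y' \neq y$ for the same input $x$, except with negligible probability.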

Mahmoody, Smith and Wu (ICALP 2020) prove that VDFs satisfying both perfect completeness and adaptive perfect uniqueness do not exist in the random oracle model. Moreover, Ephraim, Freitag, Komargodski, and Pass (EUROCRYPT 2020) construct a VDF with perfect completeness and computational uniqueness, a much weaker guarantee compared to perfect uniqueness, in the random oracle model under the repeated squaring assumption.

In this work, we close the gap between existing constructions and known lower bounds by showing that VDFs with imperfect completeness and non-adaptive computational uniqueness cannot be constructed in the pure random oracle model (without additional computational assumptions).
Ashrujit Ghoshal, Baitian Li, Yaohua Ma, Chenxin Dai, Elaine Shi
ePrint Report
We propose a new unified framework to construct multi-server, information-theoretic Private Information Retrieval (PIR) schemes that leverage global preprocessing to achieve sublinear computation per query. Despite a couple of earlier attempts, our understanding of PIR schemes in the global preprocessing model remains limited, and so far, we only know a few sparse points in the broad design space. With our new unified framework, we can generalize the results of Beimel, Ishai, and Malkin to broader parameter regimes, thus enabling a tradeoff between bandwidth and computation. Specifically, for any constant $S > 1$, we can get an $S$-server scheme whose bandwidth consumption is as small as $n^{1/(S+1) + \epsilon}$ while achieving computation in the $n^\delta$ regime for some constant $\delta \in (0, 1)$. Moreover, we can get a scheme with polylogarithmic bandwidth and computation, requiring only a polylogarithmic number of servers.
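For readers new to the model, the classical two-server information-theoretic PIR answers a query by XORing database bits over two random subsets that differ only at the queried index. The toy Python sketch below (names chosen here for illustration) shows that baseline, which has linear per-query server computation and no preprocessing, i.e., exactly the costs the global preprocessing model aims to improve; it is not the framework of this paper.

import secrets

def client_query(n, index):
    s1 = [secrets.randbelow(2) for _ in range(n)]    # random subset of the n positions
    s2 = list(s1)
    s2[index] ^= 1                                   # same subset, toggled at `index`
    return s1, s2                                    # one query vector per server

def server_answer(db_bits, query_vector):
    a = 0
    for bit, selected in zip(db_bits, query_vector): # linear scan over the database
        a ^= bit & selected
    return a

def client_reconstruct(a1, a2):
    return a1 ^ a2                                   # answers differ exactly in db[index]

db = [1, 0, 1, 1, 0, 0, 1, 0]
q1, q2 = client_query(len(db), index=5)
assert client_reconstruct(server_answer(db, q1), server_answer(db, q2)) == db[5]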
Ky Nguyen, David Pointcheval, Robert Schädlich
ePrint Report
Decentralized Multi-Client Functional Encryption (DMCFE) extends basic functional encryption to multiple clients that do not trust each other. They can independently encrypt the multiple plaintext-inputs to be given for evaluation to the function embedded in the functional decryption key, which is defined by multiple parameter-inputs. Moreover, they retain control over these functions, as they all have to contribute to the generation of the functional decryption keys. Tags can be used in the ciphertexts and the keys to specify which inputs can be combined together. Like any encryption scheme, DMCFE provides privacy of the plaintexts. But the functions associated with the functional decryption keys might be sensitive too (e.g. a model in machine learning). The function-hiding property has thus been introduced to additionally protect the function evaluated during the decryption process.
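For the inner-product functionality considered below, this means (in generic notation, not the paper's exact syntax) that a functional decryption key for a parameter vector $\vec{y}$, applied to the clients' independent encryptions of the coordinates of $\vec{x}$, reveals only

\[
\mathsf{Dec}\big(\mathsf{dk}_{\vec{y}},\, \mathsf{Enc}_1(x_1), \dots, \mathsf{Enc}_n(x_n)\big) \;=\; \langle \vec{x}, \vec{y} \rangle \;=\; \sum_{i=1}^{n} x_i y_i,
\]

while function hiding additionally requires that $\mathsf{dk}_{\vec{y}}$ leaks essentially nothing about $\vec{y}$ beyond such outputs.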

In this paper, we provide new proof techniques to analyze a new concrete construction of function-hiding DMCFE for inner products, with strong security guarantees in the random oracle model: the adversary can adaptively query multiple challenge ciphertexts and multiple challenge keys, with unbounded repetitions of the same message tags in the ciphertext-queries and a fixed polynomially-large number of repetitions of the same key tags in the key-queries, allowing static corruption of the secret encryption keys. Previous constructions were proven secure in the selective setting only.
Yu Morishima, Masahiro Kaminaga
ePrint Report
Provable security based on a robust mathematical framework is the gold standard for security evaluation in cryptography. Several provably secure cryptosystems have been studied for public-key cryptography. However, provably secure symmetric-key cryptography has received little attention. Although there are known provably secure symmetric-key cryptosystems based on the hardness of factorization and discrete logarithm problems, they are not only slower than conventional block ciphers but can also be broken by quantum computers. Our study aims to tackle this latter problem by proposing a new provably secure Feistel cipher using collision-resistant hash functions based on the Short Integer Solution (SIS) problem. Even if cipher primitives are resistant to quantum algorithms, it is crucial to determine whether the cipher is resilient to differential cryptanalysis, a fundamental and powerful attack against symmetric-key cryptosystems. In this paper, we demonstrate that the proposed cipher family is secure against differential cryptanalysis by deriving an upper bound on the maximum differential probability. In addition, we demonstrate the potential success of differential cryptanalysis for short block sizes and statistically evaluate the average resistance of cipher instances based on differential characteristic probabilities. This method approximates the S-box output using a folded two-dimensional normal distribution and employs a generalized extreme value distribution. This evaluation method is first introduced in this paper and serves as the basis for studying the differential characteristics of lattice matrices and the number of secure rounds. This study is foundational research on differential cryptanalysis against block ciphers using a lattice matrix based on SIS.
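To fix ideas about the construction's shape, a generic Feistel network with a hash-based round function looks as follows. In this toy Python sketch, SHA-256 merely stands in for the SIS-based collision-resistant round function proposed in the paper, so it illustrates only the Feistel structure and none of the paper's parameters or security claims.

import hashlib

def round_function(key, half):
    # Placeholder round function: a keyed hash truncated to the half-block length.
    return hashlib.sha256(key + half).digest()[:len(half)]

def feistel_encrypt(round_keys, block):
    left, right = block[:len(block)//2], block[len(block)//2:]
    for k in round_keys:          # one round: (L, R) -> (R, L xor F(k, R))
        left, right = right, bytes(a ^ b for a, b in zip(left, round_function(k, right)))
    return left + right

def feistel_decrypt(round_keys, block):
    left, right = block[:len(block)//2], block[len(block)//2:]
    for k in reversed(round_keys):   # invert the rounds in reverse key order
        left, right = bytes(a ^ b for a, b in zip(right, round_function(k, left))), left
    return left + right

keys = [b"k1", b"k2", b"k3", b"k4"]
msg = b"16-byte message!"
assert feistel_decrypt(keys, feistel_encrypt(keys, msg)) == msg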
Sora Suegami
ePrint Report
Witness encryption (WE) allows a ciphertext to be encrypted under an NP problem such that anyone holding a valid witness for that problem can decrypt it (flexible decryptors), without interaction with others (non-interaction). However, existing schemes are either impractical or achieve only a part of these WE features. We propose a novel WE scheme that 1) is based on bilinear maps such as pairings, 2) achieves the property of flexible decryptors, and 3) still requires the decryptor's communication with a trusted signer, who only performs a fixed amount of computation and communication at regular intervals, regardless of the number of ciphertexts. It provides extractable security and can be extended to a threshold multiple signers setting, avoiding reliance on a single signer. As a significant application of our WE scheme, we build a novel one-time program (OTP) scheme in which the signers' computational and communication costs remain constant, independent of the number of OTPs to be evaluated simultaneously. This feature ensures scalable OTP evaluations without risking decreased signer participation or compromised decentralization due to increased operational costs for the signers.