International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

02 July 2024

Mukul Kulkarni, Keita Xagawa
ePrint Report
NIST started the standardization of additional post-quantum signatures in 2022. Among the 40 candidates, only a few claim security properties stronger than existential unforgeability, namely strong existential unforgeability and BUFF (beyond unforgeability features) security. Recently, Aulbach, Düzlü, Meyer, Struck, and Weishäupl (PQCrypto 2024) examined the BUFF security of 17 of the 40 candidates. For the so-called MPC-in-the-Head (MPCitH) signature schemes, however, nothing was known about strong existential unforgeability and BUFF security.

This paper studies these stronger security notions for all nine MPCitH signature candidates: AIMer, Biscuit, FAEST, MIRA, MiRitH, MQOM, PERK, RYDE, and SDitH.

We show that the MPCitH signature schemes are strongly existentially unforgeable under chosen-message attacks in the (quantum) random oracle model. To do so, we introduce a new property of the underlying multi-pass identification scheme, which we call _non-divergency_. This property can be considered a weakened version of the computational unique response for three-pass identification defined by Kiltz, Lyubashevsky, and Schaffner (EUROCRYPT 2018) and its extension to multi-pass identification defined by Don, Fehr, and Majenz (CRYPTO 2020). In addition, we show that the SSH11 protocol proposed by Sakumoto, Shirai, and Hiwatari (CRYPTO 2011) does _not_ satisfy computational unique response, contrary to the claim of Don et al. (CRYPTO 2020).

We also survey the BUFF security of the nine MPCitH candidates in the quantum random oracle model. In particular, we show that Biscuit and MiRitH fail to satisfy some of the BUFF security notions.
Shahriar Ebrahimi, Parisa Hassanizadeh
ePrint Report
Remote attestation (RA) protocols have been widely used to evaluate the integrity of software on remote devices. Currently, the state-of-the-art RA protocols lack a crucial feature: transparency. This means that the details of the final attestation verification are not openly accessible or verifiable by the public. Furthermore, the interactivity of these protocols often limits attestation to trusted parties who possess privileged access to confidential device data, such as pre-shared keys and initial measurements. These constraints impede the widespread adoption of these protocols in various applications. In this paper, we introduce zRA, a non-interactive, transparent, and publicly provable RA protocol based on zkSNARKs. zRA enables verification of device attestations without the need for pre-shared keys or access to confidential data, ensuring a trustless and open attestation process. This eliminates the reliance on online services or secure storage on the verifier side. Moreover, zRA does not impose any additional security assumptions beyond the fundamental cryptographic schemes and the essential trust anchor components on the prover side (i.e., ROM and MPU). The zero-knowledge attestation proofs generated by devices have constant size regardless of the network complexity and number of attestations. Moreover, these proofs do not reveal sensitive information regarding internal states of the device, allowing verification by anyone in a public and auditable manner. We conduct an extensive security analysis and demonstrate scalability of zRA compared to prior work. Our analysis suggests that zRA excels especially in peer-to-peer and Pub/Sub network structures. To validate the practicality, we implement an open-source prototype of zRA using the Circom language. We show that zRA can be securely deployed on public permissionless blockchains, serving as an archival platform for attestation data to achieve resilience against DoS attacks.
Guofeng Tang, Bo Pang, Long Chen, Zhenfeng Zhang
ePrint Report
A threshold signature scheme distributes the ability to generate signatures through distributed key generation and signing protocols. A threshold signature scheme should be functionally interchangeable, meaning that a signature produced by a threshold scheme should be verifiable by the same algorithm used for non-threshold signatures. To resist future attacks from quantum adversaries, lattice-based threshold signatures are desirable. However, the performance of existing lattice-based threshold signing protocols is still far from practical.

This paper presents the first lattice-based $t$-out-of-$n$ threshold signature scheme with functional interchangeability that has been implemented. To build a $t$-out-of-$n$ access structure for arbitrary $t \leq n$, we first present a novel $t$-out-of-$n$ version of the SPDZ MPC protocol. We avoid using the MPC protocol to evaluate hash operations for high concrete efficiency. Moreover, we design an efficient distributed rejection sampling protocol. Consequently, the online phase of our distributed signing protocol takes only 0.5 seconds in the two-party setting and 7.3 seconds in the 12-party setting according to our implementation. As a byproduct, our scheme also provides a periodic key refreshment mechanism and offers proactive security.
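As background, the rejection-sampling step that the distributed protocol emulates can be sketched in its standard single-party, one-dimensional Fiat-Shamir-with-aborts form; the function names, the repetition parameter M, and the toy Gaussian sampler below are illustrative placeholders rather than details from the paper.

```python
import math, random

def dgauss_weight(x, sigma, center=0.0):
    # Unnormalized discrete Gaussian weight exp(-(x - center)^2 / (2 sigma^2)).
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def sample_dgauss(sigma, tail=12):
    # Naive centered discrete Gaussian sampler over a bounded support (toy only).
    lo, hi = -int(tail * sigma), int(tail * sigma)
    weights = [dgauss_weight(x, sigma) for x in range(lo, hi + 1)]
    r = random.random() * sum(weights)
    acc = 0.0
    for x, w in zip(range(lo, hi + 1), weights):
        acc += w
        if r <= acc:
            return x
    return hi

def rejection_sample(secret_term, sigma, M=3.0):
    """One rejection-sampling step: z = y + secret_term is released only with
    probability min(1, D_sigma(z) / (M * D_{sigma, secret_term}(z))), so the
    distribution of released values is independent of secret_term. Returns None on abort."""
    y = sample_dgauss(sigma)
    z = y + secret_term
    accept = min(1.0, dgauss_weight(z, sigma) / (M * dgauss_weight(z, sigma, center=secret_term)))
    return z if random.random() < accept else None

attempts = [rejection_sample(secret_term=7, sigma=40.0) for _ in range(2000)]
released = [z for z in attempts if z is not None]
print(f"acceptance rate ~ {len(released) / len(attempts):.2f}  (roughly 1/M)")
```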
Trisha Datta, Binyi Chen, Dan Boneh
ePrint Report
Verifying image provenance has become an important topic, especially in the realm of news media. To address this issue, the Coalition for Content Provenance and Authenticity (C2PA) developed a standard to verify image provenance that relies on digital signatures produced by cameras. However, photos are usually edited before being published, and a signature on an original photo cannot be verified given only the published edited image. In this work, we describe VerITAS, a system that uses zero-knowledge proofs (zk-SNARKs) to prove that only certain edits have been applied to a signed photo. While past work has created image editing proofs for photos, VerITAS is the first to do so for realistically large images (30 megapixels). Our key innovation enabling this leap is the design of a new proof system that enables proving knowledge of a valid signature on a large amount of witness data. We run experiments on realistically large images that are more than an order of magnitude larger than those tested in prior work. In the case of a computationally weak signer, such as a camera, we are able to generate proofs of valid edits for a 90 MB image in under an hour, costing about \$2.42 on AWS per image. In the case of a more powerful signer, we are able to generate proofs of valid edits for 90 MB images in under five minutes, costing about \$0.09 on AWS per image. Either way, proof verification time is about 2 seconds in the browser. Our techniques apply broadly whenever there is a need to prove that an efficient transformation was applied correctly to a large amount of signed private data.

01 July 2024

Truong Son Nguyen, Lun Wang, Evgenios M. Kornaropoulos, Ni Trieu
ePrint Report
Researchers across various fields seek to understand causal relationships but often find controlled experiments impractical. To address this, statistical tools for causal discovery from naturally observed data have become crucial. Non-linear regression models, such as Gaussian process regression, are commonly used in causal inference but have limitations due to high costs when adapted for secure computation. Support vector regression (SVR) offers an alternative but remains costly in a multi-party computation (MPC) context due to conditional branches and support vector updates.

In this paper, we propose Aitia, the first two-party secure computation protocol for bivariate causal discovery. The protocol is based on optimized multi-party computation design choices and is secure in the semi-honest setting. At the core of our approach is BSGD-SVR, a new non-linear regression algorithm designed for MPC applications, achieving both high accuracy and low computation and communication costs. Specifically, we reduce the training complexity of the non-linear regression model from approximately $\mathcal{O}(N^3)$ to $\mathcal{O}(N^2)$, where $N$ is the number of training samples. We implement Aitia using CrypTen and assess its performance across various datasets. Empirical evaluations show a significant speedup of $3.6\times$ to $340\times$ compared to the baseline approach.
Zhou Zhang, Song Bian, Zian Zhao, Ran Mao, Haoyi Zhou, Jiafeng Hua, Yier Jin, Zhenyu Guan
ePrint Report
Fully homomorphic encryption (FHE) based database outsourcing is drawing growing research interest. In its current state, there exist two primary obstacles against FHE-based encrypted databases (EDBs): i) low data precision, and ii) high computational latency. To tackle the precision-performance dilemma, we introduce ArcEDB, a novel FHE-based SQL evaluation infrastructure that simultaneously achieves high data precision and fast query evaluation. Based on a set of new plaintext encoding schemes, we are able to execute arbitrary-precision ciphertext-to-ciphertext homomorphic comparison orders of magnitude faster than existing methods. Meanwhile, we propose efficient conversion algorithms between the encoding schemes to support highly composite SQL statements, including advanced filter-aggregation and multi-column synchronized sorting. We perform comprehensive experiments to study the performance characteristics of ArcEDB. In particular, we show that ArcEDB can be up to $57\times$ faster in homomorphic filtering and up to $20\times$ faster over end-to-end SQL queries when compared to the state-of-the-art FHE-based EDB solutions. Using ArcEDB, a SQL query over a 10K-row time-series EDB with 64-bit timestamps runs in under one minute.

30 June 2024

Stefan Dziembowski, Shahriar Ebrahimi, Parisa Hassanizadeh
ePrint Report
With the rise of generative AI technology, the media's credibility as a source of truth has been significantly compromised. This highlights the need to verify the authenticity of media and its originality. Ensuring the integrity of media during capture using the device itself presents a straightforward solution to this challenge. However, raw captured media often require certain refinements or redactions before publication. Zero-knowledge proofs (ZKP) offer a solution by allowing attestation of the correctness of specific transformations applied to an authorized image. While shown to be feasible, previous approaches faced challenges in practice due to their high prover complexity.

In this paper, we aim to develop a practical framework for efficiently proving the authenticity of HD and 4K images on commodity hardware. Our goal is to minimize prover complexity by utilizing the folding-based zkSNARKs technique, resulting in VIMz, the first practical verifiable image manipulation system of this kind. VIMz leverages Nova's folding scheme to achieve low complexity recursive zkSNARK proofs of authentic image manipulation. Our implementation results demonstrate a substantial reduction in prover complexity—up to a 3$\times$ speedup in time and a 96$\times$ reduction in memory (from 309 GB in [Kang et al., arXiv 2022] to only 3.2 GB). Moreover, the low memory consumption allows VIMz to prove the correctness of multiple chained transformations simultaneously, further increasing the performance (up to 3.5$\times$). Additionally, we propose a trustless smart contract system that autonomously verifies the proofs of media authenticity, achieving trustless copyright and ownership management, aligning with the standards of the Coalition for Content Provenance and Authenticity (C2PA). Such a system serves as a foundational infrastructure for constructing trustless media marketplaces with diverse applications.
Chandan Kumar, Sikhar Patranabis, Debdeep Mukhopadhyay
ePrint Report
We present a variant of Function Secret Sharing (FSS) schemes tailored for point, comparison, and interval functions, featuring compact key sizes at the expense of an additional comparison. While existing FSS constructions are primarily geared towards $2$-party scenarios, exceptions such as the work by Boyle et al. (Eurocrypt 2015) and Riposte (S&P 2015) have introduced FSS schemes for $p$-party scenarios ($p \geq 3$). This paper aims to achieve the most compact $p$-party FSS key size to date. We achieve a noteworthy reduction in key size, a $2^p$-factor decrease compared to state-of-the-art FSS constructions (including computationally efficient constructions using symmetric-key primitives) of the distributed point function (DPF). Compared to the previous public-key-based FSS design for DPF, we also get a key size reduction equal to a $2^{n/2}$-sized row vector, where $2^n$ is the domain size of the point function. This reduction in key size comes at the cost of a required comparison operation by the decoder (hence called a non-linear decoder), a departure from prior schemes. In $p$-party scenarios, our construction outperforms existing FSS constructions in key size, remaining on par with Riposte in evaluation time and showing significant improvement over Boyle et al.

In addition to constructing FSS for distributed point functions (DPF), we extend our approach to distributed comparison and interval functions, achieving the most efficient key size to date. Our distributed comparison function exhibits a key-size reduction by a factor of $q^{p-1}$, where $q$ denotes the size of the algebraic group used in the scheme's construction. The reduced key size of the comparison function has practical implications, particularly in applications like privacy-preserving machine learning (PPML), where thousands of comparison functions are employed in each neural network layer. To demonstrate the effectiveness of our improvements, we design and prototype-implement a scalable privacy-preserving framework for neural networks over distributed models. Specifically, we implement a distributed rectified linear unit (ReLU) activation function using our distributed comparison function, showcasing the efficacy of our proposed scheme.
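As background for the distributed point function (DPF) setting above: an FSS scheme splits a point function $f_{\alpha,\beta}$ (equal to $\beta$ at the single input $\alpha$ and $0$ elsewhere) into $p$ keys whose evaluations add up to $f(x)$ at every point, while any $p-1$ keys reveal nothing. The toy sketch below realizes only this functionality by additively sharing the entire truth table, which makes each key as large as the $2^n$-element domain; the compact constructions compared above exist precisely to avoid that blow-up. The group $\mathbb{Z}_Q$ and all names are illustrative.

```python
import random

Q = 2**32  # toy additive group Z_Q (illustrative choice)

def dpf_gen_naive(alpha, beta, n, p):
    """Share f(x) = beta if x == alpha else 0, over domain {0, ..., 2^n - 1},
    among p parties by additively secret-sharing the entire truth table.
    Each key holds 2^n group elements -- exactly the cost real DPFs avoid."""
    table = [0] * (2 ** n)
    table[alpha] = beta
    keys = [[random.randrange(Q) for _ in range(2 ** n)] for _ in range(p - 1)]
    keys.append([(table[x] - sum(k[x] for k in keys)) % Q for x in range(2 ** n)])
    return keys

def dpf_eval_naive(key, x):
    # A party's local evaluation is just a lookup into its own key.
    return key[x]

# Correctness: per-party outputs sum to f(x); any p-1 keys alone are uniformly random.
n, p, alpha, beta = 8, 3, 42, 7
keys = dpf_gen_naive(alpha, beta, n, p)
for x in (0, 41, 42, 255):
    assert sum(dpf_eval_naive(k, x) for k in keys) % Q == (beta if x == alpha else 0)
```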
Sanjam Garg, Aniket Kate, Pratyay Mukherjee, Rohit Sinha, Sriram Sridhar
ePrint Report
We develop a distributed service for generating correlated randomness (e.g., permutations) for multiple parties, where each party's output is private but publicly verifiable. This service provides users with a low-cost way to play online poker in real-time, without a trusted party. Our service is backed by a committee of compute providers, who run a multi-party computation (MPC) protocol to produce an (identity-based) encrypted permutation of a deck of cards, in an offline phase well ahead of when the players' identities are known. When the players join, in what we call the online phase, they decrypt their designated cards immediately after deriving the identity-based decryption keys, a much simpler computation. In addition, the MPC protocol generates a publicly-verifiable proof that the output is a permutation. In our construction, we introduce a new notion of succinctly verifiable multi-identity based encryption (SVME), which extends the existing notion of verifiable encryption to a multi-identity-based setting, but with a constant-sized proof; this may be of independent interest. We instantiate this for a permutation relation (defined over a small set) along with identity-based encryption, polynomial commitments, and succinct proofs; our choices are made to enable a distributed computation when the card deck is always secret-shared. Moreover, we design a new protocol to efficiently generate a secret sharing of a random permutation of a small set, which is run prior to the distributed SVME. Running these protocols offline simplifies the online phase substantially, as parties only derive their identity-specific keys privately via secure channels with the MPC committee, and then decrypt locally to obtain their decks. We provide a rigorous UC-based formalization in a highly modularized fashion. Finally, we demonstrate practicality with an implementation that shows that for 8 MPC parties, generating a secret publicly-verifiable permutation of 64 cards takes under 3 seconds, while accessing cards for a player takes under 0.3 seconds.
Joseph Johnston
ePrint Report
Interactive proofs and arguments of knowledge can be generalized to the concept of interactive reductions of knowledge, where proving knowledge of a witness for one NP language is reduced to proving knowledge of a witness for another NP language. We take this generalization and specialize it to a class of reductions we refer to as `quirky interactive reductions of knowledge' (or QUIRKs). This name reflects our particular design choices within the broad and diverse world of interactive reduction methods. A central design choice is allowing the prover to rewind or regress to any previous reduction and repeat it as many times as desired. We prove completeness and extractability properties for QUIRKs. We also offer tools for constructing extraction algorithms along with several simple examples of usage.
Jonathan Takeshita, Nirajan Koirala, Colin McKechney, Taeho Jung
ePrint Report
Fully Homomorphic Encryption (FHE) allows computation on encrypted data. Various software libraries have implemented the approximate-arithmetic FHE scheme CKKS, which is highly useful for applications in machine learning and data analytics; each of these libraries has differing performance and features. It is useful for developers and researchers to learn details about these libraries' performance and their differences. Some previous work has profiled FHE and CKKS implementations for this purpose, but these comparisons are limited in their fairness and completeness.

In this article, we compare four major libraries supporting the CKKS scheme. Working with the maintainers of each of the PALISADE, Microsoft SEAL, HElib, and HEAAN libraries, we devise methods for fair comparisons of these libraries, even with their widely varied development strategies and library architectures. To show the practical performance of these libraries, we present HEProfiler, a simple and extensible framework for profiling C++ FHE libraries. Our experimental evaluation is complete in both the scope of tasks tested and the metrics evaluated, allowing us to draw conclusions about the behaviors of different libraries under a wide range of real-world workloads. This is the first work giving experimental comparisons of different bootstrapping-capable CKKS libraries.
Matteo Campanelli, Dario Fiore, Rosario Gennaro
ePrint Report
Lookup arguments allow an untrusted prover to commit to a vector $\vec f \in \mathbb{F}^n$ and show that its entries reside in a predetermined table $\vec t \in \mathbb{F}^N$. One of their key applications is to augment general-purpose SNARKs making them more efficient on subcomputations that are hard to arithmetize. In order for this "augmentation" to work out, a SNARK and a lookup argument should have some basic level of compatibility with respect to the commitment on $\vec f$. However, not all existing efficient lookup arguments are fully compatible with other efficient general-purpose SNARKs. This incompatibility can for example occur whenever SNARKs use multilinear extensions under the hood (e.g. Spartan) but the lookup argument is univariate in flavor (e.g. Caulk or $\mathsf{cq}$).

In this paper we discuss how to widen the spectrum of "super-efficient" lookup arguments (where the proving time is independent of the size of the lookup table): we present a new construction inspired by $\mathsf{cq}$ and based on multilinear polynomial encodings (MLE). Our construction is the first lookup argument for any table that is also natively compatible with MLE-based SNARKs at comparable costs with other state-of-the-art lookup arguments, particularly when the large table is unstructured. This case arises in various applications, such as using lookups to prove that the program in a virtual machine is fetching the right instruction or to prove the correct computation of floating-point arithmetic (e.g., in verifiable machine learning).

We also introduce a second more general construction: a compiler that, given any super-efficient lookup argument compatible with univariate SNARKs, converts it into a lookup argument compatible with MLE-based SNARKs with a very small overhead. Finally, we discuss SNARKs that we can compose with our constructions as well as approaches for this composition to work effectively.
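As background on the relation being proven: membership of every entry of $\vec f$ in $\vec t$ is commonly encoded as a rational identity in the spirit of logUp/$\mathsf{cq}$ and checked at a random point. The sketch below performs that check in the clear over a placeholder prime field; it illustrates the relation only, not this paper's construction, in which $\vec f$ and the multiplicities are committed and the identity is enforced inside the proof system.

```python
import random

P = 2**61 - 1  # placeholder prime field standing in for the SNARK's field

def inv(a):
    return pow(a % P, P - 2, P)

def lookup_holds(f, t, trials=8):
    """Check that every entry of f lies in the table t (entries of t assumed distinct),
    via  sum_i 1/(X - f_i) == sum_j m_j / (X - t_j)  with m_j = #{i : f_i = t_j},
    evaluated at random points (error roughly (|f| + |t|) / P per trial)."""
    m = [sum(1 for x in f if x == tj) for tj in t]
    for _ in range(trials):
        beta = random.randrange(P)
        if beta in set(f) | set(t):
            continue  # a real implementation would resample; this happens with tiny probability
        lhs = sum(inv(beta - fi) for fi in f) % P
        rhs = sum(mj * inv(beta - tj) for mj, tj in zip(m, t)) % P
        if lhs != rhs:
            return False
    return True

assert lookup_holds([3, 5, 5, 7], [1, 3, 5, 7, 9])   # all entries appear in the table
assert not lookup_holds([3, 4], [1, 3, 5, 7, 9])     # 4 is missing from the table
```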
Kristian Gjøsteen
ePrint Report
We analyse two password-authenticated key exchange protocols: a variant of CPace, and a protocol related to the well-known SRP protocol. Our security results are tight. The first result gives us some information about trade-offs for design choices in CPace. The second result provides information about the security of SRP.

Our analysis is done in a new game-based security definition for password-authenticated key exchange. Our definition accommodates arbitrary password sampling methodologies. Our definition also supports modular security analysis, which we illustrate by giving two example applications of password-authenticated key exchange: password-authenticated secure channels and password-authenticated device authorisation, capturing popular applications of passwords.
Behzad Abdolmaleki, Prastudy Fauzi, Toomas Krips, Janno Siim
ePrint Report
Zero-knowledge shuffle arguments are a useful tool for constructing mix-nets which enable anonymous communication. We propose a new shuffle argument using a novel technique that probabilistically checks that each weighted set of input elements corresponds to some weighted set of output elements, with weights from the same set as the input element weights. We achieve this using standard discrete log assumptions and the shortest integer solution (SIS) assumption. Our shuffle argument has prover and verifier complexity linear in the size of the shuffled set, and communication complexity logarithmic both in the shuffled set size and security parameter.
Ahmet Ramazan Ağırtaş, Neslihan Yaman Gökce, Oğuz Yayla
ePrint Report
An aggregate signature scheme is a digital signature protocol that enables the aggregation of multiple signatures. Given n signatures on n distinct messages from n different users, it is possible to combine all these signatures into a single, concise signature. This single signature, along with the n original messages, convinces the verifier that the n users indeed signed their respective n original messages. However, the verifier must have access to all the original messages to perform the verification, highlighting a potential limitation in terms of accessibility and efficiency. Goyal and Vaikuntanathan introduced the concept of local verification, which allows the verifier to determine if a specific message m is part of the aggregated signature by only accessing the message m. In this paper, we extend the single-signer locally verifiable aggregate signature scheme initially proposed by Goyal and Vaikuntanathan, adapting it to a multi-signer context. Our generalization allows the verifier to validate multiple signatures simultaneously using an auxiliary value generated by the LocalOpen algorithm, thereby enhancing verification efficiency. Furthermore, we integrate this approach into the multi-signature scheme proposed by Boneh, Drijvers, and Neven, demonstrating its broader applicability and potential benefits in complex cryptographic systems.
Jonas Lindstrøm, Kostas Kryptos Chalkias
ePrint Report
The Jacobi Symbol is an essential primitive in cryptographic applications such as primality testing, integer factorization, and various encryption schemes. By exploring the interdependencies among modular reductions within the algorithmic loop, we have developed a refined method that significantly enhances computational efficiency. Our optimized algorithm, implemented in the Rust language, achieves a performance increase of 72% over conventional textbook methods and is twice as fast as the previously fastest known Rust implementation.

This work not only provides a detailed analysis of the optimizations but also includes comprehensive benchmark comparisons to illustrate the practical advantages of our methods. Our algorithm is publicly available under an open-source license, promoting further research on foundational cryptographic optimizations.
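For reference, the textbook reciprocity-based method that optimized implementations are typically measured against looks as follows (a plain Python sketch, not the authors' Rust code).

```python
def jacobi(a, n):
    """Textbook Jacobi symbol (a/n) for odd n > 0, computed via quadratic
    reciprocity and the supplementary law for 2. Returns -1, 0, or 1."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:             # (2/n) = (-1)^((n^2 - 1)/8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                   # reciprocity: sign flips iff a = n = 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0    # gcd(a, n) > 1 gives symbol 0

assert jacobi(2, 3) == -1
assert jacobi(5, 21) == 1
assert jacobi(1001, 9907) == -1
```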
Benny Applebaum, Eliran Kachlon
ePrint Report
The problem of minimizing the share size of threshold secret-sharing schemes is a basic research question that has been extensively studied. Ideally, one strives for schemes in which the share size equals the secret size. While this is achievable for large secrets (Shamir, CACM '79), no similar solutions are known for the case of binary, single-bit secrets. Current approaches often rely on so-called ramp secret sharing that achieves a constant share size at the expense of a slight gap between the privacy and the correctness thresholds. In the case of single-bit shares, this leads to a large gap which is typically unacceptable. The possibility of a meaningful notion of secret sharing scheme with 1-bit shares and almost optimal threshold has been left wide open. Of special interest is the case of threshold 0.5, which is motivated by information-theoretic honest-majority secure multiparty computation (MPC).
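For contrast with the single-bit regime, the classical scheme cited above (Shamir) already matches share size to secret size once the secret is a full field element; a minimal sketch of that baseline over an arbitrary prime field:

```python
import random

P = 2**61 - 1  # prime field; each share is one field element, i.e., as large as the secret

def shamir_share(secret, t, n):
    """t-out-of-n Shamir sharing: shares are points on a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = shamir_share(secret=1234567, t=3, n=5)
assert shamir_reconstruct(shares[:3]) == 1234567
assert shamir_reconstruct(shares[2:5]) == 1234567
```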

In this work, we present a new stochastic model for secret-sharing where each party is corrupted by the adversary with probability $p$, independently of the other parties, and correctness and privacy are required to hold with high probability over the choice of the corrupt parties. We present new secret sharing schemes with single-bit shares that tolerate any constant corruption probability $p<0.5$. Our construction is based on a novel connection between such stochastic secret-sharing schemes and error-correcting codes that achieve capacity over the binary erasure channel.

Our schemes are linear and multiplicative. We demonstrate the usefulness of the model by using our new schemes to construct MPC protocols with security against an adversary that passively corrupts an arbitrary subset of $0.499n$ of the parties, where the online communication per party consists of a single bit per AND gate and zero communication per XOR gate. Unlike competing approaches for communication-efficient MPC, our solution is applicable even in a real-time model in which the parties should compute a Boolean circuit whose gates arrive in real-time, one at a time, and are not known in advance.
Deokhwa Hong, Young-Sik Kim, Yongwoo Lee, Eunyoung Seo
ePrint Report
Fully homomorphic encryption (FHE) schemes enable computations on encrypted data, making them a crucial component of privacy-enhancing technologies. Ducas and Micciancio introduced FHEW (Eurocrypt '15), and Chillotti et al. improved it in TFHE (Asiacrypt '16), both of which provide homomorphic binary (or larger) gate evaluations with low latency due to their small parameters. However, their evaluation failure probability is highly sensitive to parameter selection, resulting in a limited set of viable parameters and a trade-off between failure probability and runtime.

Recently, Cheon et al. proposed a key recovery attack against FHEW/TFHE schemes based on a new security model for FHE, called IND-CPA-D security, which was first introduced by Li and Micciancio (Eurocrypt '21). To prevent this attack, it is necessary to make the failure probability negligible (e.g., $2^{-128}$). However, due to the limited choice of parameters, one is forced to use a parameter set with a much lower failure probability than needed, causing inefficiencies in runtime.

We propose a new bootstrapping method for FHEW/TFHE that provides a precise balance between runtime and failure probability and is easy to implement. The proposed methods enable the selection of parameter sets that achieve negligible failure probabilities for each desired security level while optimizing runtime.
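As a generic illustration of the failure-probability side of this trade-off, the snippet below computes the log2 decryption-failure probability of a single gate under the common heuristic that the accumulated noise is a zero-mean Gaussian that must stay below a threshold; the threshold and noise values are placeholders, not FHEW/TFHE parameters.

```python
import math

def log2_gate_failure(noise_std, threshold):
    """log2 of P(|e| > threshold) for e ~ N(0, noise_std^2), the usual Gaussian
    heuristic for the failure probability of one bootstrapped gate."""
    x = threshold / (noise_std * math.sqrt(2.0))
    p = math.erfc(x)
    if p > 0.0:
        return math.log2(p)
    return -x * x / math.log(2.0)  # erfc underflowed; use the bound erfc(x) <= exp(-x^2)

# A larger noise-to-threshold ratio buys faster parameters but a worse failure rate:
for sigma in (1.0, 1.5, 2.0):
    print(f"sigma/threshold = {sigma}/16: log2(failure) ~ {log2_gate_failure(sigma, 16.0):.1f}")
```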
Xiangyu Liu, Tzannetos Ioannis, Vassilis Zikas
ePrint Report
An adaptor signature (AS) scheme is an extension of digital signatures that allows the signer to generate a pre-signature for an instance of a hard relation. This pre-signature can later be adapted to a full signature with a corresponding witness. Meanwhile, the signer can extract a witness from both the pre-signature and the signature. AS has recently garnered more attention due to its scalability and interoperability. Dai et al. [INDOCRYPT 2022] proved that AS can be constructed for any NP relation using a generic construction. However, their construction has a shortcoming: the associated witness is exposed by the adapted signature. This flaw limits the applications of AS, even in its motivating setting, i.e., blockchain, where the adapted signature is typically uploaded to the blockchain and is public to everyone.

To address this issue, in this work we augment the security definition of AS by a natural property which we call witness hiding. We then prove the existence of AS for any NP relation, assuming the existence of one-way functions. Concretely, we propose a generic construction of witness-hiding AS from signatures and a weak variant of trapdoor commitments, which we term trapdoor commitments with a specific adaptable message. We instantiate the latter based on the Hamiltonian cycle problem. Since the Hamiltonian cycle problem is NP-complete, we can obtain witness hiding adaptor signatures for any NP relation.
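As background, the pre-sign/adapt/extract interface, and the witness-exposure issue addressed here, can be seen in the standard Schnorr-style adaptor signature for the discrete-log relation, sketched below over a deliberately tiny (insecure) group; all parameters and names are illustrative, and this is not the paper's witness-hiding construction.

```python
import hashlib, random

# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup (insecure toy sizes).
p, q, g = 2039, 1019, 4

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def pre_sign(x, X, msg, Y):
    """Pre-signature w.r.t. statement Y = g^y: becomes a valid signature only once adapted."""
    r = random.randrange(1, q)
    R = pow(g, r, p)
    c = H((R * Y) % p, X, msg)          # challenge binds the *combined* nonce R*Y
    return R, (r + c * x) % q

def pre_verify(X, msg, Y, pre_sig):
    R, s_hat = pre_sig
    c = H((R * Y) % p, X, msg)
    return pow(g, s_hat, p) == (R * pow(X, c, p)) % p

def adapt(pre_sig, y):
    R, s_hat = pre_sig
    return (R * pow(g, y, p)) % p, (s_hat + y) % q   # an ordinary Schnorr signature

def verify(X, msg, sig):
    R_full, s = sig
    c = H(R_full, X, msg)
    return pow(g, s, p) == (R_full * pow(X, c, p)) % p

def extract(pre_sig, sig):
    return (sig[1] - pre_sig[1]) % q                 # the witness y is exposed by (pre_sig, sig)

x, X = keygen()
y = random.randrange(1, q); Y = pow(g, y, p)         # hard-relation instance (toy)
ps = pre_sign(x, X, b"pay 1 coin", Y)
assert pre_verify(X, b"pay 1 coin", Y, ps)
sig = adapt(ps, y)
assert verify(X, b"pay 1 coin", sig)
assert extract(ps, sig) == y
```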
Jiaxin Guan, Hart Montgomery
ePrint Report
A sequential function is, informally speaking, a function $f$ for which a massively parallel adversary cannot compute "substantially" faster than an honest user with limited parallel computation power. Sequential functions form the backbone of many primitives that are extensively used in blockchains such as verifiable delay functions (VDFs) and time-lock puzzles. Despite this widespread practical use, there has been little work studying the complexity or theory of sequential functions.

Our main result is a black-box oracle separation between sequential functions and one-way functions: in particular, we show the existence of an oracle $\mathcal{O}$ that implies a sequential function but not a one-way function. This seems surprising since sequential functions are typically constructed from very strong assumptions that imply one-way functions and also since time-lock puzzles are known to imply one-way functions (Bitansky et al., ITCS '16).

We continue our exploration of the theory of sequential functions. We show that, informally speaking, the decisional, worst-case variant of a certain class of sequential function called a continuous iterative sequential function (CISF) is PSPACE-complete. A CISF is, in a nutshell, a sequential function $f$ that can be written in the form $f \left(k, x \right) = g^{k} \left(x \right)$ for some function $g$ where $k$ is an input determining the number of "rounds" the function is evaluated. We then show that more general forms of sequential functions are not contained in PSPACE relative to a random oracle.
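To make the CISF shape concrete, the toy below instantiates $f(k, x) = g^{k}(x)$ with $g$ a single SHA-256 call, i.e., a plain hash chain, which is a folklore candidate sequential function; it illustrates only the syntax and is not a construction from the paper.

```python
import hashlib

def g(x: bytes) -> bytes:
    """One 'round' of the iterated function: a single SHA-256 call."""
    return hashlib.sha256(x).digest()

def cisf(k: int, x: bytes) -> bytes:
    """f(k, x) = g^k(x): k sequential applications of g. The sequentiality conjecture
    for such candidates is that a massively parallel evaluator cannot compute this
    much faster than k sequential hash calls."""
    for _ in range(k):
        x = g(x)
    return x

# f(k1 + k2, x) = f(k2, f(k1, x)): evaluation can be resumed, but not parallelized.
seed = b"example-seed"
assert cisf(1000, seed) == cisf(400, cisf(600, seed))
```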

Given these results, we then ask if it is possible to build any interesting cryptographic primitives from sequential functions that are not one-way. It turns out that even if we assume just the existence of a CISF that is not one-way, we can build certain "fine-grained" cryptographic primitives where security is defined similarly to traditional primitives with the exception that it is only guaranteed for some (generally polynomial) amount of time. In particular, we show how to build "fine-grained" symmetric key encryption and "fine-grained" MACs from a CISF. We also show how to build fine-grained public-key encryption from a VDF with a few extra natural properties and indistinguishability obfuscation (iO) for null circuits. We do not assume one-way functions. Finally, we define a primitive that we call a commutative sequential function - essentially a sequential function that can be computed in sequence to get the same output in two different ways - and show that it implies fine-grained key exchange.