International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

07 April 2023

Nico Döttling, Phillip Gajland, Giulio Malavolta
ePrint Report
Laconic function evaluation (LFE) allows Alice to compress a large circuit $\mathbf{C}$ into a small digest $\mathsf{d}$. Given Alice's digest, Bob can encrypt some input $x$ under $\mathsf{d}$ in a way that enables Alice to recover $\mathbf{C}(x)$, without learning anything beyond that. The scheme is said to be $laconic$ if the size of $\mathsf{d}$, the runtime of the encryption algorithm, and the size of the ciphertext are all sublinear in the size of $\mathbf{C}$. Until now, all known LFE constructions have ciphertexts whose size depends on the $depth$ of the circuit $\mathbf{C}$, akin to the limitation of $levelled$ homomorphic encryption. In this work we close this gap and present the first LFE scheme (for Turing machines) with asymptotically optimal parameters. Our scheme assumes the existence of indistinguishability obfuscation and somewhere statistically binding hash functions.
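To fix the syntax, here is a minimal sketch of the LFE interface described above, written in Python; the function names and the Digest type are illustrative placeholders, not from the paper.

    # Hypothetical LFE interface (names are illustrative).
    from dataclasses import dataclass

    @dataclass
    class Digest:
        data: bytes  # short: sublinear in |C|

    def compress(circuit) -> Digest:
        """Alice compresses a large circuit C into a small digest d."""
        ...

    def encrypt(d: Digest, x: bytes) -> bytes:
        """Bob encrypts x under d; runtime and ciphertext size are sublinear in |C|."""
        ...

    def decrypt(circuit, ct: bytes):
        """Alice recovers C(x), learning nothing else about x."""
        ...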

As further contributions, we show how our scheme enables a wide range of new applications, including two previously unknown constructions:
• Non-interactive zero-knowledge (NIZK) proofs with optimal prover complexity.
• Witness encryption and attribute-based encryption (ABE) for Turing machines from falsifiable assumptions.
Marshall Ball, Hanjun Li, Huijia Lin, Tianren Liu
ePrint Report
The beautiful work of Applebaum, Ishai, and Kushilevitz (AIK) [FOCS'11] initiated the study of arithmetic variants of Yao's garbled circuits. An arithmetic garbling scheme is an efficient transformation that converts an arithmetic circuit $C: \mathcal{R}^n \rightarrow \mathcal{R}^m$ over a ring $\mathcal{R}$ into a garbled circuit $\widehat C$ and $n$ affine functions $L_i$ for $i \in [n]$, such that $\widehat C$ and the $L_i(x_i)$ reveal only the output $C(x)$ and no other information about $x$. AIK presented the first arithmetic garbling scheme supporting computation over integers from a bounded (possibly exponentially large) range, based on Learning With Errors (LWE). In contrast, converting $C$ into a Boolean circuit and applying Yao's garbled circuit treats the inputs as bit strings instead of ring elements, and hence is not "arithmetic".
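As a toy illustration of the input encodings in this syntax (not the AIK scheme or the new constructions), each input wire is hidden by an affine function over the ring, whose uniform offset acts as a one-time pad on the wire value:

    # Toy affine input encoding over Z_R; illustrative only, not a secure garbling.
    import secrets

    R = 2**61 - 1  # example ring modulus, chosen arbitrarily for the demo

    def sample_affine():
        a = secrets.randbelow(R - 1) + 1   # nonzero slope
        b = secrets.randbelow(R)           # uniform offset: perfectly hides a*x
        return a, b

    def encode(key, x):
        a, b = key
        return (a * x + b) % R             # the label L_i(x_i) seen by the evaluator

    key = sample_affine()
    label = encode(key, 42)                # uniform in Z_R without knowledge of key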

In this work, we present new ways to garble arithmetic circuits, which improve the state of the art on efficiency, modularity, and functionality. To measure efficiency, we define the rate of a garbling scheme as the maximal ratio between the bit-length of the garbled circuit $|\widehat C|$ and that of the computation tableau $|C|\ell$ in the clear, where $\ell$ is the bit length of wire values (e.g., Yao's garbled circuit has rate $O(\lambda)$).
• We present the first constant-rate arithmetic garbled circuit for computation over large integers based on the Decisional Composite Residuosity (DCR) assumption, significantly improving the efficiency of the schemes of Applebaum, Ishai, and Kushilevitz.
• We construct an arithmetic garbling scheme for modular computation over $\mathcal{R} = \mathbb{Z}_p$ for any integer modulus $p$, based on either DCR or LWE. The DCR-based instantiation achieves rate $O(\lambda)$ for large $p$. Furthermore, our construction is modular and makes black-box use of the underlying ring and a simple key extension gadget.
• We describe a variant of the first scheme supporting arithmetic circuits over bounded integers that are augmented with Boolean computation (e.g., truncation of an integer value, and comparison between two values), while keeping the constant rate when garbling the arithmetic part.

To the best of our knowledge, constant-rate (Boolean or arithmetic) garbling was previously achieved only by using the powerful primitive of indistinguishability obfuscation, or for restricted circuits of small depth.
Giulio Malavolta, Michael Walter
ePrint Report
Quantum key distribution (QKD) allows Alice and Bob to agree on a shared secret key, while communicating over a public (untrusted) quantum channel. Compared to classical key exchange, it has two main advantages:

(i) the key is unconditionally hidden from any attacker, and

(ii) its security assumes only the existence of authenticated classical channels which, in practice, can be realized using Minicrypt assumptions, such as the existence of digital signatures.

On the flip side, QKD protocols typically require multiple rounds of interaction, whereas classical key exchange can be realized with the minimum of two messages. A long-standing open question is whether QKD requires more rounds of interaction than classical key exchange.

In this work, we propose a two-message QKD protocol that satisfies everlasting security, assuming only the existence of quantum-secure one-way functions. That is, the shared key is unconditionally hidden, provided computational assumptions hold during the protocol execution. Our result follows from a new quantum cryptographic primitive that we introduce in this work: the quantum-public-key one-time pad, a public-key analogue of the well-known one-time pad.
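For background, the classical one-time pad that the new primitive mirrors is summarized below (the textbook construction, not the paper's quantum-public-key variant):

    # Classical one-time pad: unconditionally hiding if the key is uniform,
    # as long as the message, and never reused.
    import secrets

    def otp(key: bytes, msg: bytes) -> bytes:
        assert len(key) == len(msg)
        return bytes(k ^ m for k, m in zip(key, msg))

    key = secrets.token_bytes(16)
    ct = otp(key, b"attack at dawn!!")
    assert otp(key, ct) == b"attack at dawn!!"  # decryption is the same XOR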
Andreas Brüggemann, Robin Hundt, Thomas Schneider, Ajith Suresh, Hossein Yalame
ePrint Report
The concept of using Lookup Tables (LUTs) instead of Boolean circuits is well known and has been widely applied in a variety of applications, including FPGAs, image processing, and database management systems. In cryptography, using such LUTs instead of conventional gates like AND and XOR results in more compact circuits and has been shown to substantially improve online performance when evaluated with secure multi-party computation. Several recent works on secure floating-point computations and privacy-preserving machine learning inference rely heavily on existing LUT techniques. However, they suffer from either large overhead in the setup phase or subpar online performance.
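To illustrate the gate-vs-LUT trade-off in isolation (a generic example, not any of the protocols discussed here), a two-gate subcircuit such as z = (a AND b) XOR c collapses into a single 3-input table:

    # One 3-input LUT replaces the two-gate circuit z = (a AND b) XOR c.
    LUT = [(a & b) ^ c for a in (0, 1) for b in (0, 1) for c in (0, 1)]

    def eval_lut(a: int, b: int, c: int) -> int:
        return LUT[(a << 2) | (b << 1) | c]  # one table lookup instead of two gates

    assert eval_lut(1, 1, 0) == 1 and eval_lut(1, 1, 1) == 0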

We propose FLUTE, a novel protocol for secure LUT evaluation with good setup and online performance. In a two-party setting, we show that FLUTE matches or even outperforms the online performance of all prior approaches, while being competitive in terms of overall performance with the best prior LUT protocols. In addition, we provide an open-source implementation of FLUTE written in the Rust programming language, and implementations of the Boolean secure two-party computation protocols of ABY2.0 and silent OT. We find that FLUTE outperforms the state of the art by two orders of magnitude in the online phase while retaining similar overall communication.
Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Francois Garillot, Jonas Lindstrom, Ben Riva, Arnab Roy, Alberto Sonnino, Pun Waiwitlikhit, Joy Wang
ePrint Report
We propose a variant of the original Boneh, Drijvers, and Neven (Asiacrypt '18) BLS multi-signature aggregation scheme best suited to applications where the full set of potential signers is fixed and known, and any subset $I$ of this group can create a multi-signature over a message $m$. This setup is very common in proof-of-stake blockchains, where a $2f+1$ majority of $3f$ validators signs transactions and/or blocks. Our scheme is secure against $\textit{rogue-key}$ attacks without requiring a proof-of-key-possession mechanism.

In our scheme, instead of randomizing the aggregated signatures, we have a one-time randomization phase of the public keys: each public key is replaced by a sticky randomized version (for which each participant can still compute the derived private key). The main benefit compared to the original Boneh et al. approach is that, since our randomization process happens only once and not per signature, we achieve significant savings during aggregation and verification. Specifically, for a subset $I$ of $t$ signers, we save $t$ exponentiations in $\mathbb{G}_2$ at aggregation and $t$ exponentiations in $\mathbb{G}_1$ at verification, or vice versa, depending on which BLS mode we prefer: $\textit{minPK}$ (public keys in $\mathbb{G}_1$) or $\textit{minSig}$ (signatures in $\mathbb{G}_1$).
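Under our reading of the abstract, the savings in $\textit{minPK}$ mode can be pictured schematically (notation illustrative): Boneh et al. aggregate as $\sigma=\prod_{i\in I}\sigma_i^{a_i}$ and verify against $\mathsf{apk}=\prod_{i\in I}\mathsf{pk}_i^{a_i}$, with hash-derived exponents $a_i$ recomputed for every multi-signature at a cost of $t$ exponentiations in each group; with sticky keys $\mathsf{pk}_i'=\mathsf{pk}_i^{r_i}$ fixed once at setup, these collapse to the plain products $\sigma=\prod_{i\in I}\sigma_i'$ and $\mathsf{apk}=\prod_{i\in I}\mathsf{pk}_i'$.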

Interestingly, our security proof requires a significant departure from the co-CDH based proof of Boneh et al. When $n$ (the size of the universal set of signers) is small, we prove our protocol secure in the Algebraic Group and Random Oracle models based on the Discrete Log problem. For larger $n$, our proof also requires the Random Modular Subset Sum (RMSS) problem.
Sergey Agievich
ePrint Report
Using the representation of bent functions by bent rectangles, that is, special matrices with restrictions on columns and rows, we obtain an upper bound on the number of bent functions that improves previously known bounds in a practical range of dimensions. The core of our method is the following fact based on the recent observation by Potapov (arXiv:2107.14583): a 2-row bent rectangle is completely defined by one of its rows and the remaining values in slightly more than half of the columns.
Xichao Hu, Yongqiang Li, Lin Jiao, Zhengbin Liu, Mingsheng Wang
ePrint Report
The zero-correlation linear attack is a powerful attack on block ciphers. The lowest number of rounds (LNR) for which no distinguisher (called a zero-correlation linear approximation, ZCLA) exists reflects a block cipher's resistance to this attack. However, due to the large search space, showing that no ZCLAs exist for a given block cipher under a certain number of rounds is a very hard task. Thus, previous works could only prove the absence of ZCLAs in a small search space, such as ZCLAs with 1-bit/nibble/word active inputs and outputs, which leaves a very large gap to showing that no ZCLAs exist in the whole search space.

In this paper, we propose a meet-in-the-middle method and a double-collision method to show that no ZCLAs exist in the whole search space. The basic ideas of these two methods are very simple, but they work very effectively. We apply both methods to AES, Midori64, and ARIA, and show that no ZCLAs exist for $5$-round AES without the last MixColumns layer, $7$-round Midori64 without the last MixColumn layer, and $5$-round ARIA without the last linear layer.

As far as we know, ours is the first automatic method that can show that no ZCLAs exist in the whole search space. It thus provides strong evidence for the security of a block cipher against zero-correlation linear attacks at the distinguisher level, which is very useful when designing block ciphers.

05 April 2023

Agnese Gini, Pierrick Méaux
ePrint Report
In this article we study the Algebraic Immunity (AI) of Weightwise Perfectly Balanced (WPB) functions. After showing a lower bound on the AI of two classes of WPB functions from the previous literature, we prove that the minimal AI of an $n$-variable WPB function is constant, equal to $2$ for $n\ge 4$. Then, we compute the distribution of the AI of WPB functions in $4$ variables, and estimate it in $8$ and $16$ variables. For these values of $n$ we observe that a large majority of WPB functions have optimal AI, and that we could not obtain an AI-$2$ WPB function by sampling at random. Finally, we address the problem of constructing WPB functions with bounded algebraic immunity, exploiting a construction from 2022 by Gini and Méaux. In particular, we present a method to generate multiple WPB functions with minimal AI, and we prove that the WPB functions with high nonlinearity exhibited by Gini and Méaux also have minimal AI. We conclude with a construction giving WPB functions with lower-bounded AI, and give as an example a family all of whose elements have AI at least $n/2-\log(n)+1$.
Quang Dao, Paul Grubbs
ePrint Report
Increasing deployment of advanced zero-knowledge proof systems, especially zkSNARKs, has raised critical questions about their security against real-world attacks. Two classes of attacks of concern in practice are adaptive soundness attacks, where an attacker can prove false statements by choosing its public input after generating a proof, and malleability attacks, where an attacker can use a valid proof to create another valid proof it could not have created itself. Prior work has shown that simulation-extractability (SIM-EXT), a strong notion of security for proof systems, rules out these attacks. In this paper, we prove that two transparent, discrete-log-based zkSNARKs, Spartan and Bulletproofs, are simulation-extractable (SIM-EXT) in the random oracle model if the discrete logarithm assumption holds in the underlying group. Since these assumptions are required to prove standard security properties for Spartan and Bulletproofs, our results show that SIM-EXT is, surprisingly, "for free" with these schemes. Our result is the first SIM-EXT proof for Spartan and encompasses both linear- and sublinear-verifier variants. Our result for Bulletproofs encompasses both the aggregate range proof and arithmetic circuit variants, and is the first to not rely on the algebraic group model (AGM), resolving an open question posed by Ganesh et al. (EUROCRYPT '22). As part of our analysis, we develop a generalization of the tree-builder extraction theorem of Attema et al. (TCC '22), which may be of independent interest.
Yufan Jiang, Yong Li
ePrint Report
Tremendous efforts have been made to improve the efficiency of secure Multi-Party Computation (MPC), which allows n ≥ 2 parties to jointly evaluate a target function without leaking their own private inputs. Previous research has confirmed that 3-Party Computation (3PC) and outsourcing computations to GPUs can lead to huge performance improvements for MPC in computationally intensive tasks such as Privacy-Preserving Machine Learning (PPML). A natural question to ask is whether super-linear performance gain is possible for a linear increase in resources. In this paper, we give an affirmative answer to this question. We propose Force, an extremely efficient 4PC system for PPML. To the best of our knowledge, each party in Force performs the smallest number of local computations and exchanges the least data with other parties. This is achieved by introducing a new sharing type, X-share, along with MPC protocols for privacy-preserving training and inference that are semi-honest secure with an honest majority. Our contribution does not stop at theory. We also propose engineering optimizations and verify the high performance of the protocols with implementation and experiments. By comparing the results with state-of-the-art systems such as Cheetah, Piranha, CryptGPU and CrypTen, we showcase that Force is sound and extremely efficient, as it improves PPML performance by a factor of 2 to 1200 compared with the latest 2PC, 3PC, and 4PC systems.
Carlos Aguilar-Melchor, Martin R. Albrecht, Thomas Bailleux, Nina Bindel, James Howe, Andreas Hülsing, David Joseph, Marc Manzano
ePrint Report
We revisit batch signatures (previously considered in a draft RFC, and used in multiple recent works), where a single, potentially expensive, "inner" digital signature authenticates a Merkle tree constructed from many messages. We formalise a construction and prove its unforgeability and privacy properties.
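A minimal sketch of the batching idea follows; the hash and tree details are illustrative, and inner_sign stands in for an arbitrary (potentially expensive) signing algorithm:

    # Sketch: one "inner" signature over a Merkle root authenticates a whole batch.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(messages):
        level = [h(m) for m in messages]
        while len(level) > 1:
            if len(level) % 2:                # duplicate the last node if odd
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def batch_sign(inner_sign, messages):
        sig = inner_sign(merkle_root(messages))   # one expensive signature
        # A signature for message j is (sig, Merkle path of leaf j): the short
        # path accounts for the small per-message bandwidth overhead.
        return sig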

We also show that batch signing allows us to scale slow signing algorithms, such as those recently selected for standardisation as part of NIST's post-quantum project, to high throughput, with a mild increase in latency. For the example of Falcon-512 in TLS, we can increase the number of connections per second by a factor of 3.2, at the cost of an increase in the signature size by ~14% and in the median latency by ~25%, where both variants are run on the same 30-core server.

We also discuss applications where batch signatures allow us to increase throughput and to save bandwidth. For example, again for Falcon-512, once one batch signature is available, the additional bandwidth for each of the remaining N-1 signatures is only 82 bytes.
Samuel Bedassa Alemu, Julia Kastner
ePrint Report
Blind signatures were originally introduced by Chaum (CRYPTO ’82) in the context of privacy-preserving electronic payment systems. Nowadays, the cryptographic primitive has also found applications in anonymous credentials and voting systems. However, many practical blind signature schemes have only been analysed in the game-based setting where a single signer is present. This is somewhat unsatisfactory as blind signatures are intended to be deployed in a setting with many signers. We address this in the following ways:
– We formalise two variants of one-more-unforgeability of blind signatures in the Multi-Signer Setting.
– We show that one-more-unforgeability in the Single-Signer Setting translates straightforwardly to the Multi-Signer Setting with a reduction loss proportional to the number of signers.
– We identify a class of blind signature schemes, which we call Key-Convertible, where this reduction loss can be traded for an increased number of signing sessions in the Single-Signer Setting, and show that many practical blind signature schemes such as blind BLS, blind Schnorr, and blind Okamoto-Schnorr, as well as two pairing-free, ROS-immune schemes by Tessaro and Zhu (Eurocrypt ’22), fulfil this property.
– We further describe how the notion of key substitution attacks (Menezes and Smart, DCC ’04) can be translated to blind signatures, and provide a generic transformation showing how they can be avoided.
Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Takashi Yamakawa
ePrint Report
We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) has also been proposed in some recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Grilo-Sattath-Vu, eprint:2023/345; Barooti-Malavolta-Walter, eprint:2023/306]. However, these schemes have a huge drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels. Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction, based only on one-way functions, is the first quantum public-key encryption that achieves the goal of classical public-key encryption: to establish secure communication over insecure channels.
Eric Sageloli, Pierre Pébereau, Pierrick Méaux, Céline Chevalier
ePrint Report
We provide identity-based signature (IBS) schemes with tight security against adaptive adversaries, in the (classical or quantum) random oracle model (ROM or QROM), in both unstructured and structured lattices, based on the SIS or RSIS assumption. These signatures are short (of size independent of the message length). Our schemes build upon a work from Pan and Wagner (PQCrypto’21) and improve on it in several ways. First, we prove their transformation from non-adaptive to adaptive IBS in the QROM. Then, we simplify the parameters used and give concrete values. Finally, we simplify the signature scheme by using a non-homogeneous relation, which helps us reduce the size of the signature and get rid of one costly trapdoor delegation. On the whole, we get better security bounds, shorter signatures and faster algorithms.
Sagnik Saha, Nikolaj Ignatieff Schwartzbach, Prashant Nalini Vasudevan
ePrint Report
In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\ldots,M-1\}$, the objective is to find a set of $k$ numbers that sum to 0 modulo $M$ (this set is called a "solution"). In the related $k$-XOR problem, given $r$ uniformly random Boolean vectors of length $\log M$, the objective is to find a set of $k$ of them whose bitwise XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis.

The feasibility and complexity of these problems depends on the relative values of $k$, $r$, and $M$. The dense regime of $M \leq r^k$, where solutions exist with high probability, is quite well-understood and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of $M\gg r^k$, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings.

We study the planted $k$-SUM and $k$-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As $M$ increases past $r^k$, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems.
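As a concrete toy example of planting (demonstration only; the brute-force search below is exponential in $k$ and far from the algorithms studied in the paper):

    # Plant a k-subset with all-zero XOR among r random log2(M)-bit vectors.
    import secrets
    from itertools import combinations

    def plant_kxor(r, k, log_m):
        inst = [secrets.randbits(log_m) for _ in range(r)]
        acc = 0
        for i in range(k - 1):       # plant in the first k slots, w.l.o.g.
            acc ^= inst[i]
        inst[k - 1] = acc            # now inst[0] ^ ... ^ inst[k-1] == 0
        return inst

    def brute_force(inst, k):
        for S in combinations(range(len(inst)), k):
            acc = 0
            for i in S:
                acc ^= inst[i]
            if acc == 0:
                return S             # a solution (planted or accidental)

    inst = plant_kxor(r=16, k=3, log_m=24)
    assert brute_force(inst, 3) is not None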

Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) $k$-SUM when $M = r^k$, we show non-trivial lower bounds on the running time of algorithms for planted $k$-SUM when $r^k\leq M\leq r^{2k}$. We show the same for $k$-XOR as well.

Search-to-Decision Reduction. For any $M>r^k$, suppose there is an algorithm running in time $T$ that can distinguish between a random $k$-SUM instance and a random instance with a planted solution, with success probability $(1-o(1))$. Then, for the same $M$, there is an algorithm running in time $\tilde{O}(T)$ that solves planted $k$-SUM with constant probability. The same holds for $k$-XOR as well.

Hardness Amplification. For any $M \geq r^k$, if an algorithm running in time $T$ solves planted $k$-XOR with success probability $\Omega(1/\text{polylog}(r))$, then there is an algorithm running in time $\tilde O(T)$ that solves it with probability $(1-o(1))$. We show this by constructing a rapidly mixing random walk over $k$-XOR instances that preserves the planted solution.

Cryptography. For some $M \leq 2^{\text{polylog}(r)}$, the hardness of the $k$-XOR problem can be used to construct Public-Key Encryption (PKE) assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for $2^{n^{0.01}}$-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of $O(1/\sqrt{n})$, or hardness for $2^{n^{0.5}}$-time algorithms.

Algorithms. For any $M \geq 2^{r^2}$, there is a constant $c$ (independent of $k$) and an algorithm running in time $r^c$ that, for any $k$, solves planted $k$-SUM with success probability $\Omega(1/8^k)$. We get this by showing an average-case reduction from planted $k$-SUM to the Subset Sum problem. For $r^k \leq M \ll 2^{r^2}$, the best known algorithms are still the worst-case $k$-SUM algorithms running in time $r^{\lceil{k/2}\rceil-o(1)}$.
Nouri Alnahawi, Nicolai Schmitt, Alexander Wiesmaier, Andreas Heinemann, Tobias Grasmeyer
ePrint Report
The demand for crypto-agility, although dating back more than two decades, has recently started to increase in light of the expected post-quantum cryptography (PQC) migration, and the notion has begun to evolve into a science of its own. It is therefore important to establish a unified definition of the notion, as well as its related aspects, scope, and practical applications. This paper presents a literature survey on crypto-agility and discusses respective development efforts categorized into different areas, including requirements, characteristics, and possible challenges. We explore the need for crypto-agility beyond PQC algorithms and security protocols and shed some light on current solutions, existing automation mechanisms, and best practices in this field. We evaluate the state of readiness for crypto-agility, and offer a discussion of the identified open issues. The results of our survey indicate a need for a comprehensive understanding. Further, more agile design paradigms are required in developing new IT systems, and in refactoring existing ones, in order to realize crypto-agility on a broad scale.
Yiping Ma, Jess Woods, Sebastian Angel, Antigoni Polychroniadou, Tal Rabin
ePrint Report
This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients that fits the stringent needs of federated learning settings. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning in which many consecutive summations (averages) of model weights are performed to derive a good model. Prior works, designed for a single round, when adapted to the multi-round setting, require all clients to establish pairwise secrets per round, which is onerous when the number of clients is large and clients have varying network conditions. Flamingo introduces a novel protocol for reusing pairwise secrets to reduce the overall communication-round complexity and a new lightweight dropout resilience protocol to ensure that if clients leave in the middle of a sum the server can still obtain a meaningful result. These techniques help Flamingo reduce the end-to-end runtime and the number of interactions between clients and the server for a full training session, compared to prior work. We implement and evaluate Flamingo and show that it can effectively be used to securely train a neural network on the (Extended) MNIST and CIFAR-100 datasets, and the model converges without a loss in accuracy, compared to a non-private federated learning system.
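For intuition, the classic pairwise-masking technique that such protocols build on can be sketched in a few lines (the general idea only; Flamingo's contribution lies in reusing these secrets across rounds and handling dropouts):

    # Pairwise masking: per-pair masks cancel in the sum, revealing only the total.
    import secrets

    M = 2**32                               # inputs are summed modulo M

    def masked_input(i, x, seeds, n):
        y = x % M
        for j in range(n):
            if j != i:
                s = seeds[min(i, j), max(i, j)]
                y = (y + s) % M if i < j else (y - s) % M
        return y

    n, xs = 4, [3, 1, 4, 1]
    seeds = {(i, j): secrets.randbelow(M) for i in range(n) for j in range(i + 1, n)}
    total = sum(masked_input(i, xs[i], seeds, n) for i in range(n)) % M
    assert total == sum(xs) % M             # all masks cancel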
Martin R. Albrecht, Sofía Celi, Benjamin Dowling, Daniel Jones
ePrint Report
We report several practically-exploitable cryptographic vulnerabilities in the Matrix standard for federated real-time communication and its flagship client and prototype implementation, Element. Together, these invalidate the confidentiality and authentication guarantees claimed by Matrix against a malicious server. This is despite Matrix's cryptographic routines being constructed from well-known and -studied cryptographic building blocks. The vulnerabilities we exploit differ in their nature (insecure by design, protocol confusion, lack of domain separation, implementation bugs) and are distributed broadly across the different subprotocols and libraries that make up the cryptographic core of Matrix and Element. Together, these vulnerabilities highlight the need for a systematic and formal analysis of the cryptography in the Matrix standard.
Kamyar Mohajerani, Luke Beckwith, Abubakr Abdulgadir, Eduardo Ferrufino, Jens-Peter Kaps, Kris Gaj
ePrint Report
Side-channel resistance is one of the primary criteria identified by NIST for use in evaluating candidates in the Lightweight Cryptography (LWC) Standardization process. In Rounds 1 and 2 of this process, when the number of candidates was still substantial (56 and 32, respectively), evaluating this feature was close to impossible. With ten finalists remaining, side-channel resistance and its effect on the performance and cost of practical implementations became of utmost importance. In this paper, we describe a general framework for evaluating the side-channel resistance of LWC candidates using resources, experience, and general practices of the cryptographic engineering community developed over the last two decades. The primary features of our approach are a) self-identification and self-characterization of side-channel security evaluation labs, b) distributed development of protected hardware and software implementations, matching certain high-level requirements and deliverable formats, and c) dynamic and transparent matching of evaluators with implementers in order to achieve the most meaningful and fair evaluation report. After the classes of hardware implementations with similar resistance to side-channel attacks are established, these implementations are comprehensively benchmarked using Xilinx Artix-7 FPGAs. All implementations belonging to the same class are then ranked according to several performance and cost metrics. Four candidates - Ascon, Xoodyak, TinyJAMBU, and ISAP - are selected as offering unique advantages over other finalists in terms of the throughput, area, throughput-to-area ratio, or randomness requirements of their protected hardware implementations.
Uddipana Dowerah, Subhranil Dutta, Aikaterini Mitrokotsa, Sayantan Mukherjee, Tapas Pal
ePrint Report
Predicate inner product functional encryption (P-IPFE) is essentially attribute-based IPFE (AB-IPFE) which additionally hides the attributes associated to ciphertexts. In a P-IPFE, a message x is encrypted under an attribute w, and a secret key is generated for a pair (y, v) such that recovery of ⟨x, y⟩ requires the vectors w, v to satisfy a linear relation. We call a P-IPFE unbounded if it can encrypt attribute and message vectors of unbounded length.
• Zero-predicate IPFE. We construct the first unbounded zero-predicate IPFE (UZP-IPFE) which recovers ⟨x, y⟩ if ⟨w, v⟩ = 0. This construction is inspired by the unbounded IPFE of Tomida and Takashima (ASIACRYPT 2018) and the unbounded zero inner product encryption of Okamoto and Takashima (ASIACRYPT 2012). The UZP-IPFE stands secure against general attackers capable of decrypting the challenge ciphertext. Concretely, it provides full attribute-hiding security in the indistinguishability-based semi-adaptive model under the standard symmetric external Diffie-Hellman assumption.
• Non-zero-predicate IPFE. We present the first unbounded non-zero-predicate IPFE (UNP-IPFE) that successfully recovers ⟨x, y⟩ if ⟨w, v⟩ ≠ 0. We generically transform an unbounded quadratic FE (UQFE) scheme to weak attribute-hiding UNP-IPFE in both public and secret key settings. Interestingly, our secret key simulation-secure UNP-IPFE has succinct secret keys and is constructed from a novel succinct UQFE that we build in the random oracle model. We leave the problem of constructing a succinct public key UNP-IPFE or UQFE in the standard model as an important open problem.
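The ideal behavior of the two variants is easy to state in code (functionality only, no cryptography; names are illustrative):

    # Ideal functionality of predicate IPFE over integer vectors.
    def ip(u, v):
        return sum(a * b for a, b in zip(u, v))

    def p_ipfe_decrypt(x, w, y, v, nonzero=False):
        # ciphertext encodes (x, w); the key is issued for (y, v)
        ok = (ip(w, v) != 0) if nonzero else (ip(w, v) == 0)
        return ip(x, y) if ok else None     # w itself stays hidden either way

    assert p_ipfe_decrypt([1, 2], [1, -1], [3, 4], [2, 2]) == 11  # here ⟨w,v⟩ = 0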