International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

07 October 2021

Julien Duman, Kathrin Hövelmanns, Eike Kiltz, Vadim Lyubashevsky, Gregor Seiler, Dominique Unruh
ePrint Report
Cryptography based on the hardness of lattice problems over polynomial rings currently provides the most practical solution for public key encryption in the quantum era. The first encryption scheme utilizing properties of polynomial rings was NTRU (ANTS '98), but over the past decade, most research has focused on constructing schemes based on the hardness of the somewhat related Ring/Module-LWE problem. Indeed, 14 out of the 17 encryption schemes based on the hardness of lattice problems in polynomial rings submitted to the first round of the NIST standardization process used some version of Ring/Module-LWE, with the other three being based on NTRU.

The preference for using Ring/Module-LWE is due to the fact that this problem is at least as hard as NTRU, offers more flexibility in the algebraic structure because no polynomial division is necessary, and has a decryption error that is independent of the message. And indeed, the practical NTRU encryption schemes in the literature generally lag behind their Ring/Module-LWE counterparts in compactness, speed, or both.

In this paper, we put the efficiency of NTRU-based schemes on equal (even slightly better, actually) footing with their Ring/Module-LWE counterparts. We provide several instantiations and transformations, with security given in the ROM and the QROM, that detach the decryption error from the message, thus eliminating the adversary's power to have any effect on it, which ultimately allows us to decrease parameter sizes. The resulting schemes are on par, compactness-wise, with their counterparts based on Ring/Module-LWE. Performance-wise, the NTRU schemes instantiated in this paper over NTT-friendly rings of the form $Z_q[X]/(X^d-X^{d/2}+1)$ are the fastest of all public key encryption schemes, whether quantum-safe or not. When compared to the NIST finalist NTRU-HRSS-701, our scheme is $15\%$ more compact and has a $15$X improvement in the round-trip time of ephemeral key exchange, with key generation being $35$X faster, encapsulation being $6$X faster, and decapsulation enjoying a $9$X speedup.
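To make the ring structure concrete, here is a minimal Python sketch (not the authors' NTT-based implementation) of multiplication in $Z_q[X]/(X^d-X^{d/2}+1)$: schoolbook multiplication followed by reduction via $X^d \equiv X^{d/2}-1$. The parameters d and q below are illustrative assumptions, not the paper's instantiations.

    # Minimal sketch of arithmetic in Z_q[X]/(X^d - X^{d/2} + 1); illustrative
    # parameters only, not the NTT-friendly instantiations used in the paper.
    def ring_mul(a, b, d, q):
        # a, b: coefficient lists of length d (ascending degree)
        prod = [0] * (2 * d - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % q
        # reduce degrees >= d using X^d = X^(d/2) - 1, from the top degree down
        for k in range(2 * d - 2, d - 1, -1):
            c = prod[k]
            if c:
                prod[k] = 0
                prod[k - d // 2] = (prod[k - d // 2] + c) % q
                prod[k - d] = (prod[k - d] - c) % q
        return prod[:d]

    # example: X^(d-1) * X reduces to X^(d/2) - 1 in this ring
    d, q = 8, 3329
    x_pow_dm1 = [0] * (d - 1) + [1]
    x = [0, 1] + [0] * (d - 2)
    print(ring_mul(x_pow_dm1, x, d, q))  # [3328, 0, 0, 0, 1, 0, 0, 0]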
Julien Duman, Eike Kiltz, Kathrin Hövelmanns, Vadim Lyubashevsky, Gregor Seiler
ePrint Report
Constructing an efficient CCA-secure KEM is generally done by first constructing a passively-secure PKE scheme, and then applying the Fujisaki-Okamoto (FO) transformation. The original FO transformation was designed to offer security in a single-user setting. A stronger notion, known as multi-user security, considers the attacker's advantage in breaking one of many users' ciphertexts. Bellare et al.~(EUROCRYPT 2020) showed that standard single-user security implies multi-user security with a multiplicative tightness gap equal to the number of users.

To obtain even more confidence in the security of KEMs in the multi-user setting, it is a common design paradigm to also ``domain separate'' the random oracles of each user by including his public key as an input to the hash function. We are not aware of any formal analysis of this technique, but it was at least informally thought to be a computationally cheap way to add security. This design principle was carried over into the FO transformations used by several schemes in the NIST post-quantum standardization effort -- notably the lattice-based schemes Kyber and Saber, which are two of the four KEM finalists.

In this work, we formally analyze domain separation in the context of the FO transformation in the multi-user setting. We first show that including the public key in the hash function is indeed important for the tightness of the security reductions in the ROM and the QROM. At the same time, we show that including the \emph{entire} public key into the hash function is unnecessarily wasteful -- it is enough to include just a small (e.g. $32$ byte) unpredictable part of the key to achieve the same security. Reducing the input of the hash function results in a very noticeable improvement in the running time of the lattice-based KEMs. In particular, using this generic transform results in a 2X - 3X speed-up over the current (Round 3) key generation and encapsulation procedures in Kyber, and up to a $40\%$ improvement in the same functions in Saber.
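A small Python sketch of the idea (a hedged illustration, not Kyber or Saber code): the FO-style hash is domain-separated per user, but only a short unpredictable slice of the public key is fed to the hash instead of the whole key. Which 32 bytes of a real public key are unpredictable is scheme-specific and is assumed here to be the first 32 bytes purely for illustration.

    import hashlib

    # Hedged illustration of domain separation in an FO-style transform.
    # derive_coins_full hashes the entire public key; derive_coins_prefix
    # hashes only a short unpredictable part of it (assumed pk[:32] here).
    def derive_coins_full(pk: bytes, m: bytes) -> bytes:
        return hashlib.sha3_512(pk + m).digest()

    def derive_coins_prefix(pk: bytes, m: bytes) -> bytes:
        return hashlib.sha3_512(pk[:32] + m).digest()

The saving reported in the paper comes from no longer absorbing the full, comparatively large public key into the hash during key generation and encapsulation.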
Yan Ji, Konstantinos Chalkias
ePrint Report
Proof of liabilities (PoL) allows a prover to prove his/her liabilities to a group of verifiers. This is a cryptographic primitive once used only for proving financial solvency but is also applicable to domains outside finance, including transparent and private donations, new algorithms for disapproval voting and publicly verifiable official reports such as COVID-19 daily cases. These applications share a common nature in incentives: it's not in the prover's interest to increase his/her total liabilities. We generalize PoL for these applications by attempting for the first time to standardize the goals it should achieve from security, privacy and efficiency perspectives. We also propose DAPOL+, a concrete PoL scheme extending the state-of-the-art DAPOL protocol but providing provable security and privacy, with benchmark results demonstrating its practicality. In addition, we explore techniques to provide additional features that might be desired in different applications of PoL and measure the asymptotic probability of failure.
Saikrishna Badrinarayanan, Peihan Miao, Tiancheng Xie
ePrint Report
Private set intersection (PSI) allows two mutually distrusting parties, each with a set as input, to learn the intersection of both their sets without revealing anything more about their respective input sets. Traditionally, PSI studies the static setting where the computation is performed only once on both parties' input sets. We initiate the study of updatable private set intersection (UPSI), which allows parties to compute the intersection of their private sets on a regular basis with sets that also constantly get updated. We consider two specific settings. In the first setting, called UPSI with addition, parties can add new elements to their old sets. We construct two protocols in this setting, one allowing both parties to learn the output and the other only allowing one party to learn the output. In the second setting, called UPSI with weak deletion, parties can additionally delete their old elements every $t$ days. We present a protocol for this setting allowing both parties to learn the output. All our protocols are secure against semi-honest adversaries and have the guarantee that both the computational and communication complexity only grow with the set updates instead of the entire sets.

Finally, we implement our UPSI with addition protocols and compare with the state-of-the-art PSI protocols. Our protocols compare favorably when the total set size is sufficiently large, the new updates are sufficiently small, or in networks with low bandwidth.
Xavier Bonnetain, André Schrottenloher, Ferdinand Sibleyras
ePrint Report
In this paper, we report the first quantum key-recovery attack on a symmetric block cipher design, using classical queries only, with a more than quadratic time speedup compared to the best classical attack.

We study the 2XOR-Cascade construction of Gaži and Tessaro (EUROCRYPT~2012). It is a key length extension technique which provides an n-bit block cipher with 5n/2 bits of security out of an n-bit block cipher with 2n bits of key, with a security proof in the ideal model. We show that the offline-Simon algorithm of Bonnetain et al. (ASIACRYPT~2019) can be extended to, in particular, attack this construction in quantum time Õ(2^n), providing a 2.5 quantum speedup over the best classical attack.

Regarding post-quantum security of symmetric ciphers, it is commonly assumed that doubling the key sizes is a sufficient precaution. This is because Grover's quantum search algorithm, and its derivatives, can only reach a quadratic speedup at most. Our attack shows that the structure of some symmetric constructions can be exploited to overcome this limit. In particular, the 2XOR-Cascade cannot be used to generically strengthen block ciphers against quantum adversaries, as it would offer only the same security as the block cipher itself.
Zhaomin Yang, Xiang Xie, Huajie Shen, Shiying Chen, Jun Zhou
ePrint Report
We present fully homomorphic encryption schemes for fixed-point arithmetic with fixed precision. Our scheme achieves $\mathsf{IND}$-$\mathsf{CPA^D}$ security and uses an $\mathsf{RLWE}$ ring of dimension $2^{13}$ or less. Our techniques can also be extended to construct fully homomorphic encryption schemes for approximate numbers with $\mathsf{IND}$-$\mathsf{CPA}$ security. The bootstrapping process of our $\mathsf{IND}$-$\mathsf{CPA}$ scheme preserves about 39-bit precision with ring dimension $2^{13}$, which is the first construction that preserves high precision while keeping the parameters small.

The core technique in this paper is a new and efficient functional bootstrapping algorithm that avoids the negacyclicity constraint on the evaluated functions, which enables us to extract blocks of bits homomorphically. This new functional bootstrapping algorithm can be applied to the BFV and TFHE schemes as well, and is of independent interest.
Sébastien Canard, Nicolas Desmoulins, Sébastien Hallay, Adel Hamdi, Dominique Le Hello
ePrint Report
The preponderance of smart devices, such as smartphones, has boosted the development and use of mobile applications (apps) in recent years. This prevalence induces a large volume of mobile app usage data. The analysis of such information could lead to a better understanding of users' behaviour in using the apps they have installed, even more so if these data can be coupled with a given context (location, time, date, sociological data...). However, mobile and app usage data are very sensitive and are today considered personal. Their collection and use pose serious concerns for individuals' privacy. To reconcile the harnessing of data with users' privacy, we investigate in this paper the possibility of computing privacy-preserving mobile app usage statistics that prevent any inference or re-identification risks. The key idea is for each user to encrypt their (private and sensitive) inputs before sending them to the data processor. Statistics on those data can then be computed thanks to functional encryption, a cryptographic building block that permits performing certain allowed operations over encrypted data. In this paper, we first show how such per-app usage data can be obtained, a step that is necessary for our use case but can at the same time pose some security problems w.r.t. those apps. We then design our new encryption scheme, adding a fault-tolerance property to a recent dynamic decentralized functional encryption scheme. We finally describe our implementation and give some benchmarks.
Subhadeep Banik, Khashayar Barooti, Serge Vaudenay, Hailun Yan
ePrint Report
Cryptanalysis of the LowMC block cipher when the attacker has access to a single known plaintext/ciphertext pair is a mathematically challenging problem. This is because the attacker is unable to employ most of the standard techniques in symmetric cryptography like linear and differential cryptanalysis. This scenario is particularly relevant while arguing the security of the Picnic digital signature scheme, in which the plaintext/ciphertext pair generated by the LowMC block cipher serves as the public (verification) key and the corresponding LowMC encryption key also serves as the secret (signing) key of the signature scheme. In the paper by Banik et al. (IACR ToSC 2020:4), the authors used a linearization technique of the LowMC S-box to mount attacks on some instances of the block cipher. In this paper, we first give a more precise complexity analysis of the linearization attack. Then, we show how to perform a 2-stage MITM attack on LowMC. The first stage reduces the key candidates corresponding to a fraction of the key bits of the master key. The second MITM stage between this reduced candidate set and the remaining fraction of key bits successfully recovers the master key. We show that the combined computational complexity of both these stages is significantly lower than that reported in the ToSC paper by Banik et al.
Jan Richter-Brockmann, Ming-Shing Chen, Santosh Ghosh, Tim Güneysu
ePrint Report
BIKE is a Key Encapsulation Mechanism selected as an alternate candidate in NIST’s PQC standardization process, in which performance plays a significant role in the third round. This paper presents FPGA implementations of BIKE with the best area-time performance reported in the literature. We optimize two key arithmetic operations, namely the sparse polynomial multiplication and the polynomial inversion. Our sparse multiplier achieves time-constancy for sparse polynomials of indefinite Hamming weight used in BIKE’s encapsulation. The polynomial inversion is based on the extended Euclidean algorithm, which has not been used in BIKE implementations before. Our optimized design results in a 5.5 times faster key generation compared to previous implementations based on Fermat’s little theorem.
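For reference, the textbook extended Euclidean algorithm for inverting a binary polynomial is sketched below in Python, with polynomials encoded as integer bit masks. This is only the classic algorithm over GF(2)[X]; the paper's contribution is a hardware-optimized variant of it, and the modulus used here is a toy one rather than BIKE's.

    # Textbook extended Euclidean inversion in GF(2)[X]/(mod); polynomials are
    # encoded as integers whose bits are the coefficients. Toy example only.
    def deg(a):
        return a.bit_length() - 1

    def pmul(a, b):                      # carry-less multiplication over GF(2)
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    def pdivmod(a, b):                   # polynomial division over GF(2)
        q = 0
        while a and deg(a) >= deg(b):
            shift = deg(a) - deg(b)
            q ^= 1 << shift
            a ^= b << shift
        return q, a

    def pinv(a, mod):                    # a^(-1) modulo "mod", if it exists
        r0, r1, s0, s1 = mod, a, 0, 1    # invariant: r_i = s_i * a (mod "mod")
        while r1:
            q, r = pdivmod(r0, r1)
            r0, r1 = r1, r
            s0, s1 = s1, s0 ^ pmul(q, s1)
        assert r0 == 1, "not invertible"
        return s0

    print(pinv(0b11, 0b1011))            # (X+1)^(-1) mod X^3+X+1  ->  6 = X^2+X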

Besides the arithmetic optimizations, we present a unified hardware design of BIKE with shared resources and shared sub-modules among the KEM functionalities. On Xilinx Artix-7 FPGAs, our lightweight implementation consumes only 3 777 slices and performs key generation, encapsulation, and decapsulation in 3 797 µs, 443 µs, and 6 896 µs, respectively. Our high-speed design requires 7 332 slices and performs the three KEM operations in 1 672 µs, 132 µs, and 1 892 µs, respectively.
Hanlin Liu, Yu Yu
ePrint Report
Blum, Kalai and Wasserman (JACM 2003) gave the first sub-exponential algorithm to solve the Learning Parity with Noise (LPN) problem. In particular, consider the LPN problem with constant noise $\mu=(1-\gamma)/2$. The BKW algorithm solves it with space complexity $2^{\frac{(1+\epsilon)n}{\log n}}$ and time/sample complexity $2^{\frac{(1+\epsilon)n}{\log n}}\cdot 2^{O(n^{\frac{1}{1+\epsilon}})}$ for small constant $\epsilon\to 0^+$. We propose a variant of BKW by tweaking Wagner's generalized birthday problem (Crypto 2002) and adapting the technique to a $c$-ary tree structure. In summary, our algorithm achieves the following:

(Time-space tradeoff). We obtain the same time-space tradeoffs for LPN and LWE as those given by Esser et al. (Crypto 2018), but without resorting to any heuristics. For any $2\leq c\in\mathbb{N}$, our algorithm solves the LPN problem with time/sample complexity $2^{\frac{\log c(1+\epsilon)n}{\log n}}\cdot 2^{O(n^{\frac{1}{1+\epsilon}})}$ and space complexity $2^{\frac{\log c(1+\epsilon)n}{(c-1)\log n}}$, where one can use Grover's quantum algorithm or Dinur et al.'s dissection technique (Crypto 2012) to further accelerate/optimize the time complexity.

(Time/sample optimization). A further adjusted variant of our algorithm solves the LPN problem with sample, time and space complexities all kept at $2^{\frac{(1+\epsilon)n}{\log n}}$ for $\epsilon\to 0^+$, saving factor $2^{\Omega(n^{\frac{1}{1+\epsilon}})}$ in time/sample compared to the original BKW, and the variant of Devadas et al. (TCC 2017). This benefits from a careful analysis of the error distribution among the correlated candidates, and therefore avoids repeating the same process $2^{\Omega(n^{\frac{1}{1+\epsilon}})}$ times on fresh new samples.

(Sample reduction). Our algorithm provides an alternative to Lyubashevsky's BKW variant (RANDOM 2005) for LPN with a restricted number of samples. In particular, given $Q=n^{1+\epsilon}$ (resp., $Q=2^{n^{\epsilon}}$) samples, our algorithm saves a factor of $2^{\Omega(n)/(\log n)^{1-\kappa}}$ (resp., $2^{\Omega(n^{\kappa})}$) for constant $\kappa \to 1^-$ in running time while consuming roughly the same space, compared with Lyubashevsky's algorithm.

We seek to bridge the gaps between theoretical and heuristic LPN solvers, but take a different approach from Devadas et al. (TCC 2017). We exploit weak yet sufficient conditions (e.g., pairwise independence), and the analysis uses only elementary tools (e.g., Chebyshev's inequality).
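For readers unfamiliar with BKW, the toy Python sketch below shows the classical reduction step that the variants above build on: samples colliding on a block of coordinates are XORed so that the block becomes zero, at the cost of accumulating noise. This is only the basic step, not the c-ary tree construction of the paper, and the parameters are purely illustrative.

    import random
    from collections import defaultdict

    # Toy BKW reduction step for LPN: XOR together samples that agree on the
    # coordinates [lo, hi), zeroing that block while summing the noise bits.
    def bkw_reduce_block(samples, lo, hi):
        buckets = defaultdict(list)
        for a, z in samples:
            buckets[a[lo:hi]].append((a, z))
        reduced = []
        for group in buckets.values():
            pivot_a, pivot_z = group[0]
            for a, z in group[1:]:
                reduced.append((tuple(x ^ y for x, y in zip(a, pivot_a)),
                                z ^ pivot_z))
        return reduced

    # Illustrative parameters; real LPN instances are far larger.
    n, b, tau = 12, 4, 0.05
    secret = [random.randrange(2) for _ in range(n)]
    def lpn_sample():
        a = tuple(random.randrange(2) for _ in range(n))
        e = 1 if random.random() < tau else 0
        return a, (sum(x * y for x, y in zip(a, secret)) + e) % 2

    samples = [lpn_sample() for _ in range(1 << 14)]
    for lo in range(0, n - b, b):        # zero every block except the last one
        samples = bkw_reduce_block(samples, lo, lo + b)
    # the surviving samples now involve only the last b secret bits, which can
    # be recovered by majority voting / exhaustive search over 2^b candidates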
Dan Boneh, Wilson Nguyen, Alex Ozdemir
ePrint Report
We construct efficient functional commitments for all bounded size arithmetic circuits. A (function hiding) functional commitment scheme allows a committer to commit to a secret function f and later prove that y = f(x) for public x and y—without revealing any other information about f. Thus, functional commitments allow the operator of a secret process to prove that the process is being applied uniformly to everyone. Possible applications include bail decisions, credit scores, online ranking algorithms, and proprietary software-as-a-service. To build functional commitments, we introduce a new type of protocol: a proof of function relation (PFR) to show that a committed relation is a function. We show that combining a suitable preprocessing zk-SNARK with a PFR yields a secure functional commitment scheme. We then construct efficient PFRs for two popular preprocessing zk-SNARKs, and obtain two functional commitment schemes for arithmetic circuits. These constructions build on polynomial commitments (a special case of functional commitments), so our work shows that polynomial commitments are “complete” for functional commitments.

05 October 2021

Thomas Agrikola, Geoffroy Couteau, Sven Maier
ePrint Report
The goal of anonymous whistleblowing is to publicly disclose a message while hiding the identity of the sender in such a way that, even if someone is suspected of being the sender, this cannot be proven. While many solutions to this problem have been proposed over the years, they all require some form of interaction with trusted or non-colluding parties. In this work, we ask whether this is fundamentally inherent. We put forth the notion of anonymous transfer as a primitive that allows solving this problem without relying on any participating trusted parties.

We initiate the theoretical study of this question, and derive negative and positive results on the existence of such a protocol. We refute the feasibility of asymptotically secure anonymous transfer, where the message will be received with overwhelming probability while at the same time the identity of the sender remains hidden with overwhelming probability. On the other hand, resorting to fine-grained cryptography, we provide a heuristic instantiation (assuming ideal obfuscation) which guarantees that the message will be correctly received with overwhelming probability and the identity of the sender leaks with vanishing probability. Our results provide strong foundations for the study of the possibility of anonymous communications through authenticated channels, an intriguing goal which we believe to be of fundamental interest.
Eik List
ePrint Report
Leakage-resilient authenticated encryption (AE) schemes received considerable attention during the previous decade. Two core security models of bounded and unbounded leakage have evolved, where the latter has been motivated in a very detailed and practice-oriented manner. In that setting, designers often build schemes based on (tweakable) block ciphers due to the small state size, such as the recent two-pass AE scheme TEDT from TCHES 1/2020. TEDT is interesting due to its high security guarantees of O(n - log(n^2))-bit integrity under leakage and similar AE security in the black-box setting. However, a detail limited it to providing only n/2-bit privacy under leakage. In this work, we extend TEDT to TEDT2 in three aspects with the help of a tweakable block cipher with a 3n-bit tweakey: we (1) adopt the idea from the design team of Romulus of replacing TEDT's previous internal hash function with Naito's MDPH, (2) move the nonce from the hash to the tag-generation function, both for more efficiency, and (3) strengthen the security of the encryption to obtain beyond-birthday-bound security also under leakage.
Luk Bettale, Simon Montoya, Guénaël Renault
ePrint Report
The NIST selection process for standardizing Post-Quantum Cryptography Mechanisms is currently running. Many papers have already studied their theoretical security, but their resistance in deployed devices has not been much investigated so far. In particular, fault attacks are a serious threat to algorithms implemented in embedded devices. One particularly powerful technique is to use safe-error attacks. Such attacks exploit the fact that a specific fault may or may not lead to a faulty output depending on a secret value. In this paper, we investigate the resistance of various Post-Quantum candidate algorithms against such attacks.
Dongxi Liu
ePrint Report
We propose a new hard problem, called the Embedded Multilayer Equations (eMLE) problem in this paper. An example of eMLE, with one secret variable x and three layers, is given below.

6268 = 57240 * x + (1248 * x + (9 * x mod 16) mod 2053) mod 65699

In this example, the eMLE problem is to find x from the above equation. The eMLE instances in this paper have the same number of variables and equations. The hardness of the eMLE problem lies in its layered structure. Without knowing the eMLE value of the lower layer (i.e., the layer with modulus 2053), the top layer (i.e., the layer with modulus 65699) has many candidate solutions; the adversary has to search the solution space for a few valid ones. A lower bound on the number of searches is proven in the paper, together with the expected number of valid solutions. The hardness of eMLE can be increased by adding more layers, without changing the number of variables and equations; no existing NP-complete problem has this feature. Based on the hardness of eMLE, a post-quantum signature scheme, eMLE-Sig, is constructed. Compared with all existing signature schemes (conventional and post-quantum), eMLE-Sig might be the simplest to understand, analyze, instantiate, and implement. At a security level above 128 bits, five configurations are provided; all of them have keys and signatures smaller than RSA keys and signatures (above 380 bytes) at the 128-bit security level. The smallest configuration has two variables and three layers, with 84.1/52.2 bytes for the private/public key and 168.4 bytes for signatures.
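As a sanity check on the worked example, the Python snippet below evaluates the three layers (assuming the grouping described above: innermost mod 16, then mod 2053, then mod 65699) and brute-forces a range of candidate x. The search bound is an arbitrary illustrative choice; at real parameter sizes this kind of exhaustive search is exactly what the paper's lower bound addresses.

    # Assumed layering of the worked example:
    #   layer0 = 9*x mod 16
    #   layer1 = (1248*x + layer0) mod 2053
    #   value  = (57240*x + layer1) mod 65699, which should equal 6268
    def emle_value(x):
        layer0 = (9 * x) % 16
        layer1 = (1248 * x + layer0) % 2053
        return (57240 * x + layer1) % 65699

    # Generic brute force over an illustrative range; the intended solution may
    # of course lie outside it, and real instances make this search infeasible.
    candidates = [x for x in range(200000) if emle_value(x) == 6268]
    print(candidates)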
Zeyu Liu, Daniele Micciancio, Yuriy Polyakov
ePrint Report
A comparison of two encrypted numbers is an important operation needed in many machine learning applications, for example, decision tree or neural network inference/training. An efficient instantiation of this operation in the context of fully homomorphic encryption (FHE) can be challenging, especially when a relatively high precision is sought. The conventional FHE way of evaluating the comparison operation, which is based on the sign function evaluation using FHEW/TFHE bootstrapping, can only support very small precision (practically limited to 5 bits or so). For higher precision, the runtime complexity scales linearly with the ciphertext (plaintext) modulus (i.e., exponentially with the modulus bit size). We propose sign function evaluation algorithms that scale logarithmically with the ciphertext (plaintext) modulus, enabling the support of large-precision comparison in practice. Our sign evaluation algorithms are based on an iterative use of homomorphic floor function algorithms, which are also derived in our work. Further, we generalize our procedures for floor function evaluation to arbitrary function evaluation, which can be used to support both small plaintext moduli (directly) and larger plaintext moduli (by using a homomorphic digit decomposition algorithm, also suggested in our work). We implement all these algorithms using the PALISADE lattice cryptography library, introducing several implementation-specific optimizations along the way, and discuss our experimental results.
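The digit-decomposition idea can be illustrated in the clear (this is only a plaintext analogue, not the homomorphic algorithms of the paper): to compare two numbers under a large modulus, split them into small base-B digits and compare digit by digit, so that each elementary step only touches a digit-sized domain of the kind a single small-precision bootstrap can handle.

    # Plaintext analogue of digit decomposition: compare two non-negative
    # integers below B**ndig by scanning their base-B digits, most
    # significant first. Each elementary step involves only values < B.
    def digits(x, B, ndig):
        ds = []
        for _ in range(ndig):
            ds.append(x % B)
            x //= B
        return ds[::-1]

    def compare(a, b, B, ndig):
        for da, db in zip(digits(a, B, ndig), digits(b, B, ndig)):
            if da != db:
                return -1 if da < db else 1
        return 0

    print(compare(123456, 123789, B=32, ndig=4))   # -1, since 123456 < 123789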
Dakshita Khurana, Akshayaram Srinivasan
ePrint Report
Recent exciting breakthroughs, starting with the work of Chattopadhyay and Zuckerman (STOC 2016) have achieved the first two-source extractors that operate in the low min-entropy regime. Unfortunately, these constructions suffer from non-negligible error, and reducing the error to negligible remains an important open problem. In recent work, Garg, Kalai, and Khurana (GKK, Eurocrypt 2020) investigated a meaningful relaxation of this problem to the computational setting, in the presence of a common random string (CRS). In this relaxed model, their work built explicit two-source extractors for a restricted class of unbalanced sources with min-entropy $n^{\gamma}$ (for some constant $\gamma$) and negligible error, under the sub-exponential DDH assumption.

In this work, we investigate whether computational extractors in the CRS model can be applied to more challenging environments. Specifically, we study network extractor protocols (Kalai et al., FOCS 2008) and extractors for adversarial sources (Chattopadhyay et al., STOC 2020) in the CRS model. We observe that these settings require extractors that work well for balanced sources, making the GKK results inapplicable. We remedy this situation by obtaining the following results, all of which are in the CRS model and assume the sub-exponential hardness of DDH.

- We obtain ``optimal'' computational two-source and non-malleable extractors for balanced sources: requiring both sources to have only poly-logarithmic min-entropy, and achieving negligible error. To obtain this result, we perform a tighter and arguably simpler analysis of the GKK extractor.

- We obtain a single-round network extractor protocol for poly-logarithmic min-entropy sources that tolerates an optimal number of adversarial corruptions. Prior work in the information-theoretic setting required sources with high min-entropy rates, and in the computational setting had round complexity that grew with the number of parties, required sources with linear min-entropy, and relied on exponential hardness (albeit without a CRS).

- We obtain an ``optimal'' {\em adversarial source extractor} for poly-logarithmic min-entropy sources, where the number of honest sources is only 2 and each corrupted source can depend on either one of the honest sources. Prior work in the information-theoretic setting had to assume a large number of honest sources.
Ilia Iliashenko, Christophe Nègre, Vincent Zucca
ePrint Report
Fully Homomorphic Encryption (FHE) gives the ability to evaluate any function over encrypted data. However, despite numerous improvements during the last decade, the computational overhead caused by homomorphic computations is still substantial. As a consequence, optimizing the way computations are performed homomorphically remains fundamental. Several popular FHE schemes such as BGV and BFV encode their data, and thus perform their computations, in finite fields. In this work, we study and exploit algebraic relations occurring in prime characteristic to speed up the homomorphic evaluation of several functions over prime fields.

More specifically, we give several examples of unary functions: "modulo", "is power of $b$", "Hamming weight" and "Mod2'", whose homomorphic evaluation complexity over $\mathbb{F}_p$ can be reduced from the generic bound of $\sqrt{2p} + \mathcal{O}(\log(p))$ homomorphic multiplications to $\sqrt{p} + \mathcal{O}(\log(p))$, $\mathcal{O}(\log (p))$, $\mathcal{O}(\sqrt{p/\log (p)})$ and $\mathcal{O}(\sqrt{p/(\log (p))})$ respectively. Additionally, we provide a proof of a recent claim regarding the structure of the polynomial interpolation of the "less-than" bivariate function, which confirms that this function can be evaluated in $2p-6$ homomorphic multiplications instead of $3p-5$ over $\mathbb{F}_p$ for $p\geq 5$.
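The generic approach behind such bounds is to evaluate a unary function over $\mathbb{F}_p$ via its interpolating polynomial. The Python sketch below (plaintext only, without the paper's optimizations, and with helper names of our own) computes that polynomial by Lagrange interpolation for a toy "is power of 2" function over a small prime and verifies it on all inputs.

    # Coefficients (ascending degree) of the unique polynomial of degree < p
    # interpolating the unary function f on all of F_p (Lagrange interpolation).
    def interpolate_unary(f, p):
        coeffs = [0] * p
        for a in range(p):
            num, denom = [1], 1              # numerator poly, denominator scalar
            for b in range(p):
                if b == a:
                    continue
                new = [0] * (len(num) + 1)
                for i, c in enumerate(num):  # multiply num by (X - b)
                    new[i + 1] = (new[i + 1] + c) % p
                    new[i] = (new[i] - b * c) % p
                num = new
                denom = (denom * (a - b)) % p
            scale = f(a) * pow(denom, -1, p) % p
            for i, c in enumerate(num):
                coeffs[i] = (coeffs[i] + scale * c) % p
        return coeffs

    def evaluate(coeffs, x, p):              # Horner evaluation mod p
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    p = 17
    f = lambda a: 1 if a in (1, 2, 4, 8, 16) else 0   # toy "is power of 2"
    coeffs = interpolate_unary(f, p)
    assert all(evaluate(coeffs, x, p) == f(x) for x in range(p))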
Aayush Jain, Huijia Lin, Amit Sahai
ePrint Report
In this work, we study what minimal sets of assumptions suffice for constructing indistinguishability obfuscation ($i\mathcal{O}$). We prove:

{\bf Theorem} (Informal): Assume sub-exponential security of the following assumptions:

- the Learning Parity with Noise ($\mathsf{LPN}$) assumption over general prime fields $\mathbb{F}_p$ with polynomially many $\mathsf{LPN}$ samples and error rate $1/k^\delta$, where $k$ is the dimension of the $\mathsf{LPN}$ secret, and $\delta>0$ is any constant;

- the existence of a Boolean Pseudo-Random Generator ($\mathsf{PRG}$) in $\mathsf{NC}^0$ with stretch $n^{1+\tau}$, where $n$ is the length of the $\mathsf{PRG}$ seed, and $\tau>0$ is any constant;

- the Decision Linear ($\mathsf{DLIN}$) assumption on symmetric bilinear groups of prime order.

Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion resistant public-key functional encryption for all polynomial-size circuits.

This removes the reliance on the Learning With Errors (LWE) assumption from the recent work of [Jain, Lin, Sahai STOC'21]. As a consequence, we obtain the first fully homomorphic encryption scheme that does not rely on any lattice-based hardness assumption. Our techniques feature a new notion of randomized encoding called Preprocessing Randomized Encoding (PRE) that, essentially, can be computed in the exponent of pairing groups. When combined with other new techniques, PRE gives a much more streamlined construction of $i\mathcal{O}$ while still maintaining reliance only on well-studied assumptions.
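As a concrete (and hedged) illustration of the second assumption, Goldreich-style local PRGs are the standard candidates for Boolean PRGs in $\mathsf{NC}^0$ with polynomial stretch: each output bit is a fixed constant-arity predicate of a few seed bits. The predicate and the random choice of 5-subsets below are illustrative candidates from the literature, not constructions from this paper.

    import random

    # Goldreich-style local PRG sketch: every output bit reads only 5 seed bits
    # and applies the XOR-AND predicate x1 ^ x2 ^ x3 ^ (x4 & x5). Illustrative
    # candidate only; the theorem merely assumes that *some* NC^0 PRG with
    # stretch n^(1+tau) exists.
    def local_prg(seed, subsets):
        return [seed[i1] ^ seed[i2] ^ seed[i3] ^ (seed[i4] & seed[i5])
                for (i1, i2, i3, i4, i5) in subsets]

    n, tau = 256, 0.25
    m = int(n ** (1 + tau))                       # output length n^{1+tau} > n
    subsets = [tuple(random.sample(range(n), 5)) for _ in range(m)]
    seed = [random.randrange(2) for _ in range(n)]
    output = local_prg(seed, subsets)
    print(len(seed), "->", len(output))           # 256 -> 1024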
Thomas Pornin
ePrint Report
Lossless compression algorithms such as DEFLATE strive to reliably process arbitrary inputs, while achieving compressed sizes as low as possible for commonly encountered data inputs. It is well-known that it is mathematically impossible for a compression algorithm to simultaneously achieve non-trivial compression on some inputs (i.e. compress these inputs into strictly shorter outputs) and to never expand any other input (i.e. guaranteeing that all inputs will be compressed into an output which is no longer than the input); this is a direct application of the "pigeonhole principle". Despite their mathematical impossibility, we show in this paper how to build such paradoxical compression and decompression algorithms, with the aid of some tools from cryptography, notably verifiable delay functions, and, of course, by slightly cheating.
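The counting behind the pigeonhole argument is short: there are $2^n$ inputs of length $n$ but only $\sum_{i=0}^{n-1} 2^i = 2^n - 1$ outputs of length strictly less than $n$, so any lossless (injective) compressor that maps some input to a strictly shorter output must map some other input to an output at least as long as the input itself.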