International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

03 June 2019

Nico Döttling, Sanjam Garg, Yuval Ishai, Giulio Malavolta, Tamer Mour, Rafail Ostrovsky
ePrint Report
We introduce a new primitive, called trapdoor hash functions (TDH), which are hash functions $\mathsf{H}: \{0,1\}^n \rightarrow \{0,1\}^{\mathrm{sec}}$ with additional trapdoor function-like properties. Specifically, given an index $i\in[n]$, TDHs allow for sampling an encoding key $\textrm{ek}$ (that hides $i$) along with a corresponding trapdoor. Furthermore, given $\mathsf{H}(x)$, a hint value $\mathsf{E}(\textrm{ek},x)$, and the trapdoor corresponding to $\textrm{ek}$, the $i^{th}$ bit of $x$ can be efficiently recovered. In this setting, one of our main questions is: How small can the hint value $\mathsf{E}(\textrm{ek},x)$ be? We obtain constructions where the hint is only one bit long based on DDH, QR, DCR, or LWE.
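
To fix the syntax, here is a toy and deliberately insecure Python sketch of the TDH interface; all names are ours, and the one-bit hint below simply leaks the $i^{th}$ bit outright, whereas the actual DDH/QR/DCR/LWE constructions hide $i$ inside $\textrm{ek}$ and reveal nothing beyond what the trapdoor decodes.

    # Toy, deliberately INSECURE instantiation of the TDH syntax, only to make
    # the interface and correctness condition concrete: here the one-bit hint
    # leaks x's i-th bit outright, and ek fails to hide i. Real constructions
    # achieve both hiding properties. All names are ours, not the paper's.
    import hashlib

    def bit(x: bytes, i: int) -> int:
        return (x[i // 8] >> (7 - i % 8)) & 1

    def tdh_hash(x: bytes) -> bytes:
        """H: {0,1}^n -> {0,1}^sec (SHA-256 as a stand-in compressing hash)."""
        return hashlib.sha256(x).digest()

    def tdh_gen(i: int):
        """Sample (ek, td) for index i; a real scheme makes ek hide i."""
        return {"index": i}, {"index": i}

    def tdh_encode(ek, x: bytes) -> int:
        """The one-bit hint E(ek, x); toy version: just the i-th bit of x."""
        return bit(x, ek["index"])

    def tdh_decode(td, digest: bytes, hint: int) -> int:
        """Recover the i-th bit of x from H(x), the hint, and the trapdoor."""
        return hint  # the toy hint already is the bit

    x = b"example input"
    ek, td = tdh_gen(5)
    assert tdh_decode(td, tdh_hash(x), tdh_encode(ek, x)) == bit(x, 5)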

This primitive opens a floodgate of applications for low-communication secure computation.

We mainly focus on two-message protocols between a receiver and a sender, with private inputs $x$ and $y$, resp., where the receiver should learn $f(x,y)$. We wish to optimize the (download) rate of such protocols, namely the asymptotic ratio between the size of the output and the sender's message. Using TDHs, we obtain:

1. The first protocols for (two-message) rate-1 string OT based on DDH, QR, or LWE. This has several useful consequences, such as:

(a) The first constructions of PIR with communication cost poly-logarithmic in the database size based on DDH or QR. These protocols are in fact rate-1 when considering block PIR.

(b) The first constructions of a semi-compact homomorphic encryption scheme for branching programs, where the encrypted output grows only with the program length, based on DDH or QR.

(c) The first constructions of lossy trapdoor functions with input to output ratio approaching 1 based on DDH, QR or LWE.

(d) The first constant-rate LWE-based construction of a 2-message ``statistically sender-private'' OT protocol in the plain model.

2. The first rate-1 protocols (under any assumption) for $n$ parallel OTs and matrix-vector products from DDH, QR or LWE.

We further consider the setting where $f$ evaluates a RAM program $y$ with running time $T\ll |x|$ on $x$. We obtain the first protocols with communication sublinear in the size of $x$, namely $T\cdot\sqrt{|x|}$ or $T\cdot\sqrt[3]{|x|}$, based on DDH or pairings (and correlated-input secure hash functions), respectively.
F.L. Tiplea, S. Iftene, G. Teseleanu, A.-M. Nica
ePrint Report
We develop exact formulas for the distribution of quadratic residues and non-residues in sets of the form $a+X=\{(a+x)\bmod n\mid x\in X\}$, where $n$ is a prime or the product of two primes and $X$ is a subset of integers with given Jacobi symbols modulo prime factors of $n$. We then present applications of these formulas to Cocks' identity-based encryption scheme and statistical indistinguishability.
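
As a small numerical companion (with our own illustrative parameters), the Python snippet below counts quadratic residues in a shifted set $(a+X)\bmod n$ for a prime $n$, using Euler's criterion.

    # Count quadratic residues in (a + X) mod p for a prime p, where X is the
    # set of all QRs mod p. Parameters are our own, purely for illustration.
    def is_qr(x: int, p: int) -> bool:
        """Quadratic residue test modulo an odd prime p (Euler's criterion)."""
        x %= p
        return x != 0 and pow(x, (p - 1) // 2, p) == 1

    p = 101
    X = [x for x in range(1, p) if is_qr(x, p)]      # all QRs mod p
    for a in (0, 1, 2):
        shifted = [(a + x) % p for x in X]
        print(a, sum(is_qr(y, p) for y in shifted))  # QRs among (a + X) mod p
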
Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, Amit Sahai
ePrint Report
Is it possible to measure a physical object in a way that makes the measurement signals unintelligible to an external observer? Alternatively, can one learn a natural concept by using a contrived training set that makes the labeled examples useless without the line of thought that has led to their choice?

We initiate a study of ``cryptographic sensing'' problems of this type, presenting definitions, positive and negative results, and directions for further research.
Rishab Goyal, Willy Quach, Brent Waters, Daniel Wichs
ePrint Report
We construct a broadcast and trace scheme (also known as trace and revoke or broadcast, trace and revoke) with $N$ users, where the ciphertext size can be made as low as $O(N^\epsilon)$, for any arbitrarily small constant $\epsilon>0$. This improves on the prior best construction of broadcast and trace under standard assumptions by Boneh and Waters (CCS '06), which had ciphertext size $O(N^{1/2})$. While that construction relied on bilinear maps, ours uses a combination of the learning with errors (LWE) assumption and bilinear maps.

Recall that, in both broadcast encryption and traitor-tracing schemes, there is a collection of $N$ users, each of which gets a different secret key $\textrm{sk}_i$. In broadcast encryption, it is possible to create ciphertexts targeted to a subset $S \subseteq [N]$ of the users such that only those users can decrypt it correctly. In a traitor tracing scheme, if a subset of users gets together and creates a decoder box $D$ that is capable of decrypting ciphertexts, then it is possible to trace at least one of the users responsible for creating $D$. A broadcast and trace scheme intertwines the two properties, in a way that results in more than just their union. In particular, it ensures that if a decoder $D$ is able to decrypt ciphertexts targeted toward a set $S$ of users, then it should be possible to trace one of the users in the set $S$ responsible for creating $D$, even if other users outside of $S$ also participated. By now, we have essentially optimal broadcast encryption (Boneh, Gentry, Waters CRYPTO '05) under bilinear maps and traitor tracing (Goyal, Koppula, Waters STOC '18) under LWE, where the ciphertext size is at most poly-logarithmic in $N$. The main contribution of our paper is to carefully combine LWE and bilinear-map based components, and get them to interact with each other, to achieve broadcast and trace.
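
An interface-only sketch (our own names; bodies intentionally abstract, with no concrete LWE or bilinear-map construction implied) may help fix the syntax described above.

    # Interface-only sketch of broadcast-and-trace matching the description
    # above. Names are ours; method bodies are intentionally left abstract.
    from typing import Callable, List, Set, Tuple

    class BroadcastAndTrace:
        def setup(self, n_users: int) -> Tuple[object, List[object]]:
            """Output a public key and per-user secret keys sk_1..sk_N."""
            raise NotImplementedError

        def encrypt(self, pk, target_set: Set[int], message: bytes) -> bytes:
            """Ciphertext decryptable exactly by the users in target_set."""
            raise NotImplementedError

        def decrypt(self, sk_i, ct: bytes) -> bytes:
            raise NotImplementedError

        def trace(self, pk, target_set: Set[int],
                  decoder: Callable[[bytes], bytes]) -> List[int]:
            """Given a decoder that decrypts ciphertexts aimed at target_set,
            output at least one user in target_set who helped build it."""
            raise NotImplementedError
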
Giulio Malavolta, Sri Aravinda Krishnan Thyagarajan
ePrint Report
Time-lock puzzles allow one to encrypt messages for the future, by efficiently generating a puzzle with a solution $s$ that remains hidden until time $T$ has elapsed. The solution is required to be concealed from the eyes of any algorithm running in (parallel) time less than $T$.

We put forth the concept of \emph{homomorphic time-lock puzzles}, where one can evaluate functions over puzzles without solving them, i.e., one can manipulate a set of puzzles with solutions $(s_1, \dots, s_n)$ to obtain a puzzle that solves to $f(s_1, \ldots, s_n)$, for any function $f$. We propose candidate constructions under concrete cryptographic assumptions for different classes of functions. Then we show how homomorphic time-lock puzzles overcome the limitations of classical time-lock puzzles by proposing new protocols for applications of interest, such as e-voting, multi-party coin flipping, and fair contract signing.
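
For background, the snippet below sketches the classical RSW-style time-lock puzzle that homomorphic time-lock puzzles generalize: the generator shortcuts the computation using the factorization of $N$, while a solver must perform $T$ sequential squarings. Parameters are toy and wildly insecure; this only shows the mechanics.

    # Toy RSW-style time-lock puzzle: fast generation via phi(N), slow solving
    # via T sequential squarings. Assumes gcd(x, N) = 1, which holds with
    # overwhelming probability for random x. Toy-sized, insecure parameters.
    import random

    p, q = 1_000_000_007, 998_244_353     # well-known primes; toy RSA modulus
    N, phi = p * q, (p - 1) * (q - 1)
    T = 10_000                            # "time" = number of squarings

    def generate(s: int):
        x = random.randrange(2, N)
        shortcut = pow(2, T, phi)         # fast path: exponent mod phi(N)
        return x, (s + pow(x, shortcut, N)) % N

    def solve(x: int, z: int) -> int:
        y = x
        for _ in range(T):                # slow path: T sequential squarings
            y = y * y % N
        return (z - y) % N

    x, z = generate(424242)
    assert solve(x, z) == 424242
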
Benny Pinkas, Mike Rosulek, Ni Trieu, Avishay Yanai
ePrint Report
We describe a novel approach for two-party private set intersection (PSI) with semi-honest security. Compared to existing PSI protocols, ours has a more favorable balance between communication and computation. Specifically, our protocol has the lowest monetary cost of any known PSI protocol, when run over the Internet using cloud-based computing services (taking into account current rates for CPU + data). On slow networks (e.g., 10Mbps) our protocol is actually the fastest.

Our novel underlying technique is a variant of oblivious transfer (OT) extension that we call sparse OT extension. Conceptually it can be thought of as a communication-efficient multipoint oblivious PRF evaluation. Our sparse OT technique relies heavily on manipulating high-degree polynomials over large finite fields (i.e. elements whose representation requires hundreds of bits). We introduce extensive algorithmic and engineering improvements for interpolation and multi-point evaluation of such polynomials, which we believe will be of independent interest.
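
To ground the kind of polynomial arithmetic involved, here is a naive $O(n^2)$ Lagrange interpolation with Horner evaluation over a 61-bit prime field (our own toy; the paper's contribution is precisely to do interpolation and multi-point evaluation of such polynomials much faster).

    # Naive Lagrange interpolation over a prime field; the paper replaces this
    # quadratic approach with much faster algorithms over far larger fields.
    P = (1 << 61) - 1                          # a Mersenne prime field

    def interpolate(points):
        """Coefficients (low degree first) of the poly through points mod P."""
        n = len(points)
        coeffs = [0] * n
        for i, (xi, yi) in enumerate(points):
            basis, denom = [1], 1              # build l_i(x) incrementally
            for j, (xj, _) in enumerate(points):
                if j == i:
                    continue
                # multiply basis polynomial by (x - xj)
                basis = ([(-xj * basis[0]) % P]
                         + [(basis[k - 1] - xj * basis[k]) % P
                            for k in range(1, len(basis))]
                         + [basis[-1]])
                denom = denom * (xi - xj) % P
            scale = yi * pow(denom, P - 2, P) % P
            coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
        return coeffs

    def evaluate(coeffs, x):
        acc = 0
        for c in reversed(coeffs):             # Horner's rule
            acc = (acc * x + c) % P
        return acc

    pts = [(3, 10), (7, 2), (11, 5)]
    f = interpolate(pts)
    assert all(evaluate(f, x) == y for x, y in pts)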

Finally, we present an extensive empirical comparison of state-of-the-art PSI protocols in several application scenarios and along several dimensions of measurement: running time, communication, peak memory consumption, and — arguably the most relevant metric for practice — monetary cost.
Igor Semaev
ePrint Report
The study of non-linearity (linearity) of Boolean functions was initiated by Rothaus in 1976. The classical non-linearity of a Boolean function is the minimum Hamming distance of its truth table to those of affine functions. In this note we introduce new "multidimensional" non-linearity parameters $(N_f,H_f)$ for conventional and vectorial Boolean functions $f$ with $m$ coordinates in $n$ variables. The classical non-linearity may be treated as a 1-dimensional parameter in the new definition. $r$-dimensional parameters for $r\geq 2$ are relevant to possible multidimensional extensions of the Fast Correlation Attack on stream ciphers and Linear Cryptanalysis of block ciphers. In addition, we introduce a notion of optimal vectorial Boolean functions relevant to the new parameters. For $r=1$ and even $n\geq 2m$, optimal Boolean functions are exactly perfect nonlinear functions (generalizations of Rothaus' bent functions) defined by Nyberg in 1991. By a computer search we find that this property holds for $r=2, m=1, n=4$ too. That remains an open problem for larger $n,m$ and $r\geq 2$. The definitions may be easily extended to $q$-ary functions.
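
As a worked example of the classical one-dimensional parameter, the brute-force Python snippet below (our own illustration) computes the non-linearity of the 4-variable bent function $f(x) = x_1x_2 \oplus x_3x_4$, which attains the maximum value $2^{n-1} - 2^{n/2-1} = 6$ for $n = 4$.

    # Classical non-linearity by brute force: minimum Hamming distance from a
    # truth table to every affine function a.x + c, feasible for small n.
    def nonlinearity(f, n):
        """f: tuple of 2^n output bits, indexed by the integer input x."""
        best = 1 << n
        for a in range(1 << n):                # linear part a.x
            for c in (0, 1):                   # affine constant
                dist = sum(f[x] != (bin(a & x).count("1") + c) % 2
                           for x in range(1 << n))
                best = min(best, dist)
        return best

    n = 4  # bent function f(x) = x1*x2 XOR x3*x4
    f = tuple((((x >> 3) & (x >> 2)) ^ ((x >> 1) & x)) & 1
              for x in range(1 << n))
    assert nonlinearity(f, n) == 6
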
Ariel Hamlin, Justin Holmgren, Mor Weiss, Daniel Wichs
ePrint Report
We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database $D$, anybody can efficiently compute an encryption of $P(D)$ for an arbitrary RAM program $P$. The running time over the encrypted data should be as close as possible to the worst case running time of $P$, which may be sub-linear in the data size.

A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by $P$. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively ``rewinding'' any mechanism for making memory accesses oblivious.

We identify a necessary prerequisite towards constructing RAM-FHE that we call \emph{rewindable oblivious RAM} (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using \emph{symmetric-key doubly efficient PIR (SK-DEPIR)} (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC '17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the data size $N$. Our basic scheme is single-hop, but we also extend it to get multi-hop RAM-FHE with overhead $N^\epsilon$ for arbitrarily small $\epsilon>0$.

We view our work as the first evidence that RAM-FHE is likely to exist.
Cody Freitag, Ilan Komargodski, Rafael Pass
ePrint Report
We introduce the notion of non-uniformly sound certificates: succinct single-message (unidirectional) argument systems that satisfy a ``best-possible security'' against non-uniform polynomial-time attackers. In particular, no polynomial-time attacker with $s$ bits of non-uniform advice can find significantly more than $s$ accepting proofs for false statements. Our first result is a construction of non-uniformly sound certificates for all NP in the random oracle model, where the attacker's advice can depend arbitrarily on the random oracle.

We next show that the existence of non-uniformly sound certificates for P (and collision resistant hash functions) yields a public-coin constant-round fully concurrent zero-knowledge argument for NP.
Junqing Gong, Brent Waters, Hoeteck Wee
ePrint Report
We present the first attribute-based encryption (ABE) scheme for deterministic finite automaton (DFA) based on static assumptions in bilinear groups; this resolves an open problem posed by Waters (CRYPTO 2012). Our main construction achieves selective security against unbounded collusions under the standard $k$-linear assumption in prime-order bilinear groups, whereas previous constructions all rely on $q$-type assumptions.
Shweta Agrawal, Monosij Maitra, Shota Yamada
ePrint Report
Constructing Attribute Based Encryption (ABE) [SW05] for uniform models of computation from standard assumptions is an important problem, about which very little is known. The only known ABE schemes in this setting that i) avoid reliance on multilinear maps or indistinguishability obfuscation, ii) support unbounded length inputs and iii) permit unbounded key requests to the adversary in the security game, are by Waters from Crypto 2012 [Wat12] and its variants. Waters provided the first ABE for Deterministic Finite Automata (DFA) satisfying the above properties, from a parametrized or ``q-type'' assumption over bilinear maps. Generalizing this construction to Nondeterministic Finite Automata (NFA) was left as an explicit open problem in the same work, and has seen no progress to date. Constructions from other assumptions, such as more standard pairing-based assumptions or lattice-based assumptions, have also proved elusive.

In this work, we construct the first symmetric key attribute based encryption scheme for nondeterministic finite automata (NFA) from the learning with errors (LWE) assumption. Our scheme supports unbounded length inputs as well as unbounded length machines. In more detail, secret keys in our construction are associated with an NFA $M$ of unbounded length, ciphertexts are associated with a tuple $(x, m)$ where $x$ is a public attribute of unbounded length and $m$ is a secret message bit, and decryption recovers $m$ if and only if $M(x) = 1$.
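
For concreteness, the predicate $M(x) = 1$ that gates decryption is ordinary NFA acceptance; the following toy evaluator (ours, purely illustrative) runs the standard subset simulation.

    # NFA acceptance via subset simulation; this is the predicate M(x) = 1
    # that decryption checks in the scheme described above.
    def nfa_accepts(delta, start, accepting, x) -> bool:
        """delta: dict (state, symbol) -> set of states; x: symbol sequence."""
        states = {start}
        for sym in x:
            states = set().union(*(delta.get((s, sym), set()) for s in states))
        return bool(states & accepting)

    # Toy NFA over {0,1} accepting strings whose second-to-last symbol is 1.
    delta = {(0, "0"): {0}, (0, "1"): {0, 1}, (1, "0"): {2}, (1, "1"): {2}}
    assert nfa_accepts(delta, 0, {2}, "0110")
    assert not nfa_accepts(delta, 0, {2}, "0101")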

Further, we leverage our ABE to achieve (restricted notions of) attribute hiding analogous to the circuit setting, obtaining the first predicate encryption and bounded key functional encryption schemes for NFA from LWE. We achieve machine hiding in the single/bounded key setting to obtain the first reusable garbled NFA from standard assumptions. In terms of lower bounds, we show that secret key functional encryption even for DFAs, with security against unbounded key requests, implies indistinguishability obfuscation (iO) for circuits; this suggests a barrier to achieving full-fledged functional encryption for NFA.
Rishab Goyal, Sam Kim, Nathan Manohar, Brent Waters, David J. Wu
ePrint Report
A software watermarking scheme enables users to embed a message or mark within a program while preserving its functionality. Moreover, it is difficult for an adversary to remove a watermark from a marked program without corrupting its behavior. Existing constructions of software watermarking from standard assumptions have focused exclusively on watermarking pseudorandom functions (PRFs).

In this work, we study watermarking public-key primitives such as the signing key of a digital signature scheme or the decryption key of a public-key (predicate) encryption scheme. While watermarking public-key primitives might intuitively seem more challenging than watermarking PRFs, our constructions only rely on simple assumptions. Our watermarkable signature scheme can be built from the minimal assumption of one-way functions while our watermarkable public-key encryption scheme can be built from most standard algebraic assumptions that imply public-key encryption (e.g., factoring, discrete log, or lattice assumptions). Our schemes also satisfy a number of appealing properties: public marking, public mark-extraction, and collusion resistance. Our schemes are the first to simultaneously achieve all of these properties.

The key enabler of our new constructions is a relaxed notion of functionality-preserving. While traditionally, we require that a marked program (approximately) preserve the input/output behavior of the original program, in the public-key setting, preserving the "functionality" does not necessarily require preserving the exact input/output behavior. For instance, if we want to mark a signing algorithm, it suffices that the marked algorithm still output valid signatures (even if those signatures might be different from the ones output by the unmarked algorithm). Similarly, if we want to mark a decryption algorithm, it suffices that the marked algorithm correctly decrypt all valid ciphertexts (but may behave differently from the unmarked algorithm on invalid or malformed ciphertexts). Our relaxed notion of functionality-preserving captures the essence of watermarking and still supports the traditional applications, but provides additional flexibility to enable new and simple realizations of this powerful cryptographic notion.
Andrej Bogdanov, Yuval Ishai, Akshayaram Srinivasan
ePrint Report
We consider the problem of constructing leakage-resilient circuit compilers that are secure against global leakage functions with bounded output length. By global, we mean that the leakage can depend on all circuit wires and output a low-complexity function (represented as a multi-output Boolean circuit) applied on these wires. In this work, we design compilers both in the stateless (a.k.a. single-shot leakage) setting and the stateful (a.k.a. continuous leakage) setting that are unconditionally secure against AC0 leakage and similar low-complexity classes. In the stateless case, we show that the original private circuits construction of Ishai, Sahai, and Wagner (Crypto 2003) is actually secure against AC0 leakage. In the stateful case, we modify the construction of Rothblum (Crypto 2012), obtaining a simple construction with unconditional security. Prior works that designed leakage-resilient circuit compilers against AC0 leakage had to rely either on secure hardware components (Faust et al., Eurocrypt 2010, Miles-Viola, STOC 2013) or on (unproven) complexity-theoretic assumptions (Rothblum, Crypto 2012).
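
For intuition, the snippet below shows the share-based encoding at the heart of the Ishai-Sahai-Wagner private-circuits construction referenced above: a wire bit is split into random XOR shares, and the standard ISW gadget computes an AND on shared bits. This only illustrates the encoding and one gadget; the AC0-leakage security claims concern the full compilers.

    # XOR secret sharing of wire bits plus the standard ISW AND gadget.
    import secrets

    def share(bit: int, n: int):
        """Split a bit into n XOR shares."""
        shares = [secrets.randbelow(2) for _ in range(n - 1)]
        shares.append(bit ^ (sum(shares) % 2))
        return shares

    def reconstruct(shares):
        return sum(shares) % 2

    def isw_and(a, b):
        """The ISW multiplication (AND) gadget on share vectors a and b."""
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbelow(2)
                r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
        c = []
        for i in range(n):
            ci = a[i] & b[i]
            for j in range(n):
                if j != i:
                    ci ^= r[i][j]
            c.append(ci)
        return c

    for x in (0, 1):
        for y in (0, 1):
            assert reconstruct(isw_and(share(x, 3), share(y, 3))) == (x & y)
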
Vipul Goyal, Aayush Jain, Amit Sahai
ePrint Report
In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument systems (NIZK). We show that any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate satisfying $\delta_s+\delta_z=1-\epsilon$, for any constant $\epsilon>0$, can be turned into a computationally sound and zero-knowledge candidate, assuming only subexponentially secure public-key encryption.

We develop novel techniques to leverage the leakage simulation lemma (Jetchev-Pietrzak, TCC 2014) to argue amplification. A crucial component of our result is a new notion for secret sharing $\mathsf{NP}$ instances, which we believe may be of independent interest.

To achieve this result we analyze the following two transformations:

- Parallel Repetition: We show that using parallel repetition, any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate can be turned into a (roughly) $\delta_s^n$-sound and $(1-(1-\delta_z)^n)$-zero-knowledge candidate. Here $n$ is the repetition parameter.

- MPC based Repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. We show that using this transformation, any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate can be turned into a (roughly) $(1-(1-\delta_s)^n)$-sound and $2\cdot\delta_z^n$-zero-knowledge candidate.

Then we show that using these transformations in a zig-zag fashion we can obtain our result. Finally, we also present a simple transformation which directly turns any NIZK candidate satisfying $\delta_s,\delta_z<1/3 -1/\textrm{poly}(\lambda)$ to a secure one.
Rio Lavigne, Andrea Lincoln, Virginia Vassilevska Williams
ePrint Report
Cryptography is largely based on unproven assumptions, which, while believable, might fail. Notably, if $P = NP$, or if we live in Pessiland, then all current cryptographic assumptions will be broken. A compelling question is whether any interesting cryptography might exist in Pessiland.

A natural approach to tackle this question is to base cryptography on an assumption from fine-grained complexity. Ball, Rosen, Sabin, and Vasudevan [BRSV'17] attempted this, starting from popular hardness assumptions, such as the Orthogonal Vectors (OV) Conjecture. They obtained problems that are hard on average, assuming that OV and other problems are hard in the worst case. They obtained proofs of work, and hoped to use their average-case hard problems to build a fine-grained one-way function. Unfortunately, they proved that constructing one using their approach would violate a popular hardness hypothesis. This motivates the search for other fine-grained average-case hard problems.

The main goal of this paper is to identify sufficient properties for a fine-grained average-case assumption that imply cryptographic primitives such as fine-grained public key cryptography (PKC). Our main contribution is a novel construction of a cryptographic key exchange, together with the definition of a small number of relatively weak structural properties, such that if a computational problem satisfies them, our key exchange has provable fine-grained security guarantees, based on the hardness of this problem. We then show that a natural and plausible average-case assumption for the key problem Zero-$k$-Clique from fine-grained complexity satisfies our properties. We also develop fine-grained one-way functions and hardcore bits even under these weaker assumptions.

Where previous works had to assume random oracles or the existence of strong one-way functions to get a key-exchange computable in $O(n)$ time secure against $O(n^2)$ adversaries (see [Merkle'78] and [BGI'08]), our assumptions seem much weaker. Our key exchange has a similar gap between the computation of the honest party and the adversary as prior work, while being non-interactive, implying fine-grained PKC.
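
For reference, the toy sketch below implements the classical Merkle-puzzle key exchange [Merkle'78] mentioned above, in which honest parties do roughly $O(n)$ work while an eavesdropper expects to solve about $n/2$ puzzles; the parameters are far too small to mean anything and the encodings are our own.

    # Toy Merkle-puzzle key exchange: the classical quadratic-gap baseline.
    import hashlib, os, random

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    n = 512
    keys = [os.urandom(16) for _ in range(n)]
    puzzles = []
    for i in range(n):
        sol = os.urandom(2)                       # 16-bit, brute-forceable
        pad = H(sol + b"pad")[:20]
        body = i.to_bytes(4, "big") + keys[i]     # hidden (id, key) pair
        puzzles.append((H(sol), bytes(a ^ b for a, b in zip(body, pad))))
    random.shuffle(puzzles)                       # order reveals nothing

    # Bob solves ONE random puzzle by brute force and announces only the id;
    # an eavesdropper expects to solve ~n/2 puzzles to find the matching one.
    tag, blob = random.choice(puzzles)
    sol = next(g.to_bytes(2, "big") for g in range(1 << 16)
               if H(g.to_bytes(2, "big")) == tag)
    body = bytes(a ^ b for a, b in zip(blob, H(sol + b"pad")[:20]))
    announced_id = int.from_bytes(body[:4], "big")
    assert body[4:] == keys[announced_id]         # Alice's lookup agrees
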
Mihir Bellare, Ruth Ng, Björn Tackmann
ePrint Report
We draw attention to a gap between theory and usage of nonce-based symmetric encryption, under which the way the former treats nonces can result in violation of privacy in the latter. We bridge the gap with a new treatment of nonce-based symmetric encryption that modifies the syntax (decryption no longer takes a nonce), upgrades the security goal (asking that not just messages, but also nonces, be hidden) and gives simple, efficient schemes conforming to the new definitions. We investigate both basic security (holding when nonces are not reused) and advanced security (misuse resistance, providing best-possible guarantees when nonces are reused).
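
To make the syntax change concrete, here is a minimal sketch (ours, in the spirit of the paper's transforms but not its exact schemes) of an encryption whose decryption takes no nonce: the 16-byte nonce is hidden under one AES call on a separate key and carried inside the ciphertext. It requires the pyca/cryptography package.

    # Decryption takes no nonce: the nonce travels hidden in the ciphertext,
    # masked by a single AES (PRP) call under a separate key. Sketch only.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt(k_prp: bytes, k_ae: bytes, nonce16: bytes, msg: bytes) -> bytes:
        # Hide the 16-byte nonce with one AES-ECB block (a PRP on one block),
        # then use its first 12 bytes as the AES-GCM nonce.
        enc = Cipher(algorithms.AES(k_prp), modes.ECB()).encryptor()
        hidden = enc.update(nonce16) + enc.finalize()
        return hidden + AESGCM(k_ae).encrypt(nonce16[:12], msg, None)

    def decrypt(k_prp: bytes, k_ae: bytes, ct: bytes) -> bytes:
        dec = Cipher(algorithms.AES(k_prp), modes.ECB()).decryptor()
        nonce16 = dec.update(ct[:16]) + dec.finalize()
        return AESGCM(k_ae).decrypt(nonce16[:12], ct[16:], None)

    k1, k2, nonce = os.urandom(16), os.urandom(16), os.urandom(16)
    assert decrypt(k1, k2, encrypt(k1, k2, nonce, b"hello")) == b"hello"
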
Shuichi Katsumata, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
ePrint Report
A non-interactive zero-knowledge (NIZK) protocol allows a prover to non-interactively convince a verifier of the truth of the statement without leaking any other information. In this study, we explore shorter NIZK proofs for all NP languages. Our primary interest is NIZK proofs from falsifiable pairing/pairing-free group-based assumptions. Thus far, NIZKs in the common reference string model (CRS-NIZKs) for NP based on falsifiable pairing-based assumptions all require a proof size at least as large as $O(|C| k)$, where $C$ is a circuit computing the NP relation and $k$ is the security parameter. This holds true even for the weaker designated-verifier NIZKs (DV-NIZKs). Notably, constructing a (CRS, DV)-NIZK with proof size achieving an additive-overhead $O(|C|) + poly(k)$, rather than a multiplicative-overhead $|C| \cdot poly(k)$, based on any falsifiable pairing-based assumptions is an open problem.

In this work, we present various techniques for constructing NIZKs with compact proofs, i.e., proofs smaller than $O(|C|) + poly(k)$, and make progress regarding the above situation. Our result is summarized below.

- We construct CRS-NIZK for all NP with proof size $|C| + poly(k)$ from a (non-static) falsifiable Diffie-Hellman (DH) type assumption over pairing groups. This is the first CRS-NIZK to achieve a compact proof without relying on either lattice-based assumptions or non-falsifiable assumptions. Moreover, a variant of our CRS-NIZK satisfies universal composability (UC) in the erasure-free adaptive setting. Although it is limited to NP relations in NC1, the proof size is $|w| \cdot poly(k)$ where $w$ is the witness, and in particular, it matches the state-of-the-art UC-NIZK proposed by Cohen, shelat, and Wichs (EPRINT'18) based on lattices.

- We construct (multi-theorem) DV-NIZKs for NP with proof size $|C|+poly(k)$ from the computational DH assumption over pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH type assumption. Moreover, if we further assume the NP relation to be computable in NC1 and assume hardness of a (non-static) falsifiable DH type assumption over pairing-free groups, the proof size can be made as small as $|w| + poly(k)$.

Another related but independent issue is that all (CRS, DV)-NIZKs require the running time of the prover to be at least $|C|\cdot poly(k)$. Considering that there exist NIZKs with efficient verifiers whose running time is strictly smaller than $|C|$, it is an interesting problem whether we can construct prover-efficient NIZKs. To this end, we construct prover-efficient CRS-NIZKs for NP with compact proofs through a generic construction using laconic functional evaluation schemes (Quach, Wee, and Wichs (FOCS'18)). This is the first NIZK in any model where the running time of the prover is strictly smaller than the time it takes to compute the circuit $C$ computing the NP relation.

Finally, perhaps of an independent interest, we formalize the notion of homomorphic equivocal commitments, which we use as building blocks to obtain the first result, and show how to construct them from pairing-based assumptions.
Zhenzhen Bao, Jian Guo, Eik List
ePrint Report
Distinguishers on round-reduced AES have attracted considerable attention in recent years. Although the number of rounds covered in key-recovery attacks has not increased since then, subspace, yoyo, and multiple-of-$n$ cryptanalysis have advanced the understanding of the cipher's properties. Expectation cryptanalysis is an umbrella term for all forms of statistical analysis that try to identify properties whose expectation differs from that of an ideal primitive. For substitution-permutation networks, integral attacks seem a suitable target for extension since they usually end after a linear layer sums several subcomponents. Building on results by Patarin, Chen et al. observed that the expected number of collisions for a sum of permutations differs slightly from that of the ideal; however, their targets remained lightweight primitives. The present work applies expectation-based distinguishers from a sum of PRPs to round-reduced AES. We show how to extend the well-known 3-round integral distinguisher to expectation distinguishers over 4 and 5 rounds. In contrast to previous expectation distinguishers by Grassi et al., our approach allows us to prepend a round that starts from a diagonal subspace. We demonstrate how the prepended round can be used for key recovery, and show how it can be integrated to form a six-round distinguisher. For all distinguishers, our results are supported by implementations with Cid et al.'s established Small-AES version.
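
As a quick sanity check of the cited collision observation, the toy experiment below (our own illustration, unrelated to the paper's actual distinguishers) compares the average number of output collisions of the XOR of two random 8-bit permutations against a random function; the sum of permutations collides slightly more often in expectation ($N/2$ versus $(N-1)/2$ for $N = 256$).

    # Empirical collision counts: XOR of two random permutations averages
    # N/2 = 128 collisions over an 8-bit domain, vs (N-1)/2 = 127.5 for a
    # random function. Many trials make the half-collision gap visible.
    import random
    from collections import Counter

    def collisions(values):
        return sum(c * (c - 1) // 2 for c in Counter(values).values())

    N, trials = 256, 20000
    sum_perm = rand_func = 0
    for _ in range(trials):
        p = list(range(N)); random.shuffle(p)
        q = list(range(N)); random.shuffle(q)
        sum_perm += collisions([p[x] ^ q[x] for x in range(N)])
        rand_func += collisions([random.randrange(N) for _ in range(N)])
    print(sum_perm / trials, rand_func / trials)   # ~128.0 vs ~127.5
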
Bruce Kallick
ePrint Report
The classic simple substitution cipher is modified by randomly inserting key-defined noise characters into the ciphertext in encryption which are ignored in decryption. Interestingly, this yields a finite-key cipher system with unbounded unicity.
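
A minimal sketch of the idea, with our own arbitrary conventions (the key fixes both the substitution and a disjoint set of noise characters):

    # Substitution cipher with key-defined noise characters: encryption
    # sprinkles in noise at random; decryption simply discards it.
    import random
    import string

    ALPHABET = string.ascii_lowercase

    def make_key(seed: int):
        rng = random.Random(seed)
        perm = list(ALPHABET)
        rng.shuffle(perm)
        sub = dict(zip(ALPHABET, perm))
        noise = "0123456789"            # noise alphabet, disjoint from sub
        return sub, noise

    def encrypt(key, msg: str, p: float = 0.3) -> str:
        sub, noise = key
        out = []
        for ch in msg:
            while random.random() < p:  # insert noise with probability p
                out.append(random.choice(noise))
            out.append(sub[ch])
        return "".join(out)

    def decrypt(key, ct: str) -> str:
        sub, noise = key
        inv = {v: k for k, v in sub.items()}
        return "".join(inv[ch] for ch in ct if ch not in noise)

    key = make_key(7)
    assert decrypt(key, encrypt(key, "attackatdawn")) == "attackatdawn"
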
Steven D. Galbraith, Lukas Zobernig
ePrint Report
We consider the problem of obfuscating programs for fuzzy matching (in other words, testing whether the Hamming distance between an $n$-bit input and a fixed $n$-bit target vector is smaller than some predetermined threshold). This problem arises in biometric matching and other contexts. We present a virtual-black-box (VBB) secure and input-hiding obfuscator for fuzzy matching for Hamming distance, based on certain natural number-theoretic computational assumptions. In contrast to schemes based on coding theory, our obfuscator is based on computational hardness rather than information-theoretic hardness, and can be implemented for a much wider range of parameters. The Hamming distance obfuscator can also be applied to obfuscation of matching under the $\ell_1$ norm on $\mathbb{Z}^n$.
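
In the clear, the functionality being obfuscated is just a thresholded Hamming-distance test; the snippet below (ours) pins down the input/output behavior that the obfuscator must preserve while hiding the target.

    # The fuzzy-matching functionality in the clear: accept an n-bit input
    # iff its Hamming distance to a fixed target is below the threshold. An
    # obfuscator must realize this I/O behavior while hiding the target.
    def fuzzy_match(x: int, target: int, n: int, threshold: int) -> bool:
        dist = bin((x ^ target) & ((1 << n) - 1)).count("1")
        return dist < threshold

    target = 0b10110010
    assert fuzzy_match(0b10110011, target, 8, 2)       # distance 1 < 2
    assert not fuzzy_match(0b01001101, target, 8, 2)   # distance 8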

We also consider obfuscating conjunctions. Conjunctions are equivalent to pattern matching with wildcards, which can be reduced in some cases to fuzzy matching. Our approach does not cover as general a range of parameters as other solutions, but it is much more compact. We study the relation between our obfuscation schemes and other obfuscators and give some advantages of our solution.