International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

27 February 2024

Valerie Gilchrist, Laurane Marco, Christophe Petit, Gang Tang
ePrint Report
The Tensor Isomorphism Problem (TIP) has been shown to be equivalent to the matrix code equivalence problem, making it an interesting candidate on which to build post-quantum cryptographic primitives. These hard problems have already been used in protocol development. One of these, MEDS, is currently in Round 1 of NIST's call for additional post-quantum digital signatures. In this work, we consider the TIP for a special class of tensors. The hardness of the decisional version of this problem is the foundation of a commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (Asiacrypt 2023). We present polynomial-time algorithms for the decisional and computational versions of TIP for special orbits, which implies that the commitment scheme is not secure. The key observations of these algorithms are that these special tensors contain some low-rank points, and their stabilizer groups are not trivial. With these new developments in the security of TIP in mind, we give a new commitment scheme based on the general TIP that is non-interactive, post-quantum, and statistically binding, making no new assumptions. Such a commitment scheme does not currently exist in the literature.
Khai Hanh Tang, Minh Pham, Chan Nam Ngo
ePrint Report
Incremental Verifiable Computation (IVC) allows a prover to prove to a verifier the correct execution of a sequential computation. Recent works focus on improving the universality and efficiency of IVC schemes, which can be categorized into Accumulation-based and Folding-based IVCs, with Folding-based ones being more efficient (due to their deferred proof generation until the final step). Unfortunately, both approaches satisfy only heuristic security, as they model the Random Oracle (RO) as a circuit in their non-constant-depth recursive composition of the base scheme. This drawback is two-fold: to connect consecutive execution steps, the RO is recursively modeled as a circuit during the folding or accumulating process, and it is modeled as a circuit again in the final SNARK wrapper circuit (a common practice in Folding-based IVCs).

We revisit this problem, with a focus on Folding-based IVCs due to their efficiency, and propose detaching the RO invocations from the folding circuit. We can instead accumulate such invocations, yielding the so-called Conditional Folding (CF) Scheme, which overcomes the first drawback. One can consider our CF Scheme a hybrid Folding-Accumulation Scheme with provable security. We provide a non-trivial practical construction for our CF Scheme that is natively parallelizable, which offers great efficiency. We rigorously prove the security of our CF Scheme (also for the case of folding in parallel), and our scheme can be made non-interactive using Fiat-Shamir. Our CF Scheme is generic and does not require a trusted setup. It can be adapted to construct the first IVC for RAM programs, i.e., Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs, which we dub RAMenPaSTA and which can be used to build zero-knowledge virtual machines (zkVMs). Both our CF Scheme and RAMenPaSTA may be of independent research interest.
Naresh Goud Boddu, Vipul Goyal, Rahul Jain, João Ribeiro
ePrint Report
Non-malleable codes are fundamental objects at the intersection of cryptography and coding theory. These codes provide security guarantees even in settings where error correction and detection are impossible, and have found applications to several other cryptographic tasks. One of the strongest and most well-studied adversarial tampering models is $2$-split-state tampering. Here, a codeword is split into two parts which are stored in physically distant servers, and the adversary can then independently tamper with each part using arbitrary functions. This model can be naturally extended to the secret sharing setting with several parties by having the adversary independently tamper with each share. Previous works on non-malleable coding and secret sharing in the split-state tampering model only considered the encoding of classical messages. Furthermore, until recent work by Aggarwal, Boddu, and Jain (IEEE Trans. Inf. Theory 2024 & arXiv 2022), adversaries with quantum capabilities and shared entanglement had not been considered, and it is a priori not clear whether previous schemes remain secure in this model.

In this work, we introduce the notions of split-state non-malleable codes and secret sharing schemes for quantum messages secure against quantum adversaries with shared entanglement. Then, we present explicit constructions of such schemes that achieve low-error non-malleability. More precisely, we construct efficiently encodable and decodable split-state non-malleable codes and secret sharing schemes for quantum messages preserving entanglement with external systems and achieving security against quantum adversaries having shared entanglement with codeword length $n$, any message length at most $n^{\Omega(1)}$, and error $\varepsilon=2^{-{n^{\Omega(1)}}}$. In the easier setting of average-case non-malleability, we achieve efficient non-malleable coding with rate close to $1/11$.
Jeremiah Blocki, Blake Holman, Seunghoon Lee
ePrint Report
The (parallel) classical black pebbling game is a helpful abstraction which allows us to analyze the resources (time, space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$ on a (parallel) computer. In particular, the parallel black pebbling game has been used as a tool to quantify the (in)security of Data-Independent Memory-Hard Functions (iMHFs). Recently Blocki et al. (TCC 2022) introduced the parallel reversible pebbling game as a tool to analyze resource requirements when we additionally require that computation is reversible. Intuitively, the parallel reversible pebbling game extends the classical parallel black pebbling game by imposing restrictions on when pebbles can be removed. By contrast, the classical black pebbling game imposes no restrictions on when pebbles can be removed to free up space. One of the primary motivations of the parallel reversible pebbling game is to provide a tool to analyze the full cost of quantum preimage attacks against an iMHF. However, while there is an extensive line of work analyzing pebbling complexity in the (parallel) black pebbling game, comparatively little is known about the parallel reversible pebbling game.

Our first result is a lower bound of $\Omega\left(N^{1+1/\sqrt{\log N}} \right)$ on the reversible cumulative pebbling cost for a line graph on $N$ nodes. This yields a separation between classical and reversible pebbling costs demonstrating that the reversibility constraint can increase cumulative pebbling costs (and space-time costs) by a multiplicative factor of $\Omega\left(N^{1/\sqrt{\log N}} \right)$ --- the classical pebbling cost (space-time or cumulative) for a line graph is just $\mathcal{O}(N)$.

On the positive side, we prove that any classical parallel pebbling can be transformed into a reversible pebbling strategy whilst increasing space-time (resp. cumulative memory) costs by a multiplicative factor of at most $\mathcal{O}\left(N^{2/\sqrt{\log N}}\right)$ (resp. $\mathcal{O}\left(N^{\mathcal{O}(1)/\sqrt[4]{\log N}}\right)$). We also analyze the impact of the reversibility constraint on the cumulative pebbling cost of depth-robust and depth-reducible DAGs exploiting reversibility to improve constant factors in a prior lower bound of Alwen et al. (EUROCRYPT 2017). For depth-reducible DAGs we show that the state-of-the-art recursive pebbling techniques of Alwen et al. (EUROCRYPT 2017) can be converted into a recursive reversible pebbling attack without any asymptotic increases in pebbling costs. Finally, we extend a result of Blocki et al. (ITCS 2020) to show that it is Unique Games hard to approximate the reversible cumulative pebbling cost of a DAG $G$ to within any constant factor.
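To make the $\mathcal{O}(N)$ classical baseline for the line graph concrete, here is a minimal Python sketch (ours, not from the paper) of the standard sequential strategy, counting cumulative cost as the number of pebbles on the graph summed after every move; the deletion that keeps the cost linear is precisely the kind of move the reversible game restricts.

def classical_line_pebbling_cost(N: int) -> int:
    """Cumulative cost of the naive sequential pebbling of the line graph 1 -> 2 -> ... -> N,
    counted as the number of pebbles present after each move."""
    pebbles = set()
    cost = 0
    for i in range(1, N + 1):
        # Classical rule: a pebble may be placed on i once its predecessor i-1 is pebbled.
        assert i == 1 or (i - 1) in pebbles
        pebbles.add(i)
        cost += len(pebbles)
        if i > 1:
            # Classical rule: pebbles may be removed at any time to free up space.
            # The reversible game would only allow this removal while i-2 is still pebbled.
            pebbles.remove(i - 1)
            cost += len(pebbles)
    return cost  # equals 3N - 2, i.e. O(N)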
Pierre Briaud, Maxime Bros, Ray Perlner, Daniel Smith-Tone
ePrint Report
DME is a multivariate scheme submitted to the call for additional signatures recently launched by NIST. Its performance is one of the best among all the candidates. The public key is constructed from the alternation of very structured linear and non-linear components that constitute the private key, the latter being defined over an extension field. We exploit these structures by proposing an algebraic attack which is practical on all DME parameters.
Yuval Ishai, Yifan Song
ePrint Report
A leakage-resilient circuit for $f:\{0,1\}^n\to\{0,1\}^m$ is a randomized Boolean circuit $C$ mapping a randomized encoding of an input $x$ to an encoding of $y=f(x)$, such that applying any leakage function $L\in \cal L$ to the wires of $C$ reveals essentially nothing about $x$. A leakage-tolerant circuit achieves the stronger guarantee that even when $x$ and $y$ are not protected by any encoding, the output of $L$ can be simulated by applying some $L'\in \cal L$ to $x$ and $y$ alone. Thus, $C$ is as secure as an ideal hardware implementation of $f$ with respect to leakage from $\cal L$.

Leakage-resilient circuits were constructed for low-complexity classes $\cal L$, including (length-$t$ output) $\mathcal{AC}^0$ functions, parities, and functions with bounded communication complexity. In contrast, leakage-tolerant circuits were only known for the simple case of probing leakage, where $L$ outputs the values of $t$ wires in $C$.

We initiate a systematic study of leakage-tolerant circuits for natural classes $\cal L$ of global leakage functions, obtaining the following main results.

Leakage-tolerant circuits for depth-1 leakage. Every circuit $C_f$ for $f$ can be efficiently compiled into an $\cal L$-tolerant circuit $C$ for $f$, where $\cal L$ includes all leakage functions $L$ that output either $t$ parities or $t$ disjunctions (alternatively, conjunctions) of any number of wires or their negations. In the case of parities, our simulator runs in $2^{O(t)}$ time. We provide partial evidence that this may be inherent.

Application to stateful leakage-resilient circuits. Using a general transformation from leakage-tolerant circuits, we obtain the first construction of stateful $t$-leakage-resilient circuits that tolerate a continuous parity leakage, and the first such construction for disjunction/conjunction leakage in which the circuit size grows sub-quadratically with $t$. Interestingly, here we can obtain $\mathtt{poly}(t)$-time simulation even in the case of parities.
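To fix ideas about the depth-1 leakage classes appearing in the two results above, here is a small Python sketch (our illustration, not the paper's formalism): a leakage function outputs up to $t$ bits, each of which is a parity or a disjunction of arbitrarily many wire values or their negations.

from typing import List, Tuple

Lit = Tuple[int, bool]           # (wire index, negate?)
Gate = Tuple[str, List[Lit]]     # ('xor' | 'or', literals)

def depth1_leakage(wires: List[int], gates: List[Gate]) -> List[int]:
    """Evaluate up to t depth-1 leakage outputs on the wire values of a circuit."""
    out = []
    for kind, lits in gates:
        vals = [wires[i] ^ int(neg) for i, neg in lits]
        if kind == 'xor':                 # a parity of the chosen literals
            bit = 0
            for v in vals:
                bit ^= v
        else:                             # 'or': a disjunction of the literals
            bit = int(any(vals))
        out.append(bit)
    return out

# Example: leak the parity of wires 0..3 and the OR of (wire 5, NOT wire 7).
# depth1_leakage(wire_values, [('xor', [(i, False) for i in range(4)]),
#                              ('or', [(5, False), (7, True)])])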
Maryam Bahrani, Pranav Garimidi, Tim Roughgarden
ePrint Report
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with passive block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of active block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the phrase ``maximal extractable value (MEV).''

We first prove that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: With active block producers, no non-trivial or approximately welfare maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These impossibility results can be interpreted as a mathematical justification for augmenting transaction fee mechanisms with additional components such as orderflow auctions, block producer competition, trusted hardware, or cryptographic techniques.

We then proceed to a more fine-grained model of block production that is inspired by current practice, in which we distinguish the roles of ``searchers'' (who actively identify opportunities for value extraction from the application layer and compete for the right to take advantage of them) and ``proposers'' (who participate directly in the blockchain protocol and make the final choice of the published block). Searchers can effectively act as an ``MEV oracle'' for a transaction fee mechanism, thereby enlarging the design space. Here, we first consider a transaction fee mechanism that resembles how searchers have traditionally been incorporated into the block production process, with each transaction effectively sold off to a searcher through a first-price auction. We then explore the design space with searchers more generally, and design a mechanism that circumvents our impossibility results for mechanisms without searchers. Our mechanism (the ``SAKA'' mechanism) is deterministic, incentive-compatible (for users, searchers, and the block producer), and sybil-proof, and it guarantees roughly 50% of the maximum-possible welfare when transaction sizes are small relative to block sizes. We conclude with a matching negative result: even when transactions are small relative to blocks, no incentive-compatible, sybil-proof, and deterministic transaction fee mechanism can guarantee more than 50% of the maximum-possible welfare.
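As a point of reference for the searcher model, here is a tiny Python sketch (ours; not the SAKA mechanism) of the traditional baseline in which each transaction is sold off separately through a first-price auction: the highest-bidding searcher wins the transaction and pays exactly its bid.

from typing import Dict, Tuple

def first_price_per_transaction(bids: Dict[str, Dict[str, float]]) -> Dict[str, Tuple[str, float]]:
    """bids[tx][searcher] = that searcher's bid for tx; returns tx -> (winner, payment)."""
    outcome = {}
    for tx, tx_bids in bids.items():
        winner = max(tx_bids, key=tx_bids.get)
        outcome[tx] = (winner, tx_bids[winner])   # first price: the winner pays its own bid
    return outcome

# Example: two searchers competing for two transactions.
print(first_price_per_transaction({"tx1": {"s1": 0.8, "s2": 1.2},
                                   "tx2": {"s1": 0.3, "s2": 0.1}}))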
Aron van Baarsen, Sihang Pu
ePrint Report
Traditional private set intersection (PSI) involves a receiver and a sender holding sets $X$ and $Y$, respectively, with the receiver learning only the intersection $X\cap Y$. We turn our attention to its fuzzy variant, where the receiver holds \(|X|\) hyperballs of radius \(\delta\) in a metric space and the sender has $|Y|$ points. Representing the hyperballs by their centers, the receiver learns the points $x\in X$ for which there exists $y\in Y$ such that $\mathsf{dist}(x,y)\leq \delta$ with respect to some distance metric. Previous approaches either require general-purpose multi-party computation (MPC) techniques like garbled circuits or fully homomorphic encryption (FHE), leak details about the sender’s precise inputs, support limited distance metrics, or scale poorly with the hyperballs' volume.
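For reference, a minimal Python sketch (ours) of the ideal, non-private functionality being targeted: the receiver's hyperballs are given by their centers $X$ and a radius $\delta$, the sender holds points $Y$, and the output is the set of centers with some sender point within $L_p$ distance $\delta$.

from typing import Iterable, Tuple, Set

Point = Tuple[float, ...]

def fuzzy_psi_ideal(X: Iterable[Point], Y: Iterable[Point],
                    delta: float, p: float = 2.0) -> Set[Point]:
    """Ideal functionality: centers x with dist_p(x, y) <= delta for some y in Y."""
    Y = list(Y)

    def dist(x: Point, y: Point) -> float:
        if p == float("inf"):
            return max(abs(a - b) for a, b in zip(x, y))
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

    return {x for x in X if any(dist(x, y) <= delta for y in Y)}

# Example in dimension 2 with the L_infinity metric:
print(fuzzy_psi_ideal({(0.0, 0.0), (10.0, 10.0)}, [(1.0, 0.5)], delta=1.0, p=float("inf")))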

This work presents the first black-box construction for fuzzy PSI (including other variants such as PSI cardinality, labeled PSI, and circuit PSI), which can handle polynomially large radius and dimension (i.e., a potentially exponentially large volume) in two interaction messages, supporting general \(L_{p\in[1,\infty]}\) distance, without relying on garbled circuits or FHE. The protocol excels in both asymptotic and concrete efficiency compared to existing works. For security, we rely solely on the Decisional Diffie-Hellman (DDH) assumption in the random oracle model.
Houda Ferradi
ePrint Report
This paper introduces \textsl{signature validation}, a primitive allowing any \underline{t}hird party $T$ (\underline{T}héodore) to verify that a \underline{v}erifier $V$ (\underline{V}adim) computationally verified a signature $\sigma_s$ on a message $m$ issued by a \underline{s}igner $S$ (\underline{S}arah).

A naive solution consists of having Sarah send $x=\{m,\sigma_s\}$, where $\sigma_s$ is Sarah's signature on $m$, and having Vadim confirm reception with a signature $\sigma_v$ on $x$.

Unfortunately, this only attests \textsl{proper reception} by Vadim, i.e. that Vadim \textsl{could have checked} $x$ and not that Vadim \textsl{actually verified} $x$. By ``actually verifying'' we mean providing a proof or a convincing argument that a program running on Vadim's machine checked the correctness of $x$.
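To make the naive baseline concrete, here is a minimal Python sketch (ours; the Ed25519 choice via the pyca/cryptography package is purely illustrative): Sarah signs $m$, Vadim countersigns $x$ to acknowledge reception, but nothing in $\sigma_v$ attests that the verification call was actually executed.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sarah signs the message m.
sarah_sk = Ed25519PrivateKey.generate()
m = b"terms requiring informed consent"
sigma_s = sarah_sk.sign(m)

# Vadim acknowledges reception of x = (m, sigma_s) by countersigning it.
vadim_sk = Ed25519PrivateKey.generate()
x = m + sigma_s
sigma_v = vadim_sk.sign(x)

# sigma_v only proves that Vadim received x; whether Vadim ever ran the check
# below is exactly what signature validation is meant to attest.
sarah_sk.public_key().verify(sigma_s, m)   # raises InvalidSignature on failure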

This paper proposes several solutions for doing so, thereby providing a useful building-block in numerous commercial and legal interactions for proving informed consent.
Cécile Delerablée, Lénaïck Gouriou, David Pointcheval
ePrint Report
Attribute-based cryptography allows fine-grained control over the use of the private key. In particular, an attribute-based signature (ABS) scheme specifies the capabilities of the signer, who can only sign messages associated with a policy authorized by their set of attributes. Furthermore, we can expect signatures not to leak any information about the identity of the signer. ABS is a useful tool for identity-preserving authentication processes that require granular access control, and it can be enhanced with additional properties, for example delegation, where users are able to manage a set of keys derived from their original one.

In this paper, we address delegation of signing keys. Our first delegation works for any subset of the original attributes, which is the intuitive approach to delegation. Furthermore, we also provide another kind of delegation where the delegator can choose a policy at delegation time to produce keys that can sign any message under this specific policy. This last approach to delegation is a direct application of a new version of the indexing technique, which was first introduced by Okamoto and Takashima in order to prove adaptive security in ABS and its counterpart for encryption, ABE. On top of that, we prove that our scheme is compatible with a well-studied feature of ABS, traceability, by using an approach based on Linearly-Homomorphic signatures. All our schemes also guarantee the anonymity of the real signer. The unforgeability of our schemes is proven under the SXDH assumption, and our constructions use the Dual Pairing Vector Spaces (DPVS) framework developed by Okamoto and Takashima, which has been widely used for all kinds of attribute-based and functional cryptography mechanisms.
Ziqi Zhu, Jiangtao Li, Kai Zhang, Junqing Gong, Haifeng Qian
ePrint Report
This work initiates the study of concrete registered functional encryption (Reg-FE) beyond ``all-or-nothing'' functionalities:

- We build the first Reg-FE for linear functions, or inner-product evaluation (Reg-IPFE), from pairings. The scheme achieves adaptive IND-security under the $k$-Lin assumption in the prime-order bilinear group. A minor modification yields the first Registered Inner-Product Encryption (Reg-IPE) scheme from the $k$-Lin assumption. Prior work achieves the same security in the generic group model.

- We build the first Reg-FE for quadratic functions (Reg-QFE) from pairings. The scheme achieves very selective simulation-based security (SIM-security) under the bilateral $k$-Lin assumption in the prime-order bilinear group. Here, ``very selective'' means that the adversary claims the challenge messages, all quadratic functions to be registered, and all corrupted users at the beginning.

Besides focusing on the compactness of the master public key and helper keys, we also aim for compact ciphertexts in Reg-FE. Let $L$ be the number of slots and $n$ be the input size. Our first Reg-IPFE has weakly compact ciphertexts of size $O(n\cdot\log L)$ while our second Reg-QFE has compact ciphertexts of size $O(n+\log L)$. Technically, for our first Reg-IPFE, we employ the nested dual-system method in the context of Reg-IPFE; for our second Reg-QFE, we follow Wee's ``IPFE-to-QFE'' transformation [TCC '20] but devise a set of new techniques that make our pairing-based Reg-IPFE compatible. Along the way, we introduce a new notion named Pre-Constrained Registered IPFE, which generalizes slotted Reg-IPFE by constraining the form of functions that can be registered.
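As a rough sense of scale (our arithmetic, with constants and unit sizes suppressed): for $L = 2^{20}$ registered slots and input length $n = 512$, weakly compact ciphertexts of size $O(n\cdot\log L)$ scale like $512 \cdot 20 = 10240$, whereas compact ciphertexts of size $O(n+\log L)$ scale like $512 + 20 = 532$.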
Nicolas Alhaddad, Mayank Varia, Ziling Yang
ePrint Report
Asynchronous complete secret sharing (ACSS) is a foundational primitive in the design of distributed algorithms and cryptosystems that require secrecy. Dual-threshold ACSS permits a dealer to distribute a secret to a collection of $n$ servers so that everyone holds shares of a polynomial containing the dealer's secret.

This work contributes a new ACSS protocol, called Haven++, that uses packing and batching to make asymptotic and concrete advances in the design and application of ACSS for large secrets. Haven++ allows the dealer to pack multiple secrets in a single sharing phase, and to reconstruct either one or all of them later. For even larger secrets, we contribute a batching technique to amortize the cost of proof generation and verification across multiple invocations of our protocol.

The result is an asymptotic improvement in amortized communication and computation complexity, both for ACSS itself and for its application to asynchronous distributed key generation. We implement Haven++ and find that it improves performance over the hbACSS protocol of Yurek et al. by a factor of 3-10$\times$ or more across a wide range of parameters for the number of parties and batch size.

26 February 2024

Benedikt Bünz, Jessica Chen
ePrint Report
We construct two new accumulation schemes. The first one is for checking that $\ell$ read and write operations were performed correctly from a memory of size $T$. Unlike all prior work, the prover time is entirely independent of $T$ and only depends on $\ell$. The second one is for deterministic computations. It does not require committing to the intermediate wires of the computation but only to the input and output. This is achieved by building an accumulation scheme for a modified version of the famous GKR protocol. We show that these schemes are highly compatible and that the accumulation for GKR can further reduce the cost of the memory-checking scheme. Using the BCLMS (Crypto 21) compiler, these protocols yield an efficient incrementally verifiable computation (IVC) scheme that is particularly useful for machine computations with large memories and deterministic steps.
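For background on what checking $\ell$ read and write operations means, here is a minimal Python sketch (ours) of the classical offline memory-checking formulation that such arguments are typically phrased over, not necessarily this paper's construction: every access is logged as an (address, value, timestep) tuple, and an honest trace makes the read-set and write-set multisets coincide. Note that the $O(T)$ initialization shown here is exactly the kind of dependence on $T$ that the abstract says the new prover avoids.

from collections import Counter

def run_and_log(ops, T):
    """Execute ops = [(is_write, addr, value), ...] against a memory of size T and log
    (addr, value, timestep) tuples in the offline memory-checking style."""
    mem = {a: (0, 0) for a in range(T)}            # addr -> (value, last write timestep)
    RS, WS = Counter(), Counter()
    for a in range(T):
        WS[(a, 0, 0)] += 1                         # initial writes of 0 at timestep 0
    for t, (is_write, a, v) in enumerate(ops, start=1):
        old_v, old_t = mem[a]
        RS[(a, old_v, old_t)] += 1                 # log the tuple being read
        new_v = v if is_write else old_v
        WS[(a, new_v, t)] += 1                     # log the tuple written back
        mem[a] = (new_v, t)
    for a, (v, t) in mem.items():
        RS[(a, v, t)] += 1                         # final audit reads
    return RS, WS

# An honest trace makes the two multisets equal; a full checker also enforces that each
# read's logged timestep precedes the current one.
RS, WS = run_and_log([(True, 3, 7), (False, 3, 0), (True, 0, 1)], T=8)
assert RS == WS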
Jake Januzelli, Lawrence Roy, Jiayu Xu
ePrint Report
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE.
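For orientation, here is a deliberately toy Python sketch (ours) of the EKE compilation pattern: run an unauthenticated key agreement (Diffie-Hellman here, with toy parameters) and send each KA message masked under the shared password. The password-derived XOR pad stands in for the ideal cipher / POPF, so this only shows the shape of the compiler and is not a secure instantiation.

import hashlib, secrets

P = 2 ** 127 - 1       # a Mersenne prime; toy-sized, NOT a secure choice
G = 3
NB = (P.bit_length() + 7) // 8

def pad(password: bytes, tag: bytes) -> bytes:
    # Stand-in for EKE's password-keyed encryption of a KA message.
    return hashlib.shake_256(b"EKE|" + tag + b"|" + password).digest(NB)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pw = b"low-entropy password"

# Party A: unauthenticated KA message, masked under the password before sending.
a = secrets.randbelow(P - 2) + 1
wire_a = xor(pow(G, a, P).to_bytes(NB, "big"), pad(pw, b"A"))

# Party B: same on its side.
b = secrets.randbelow(P - 2) + 1
wire_b = xor(pow(G, b, P).to_bytes(NB, "big"), pad(pw, b"B"))

# Each side strips the password mask from the other's message and finishes the KA.
k_a = pow(int.from_bytes(xor(wire_b, pad(pw, b"B")), "big"), a, P)
k_b = pow(int.from_bytes(xor(wire_a, pad(pw, b"A")), "big"), b, P)
assert k_a == k_b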

In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks:

1. We show that among the seven existing results on the UC-security of (O)EKE, six are flawed;

2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy the properties of strong pseudorandomness, pseudorandom non-malleability, and collision resistance, all of which are missing in existing works;

3. We give UC-security proofs for EKE and OEKE using Programmable-Once Random Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using Ideal Cipher (IC).

Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also give a security analysis of POPF in a new composition framework called almost UC, which we believe is interesting in its own right.
Ruida Wang, Yundi Wen, Zhihao Li, Xianhui Lu, Benqiang Wei, Kun Liu, Kunpeng Wang
ePrint Report
We present a novel circuit bootstrapping algorithm that outperforms the state-of-the-art TFHE method with a 9.9× speedup and a 15.6× key size reduction. These improvements can be attributed to two technical contributions. Firstly, we redesigned the circuit bootstrapping workflow to operate exclusively on the ring ciphertext type, which eliminates the need for conversion between LWE and RLWE ciphertexts. Secondly, we improve the LMKC+ blind rotation algorithm by reducing the number of automorphisms, and then propose the first automorphism-type multi-value functional bootstrapping. These automorphism-based techniques lead to further key size optimization, and are of independent interest beyond circuit bootstrapping. Based on our new circuit bootstrapping, we can evaluate AES-128 in 26.2 s (single thread), achieving a 10.3× speedup compared with the state-of-the-art TFHE-based approach.
Weixi Zheng, Liu Zhang, Zilong Wang
ePrint Report
At CRYPTO 2019, Gohr demonstrated that differential-neural distinguishers (DNDs) for Speck32/64 can learn more features than classical cryptanalysis's differential distribution tables (DDT). Furthermore, a non-classical key recovery procedure is devised by combining the Upper Confidence Bound (UCB) strategy and the BayesianKeySearch algorithm. Consequently, the time complexity of 11-round key recovery attacks on Speck32/64 is significantly reduced compared with the state-of-the-art results in classical cryptanalysis. This advancement in deep learning-assisted cryptanalysis has opened up new possibilities. However, the specific encryption features exploited by DNDs remain unclear.

In this paper, we begin by analyzing the features learned by DNDs based on the probability distribution of a ciphertext pair. Our analysis reveals that a DND not only learns the differential features of the ciphertext pair but also captures the XOR information of the left and right branches of the ciphertext pair. This explains why a DND can outperform the DDT in certain cases. For other ciphers, we can also predict whether deep learning methods can achieve results superior to classical methods based on the probability distribution of the ciphertext pair. Next, we modify the input data format and network structure according to the specific features that can be learned, in order to train DNDs more directly. With these modifications, the number of parameters can be reduced to only 1/16 of that of the previous networks while maintaining high precision. Additionally, the training time for the DNDs is significantly reduced. Finally, to improve the efficiency of deep learning-assisted cryptanalysis, we introduce Bayes-UCB to select promising ciphertext structures more efficiently. We also introduce an improved BayesianKeySearch algorithm to retain the guessed keys with the highest scores during key guessing. We use both methods to launch 11-round, 12-round, and 13-round key recovery attacks on Speck32/64. The results show that, under the same conditions, the success rate of 11-round key recovery attacks has increased from Gohr's 36.1% to 52.8%, the success rate of 12-round key recovery attacks has increased from Gohr's 39% to 50%, and the success rate of 13-round key recovery attacks has increased from Zhang et al.'s 21% to 24%. In addition, the time complexity of these experiments is also significantly reduced.
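For reference, a minimal Python sketch (ours) of Speck32/64 and of generating one ciphertext pair from the input difference commonly used for Speck32/64 DNDs, (0x0040, 0x0000); the four 16-bit words of the pair are the raw input from which differential features and the left/right-branch XOR information discussed above can be derived.

import random

MASK, ALPHA, BETA, ROUNDS = 0xFFFF, 7, 2, 22   # Speck32/64 parameters

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(key, rounds=ROUNDS):
    # key = (l2, l1, l0, k0); the key schedule reuses the round function with the
    # round counter in place of the round key.
    l, ks = [key[2], key[1], key[0]], [key[3]]
    for i in range(rounds - 1):
        l_new, k_new = speck_round(l[i], ks[i], i)
        l.append(l_new)
        ks.append(k_new)
    return ks

def encrypt(pt, ks):
    x, y = pt
    for k in ks:
        x, y = speck_round(x, y, k)
    return x, y

# One sample: a plaintext pair with difference (0x0040, 0x0000) under a random key.
ks = expand_key(tuple(random.randrange(1 << 16) for _ in range(4)))
p0 = (random.randrange(1 << 16), random.randrange(1 << 16))
p1 = (p0[0] ^ 0x0040, p0[1] ^ 0x0000)
c0, c1 = encrypt(p0, ks), encrypt(p1, ks)
sample = (c0[0], c0[1], c1[0], c1[1])          # four 16-bit words fed to the network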
Vincent Hwang
ePrint Report
We show that there is a discrepancy between the emulated floating-point multiplications in the submission package of Falcon and the claimed behavior. In particular, we show that floating-point products with absolute values smaller than the smallest normal positive floating-point number are incorrectly zeroized. However, we show that the discrepancy doesn't affect the complex fast Fourier transform by modeling the floating-point addition, subtraction, and multiplication in CryptoLine. We later implement our own floating-point multiplications in Armv7-M assembly and Jasmin and prove their equivalence with our model, demonstrating the possibility of transferring the challenging verification task (verifying highly optimized assembly) to the presumably more readable code base (Jasmin).
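For intuition about the regime involved (our illustration; the paper's analysis concerns Falcon's emulated arithmetic, not Python's), a binary64 product can have magnitude below the smallest normal positive double, $2^{-1022}$, and still be a nonzero subnormal; an emulation that zeroizes such products deviates from correctly rounded IEEE 754 multiplication.

import struct, sys

a, b = 2.0 ** -500, 2.0 ** -550
p = a * b                               # exactly 2^-1050: below 2^-1022, yet representable
assert 0.0 < p < sys.float_info.min     # a nonzero subnormal under correct IEEE 754 rounding
print(struct.pack(">d", p).hex())       # bit pattern of the correctly rounded product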
Hanjun Li, Sela Navot, Stefano Tessaro
ePrint Report
This paper proposes POPSTAR, a new lightweight protocol for the private computation of heavy hitters, also known as a private threshold reporting system. In such a protocol, the users provide input measurements, and a report server learns which measurements appear more than a pre-specified threshold number of times. POPSTAR follows the same architecture as STAR (Davidson et al., CCS 2022) by relying on a helper randomness server in addition to a main server computing the aggregate heavy-hitter statistics. While STAR is extremely lightweight, it leaks a substantial amount of information, consisting of an entire histogram of the provided measurements (though it only reveals the actual measurements that appear beyond the threshold). POPSTAR shows that this leakage can be reduced at a modest cost ($\sim$7$\times$ longer aggregation time). Our leakage is closer to that of Poplar (Boneh et al., S&P 2021), which however relies on distributed point functions and a different model that requires interaction between two non-colluding servers (with equal workloads) to compute the heavy hitters.
Vasyl Ustimenko
ePrint Report
We suggest the family of ciphers s^E^n, $n=2,3,\ldots$, with the space of plaintexts $(Z^*_{2^s})^n$, $s>1$, such that the encryption map is a composition of the kind $G=G_1A_1G_2A_2$, where the $A_i$ are affine transformations from $AGL_n(Z_{2^s})$ preserving the variety $(Z^*_{2^s})^n$, and the Eulerian endomorphisms $G_i$, $i=1,2$, of $K[x_1, x_2, \ldots, x_n]$ move each $x_i$ to a monomial term $\mu (x_1)^{d(1)}(x_2)^{d(2)}\cdots(x_n)^{d(n)}$, $\mu \in Z^*_{2^s}$, and act on $(Z^*_{2^s})^n$ as bijective transformations.

The cipher is converted into a protocol-supported cryptosystem. Protocols of Noncommutative Cryptography implemented on the platform of Eulerian endomorphisms are used for the delivery of the $G_i$ and $A_i$ from Alice to Bob. One can use twisted Diffie-Hellman protocols, whose security rests on the complexity of the Conjugacy Power problem, or the hidden tame homomorphism protocol, whose security rests on the word decomposition problem. Instead of the delivery of the $G_i$, Alice and Bob can elaborate these transformations via the inverse twisted Diffie-Hellman protocol implemented on the platform of tame Eulerian transformations of $(Z^*_{2^s})^n$. The cost of a single protocol is $O(n^3)$ and the cost of computing the preimage of the nonlinear map in use is $O(n^2)$, so the verification of $n^t$, $t\geq 1$, signatures takes time $O(n^{t+2})$. Instead of the inverse twisted Diffie-Hellman protocol, correspondents can use the inverse hidden tame homomorphism protocol, which rests on the complexity of word decomposition for tame Eulerian transformations.

We use natural bijections between $Z^*_{2^s}$ and $Z_{2^{s-1}}$, between $Z^*_{2^s}$ and the finite field $F_{2^{s-1}}$, and between $Z^*_{2^s}$ and the Boolean ring $B_{s-1}$ of order $2^{s-1}$ to modify the family of ciphers or cryptosystems via the change of $AGL_n(Z_{2^s})$ for $AGL_n(K)$, where $K$ is one of the rings $Z_{2^{s-1}}$, $F_{2^{s-1}}$, or $B_{s-1}$. The new ciphers are defined via the multiplications of two different commutative rings, $Z_{2^s}$ and $K$. This does not allow one to treat them as stream ciphers of multivariate cryptography and to use the corresponding cryptanalytic techniques; the adversary is not able to use known cryptanalytic methods such as linearisation attacks. We discuss the option of changing the above-mentioned elements of $AGL_n(Z_{2^s})$ or $AGL_n(K)$ for a nonlinear multivariate transformation $F$ of $(Z_{2^s})^n$ or $K^n$ with a symmetric trapdoor accelerator $T$, i.e., a piece of information whose knowledge allows one to compute the value $F(p)$ at an arbitrarily chosen $p \in P$ in time $O(n^2)$ and to solve the equation $F(x)=c$ for each $c$ from $C$ in time $O(n^2)$.
Alexander Hoover, Sarvar Patel, Giuseppe Persiano, Kevin Yeo
ePrint Report
We study single-server private information retrieval (PIR) where a client wishes to privately retrieve the $x$-th entry from a database held by a server without revealing the index $x$. In our work, we focus on PIR with client pre-processing where the client may compute hints during an offline phase. The hints are then leveraged during queries to obtain sub-linear online time. We present Plinko, the first single-server PIR with client pre-processing that obtains optimal trade-offs between client storage and query time for all parameters. Our scheme uses $t = \tilde{O}(n/r)$ query time for any client storage size $r$. This matches known lower bounds of $r \cdot t = \Omega(n)$ up to logarithmic factors for all parameterizations, whereas prior works could only match the lower bound when $r = \tilde{O}(\sqrt{n})$. Moreover, Plinko is the first updatable PIR scheme where an entry can be updated in worst-case $\tilde{O}(1)$ time.

As our main technical tool, we define the notion of an invertible pseudorandom function (iPRF) that generalizes standard PRFs to be equipped with an efficient inversion algorithm. We present a construction of an iPRF from one-way functions where forward evaluation runs in $\tilde{O}(1)$ time and inversion runs in time linear in the inverse set (output) size. Furthermore, our iPRF construction is the first that remains efficient and secure for arbitrary domain and range sizes (including small domains and ranges). In the context of single-server PIR, we show that iPRFs may be used to construct the first hint set representation where finding a hint containing an entry $x$ may be done in $\tilde{O}(1)$ time.
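For instance (our illustration of the stated trade-off), a client storing $r = n^{2/3}$ hints gets query time $t = \tilde{O}(n/r) = \tilde{O}(n^{1/3})$, so $r \cdot t = \tilde{O}(n)$ matches the lower bound up to logarithmic factors, a parameter regime that prior schemes, tight only at $r = \tilde{O}(\sqrt{n})$, did not achieve.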