International Association for Cryptologic Research

CryptoDB

Jiang Zhang

ORCID: 0000-0002-4787-0316

Publications

Year
Venue
Title
2023
EUROCRYPT
Half-Tree: Halving the Cost of Tree Expansion in COT and DPF
The GGM tree is widely used in designing correlated oblivious transfer (COT), subfield vector oblivious linear evaluation (sVOLE), distributed point function (DPF), and distributed comparison function (DCF). Often, the computation and communication cost associated with the GGM tree is the major cost in these protocols. In this paper, we propose a suite of optimizations that can reduce this cost by half.
- Halving the cost of COT and sVOLE. Our basic protocol introduces extra correlation into each level of a GGM tree used by the state-of-the-art COT protocol. As a result, it reduces the number of AES calls and the communication by half. Extending the idea to sVOLE, we achieve a similar improvement with either halved computation or halved communication.
- Halving the cost of DPF and DCF. We propose improved two-party protocols for the distributed generation of DPF/DCF keys. The tree structures behind these protocols lead to more efficient full-domain evaluation and halve both the communication and the round complexity of the state-of-the-art DPF/DCF protocols.
All protocols are provably secure in the random-permutation model and can be accelerated using fixed-key AES-NI. We also improve the state-of-the-art constructions of puncturable pseudorandom functions (PPRFs), DPFs, and DCFs, which are of independent interest in dealer-available scenarios.
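For context, the following minimal Python sketch shows baseline (unoptimized) GGM tree expansion, the operation whose cost the half-tree technique roughly halves. SHA-256 stands in for the fixed-key-AES-based length-doubling PRG used in real implementations, and the function names are illustrative only.

import hashlib

def prg(seed):
    # Length-doubling PRG: one 16-byte seed -> two 16-byte child seeds.
    left = hashlib.sha256(seed + b"\x00").digest()[:16]
    right = hashlib.sha256(seed + b"\x01").digest()[:16]
    return left, right

def ggm_expand(root, depth):
    # Full-domain evaluation: expand the root into 2**depth pseudorandom leaves,
    # one PRG call per internal node.
    level = [root]
    for _ in range(depth):
        level = [child for seed in level for child in prg(seed)]
    return level

leaves = ggm_expand(b"\x00" * 16, depth=4)  # 16 pseudorandom 16-byte leaves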
2023
CRYPTO
Fast Blind Rotation for Bootstrapping FHEs
Blind rotation is one of the key techniques for constructing fully homomorphic encryption schemes whose best known bootstrapping algorithms run in less than one second. The two main approaches currently used to realize blind rotation, namely AP and GINX, were first introduced by Alperin-Sheriff and Peikert (CRYPTO 2014) and by Gama, Izabachène, Nguyen and Xie (EUROCRYPT 2016), respectively. In this paper, we propose a new blind rotation algorithm based on a GSW-like encryption from the NTRU assumption. Its performance is asymptotically independent of the key distribution, and it outperforms AP and GINX in both evaluation key size and computational efficiency (especially for large key distributions). Using our blind rotation algorithm as a building block, we present new bootstrapping algorithms for both LWE and RLWE ciphertexts. We implement our bootstrapping algorithm for LWE ciphertexts and compare its actual performance with the two bootstrapping algorithms implemented in the OpenFHE library, namely FHEW/AP by Ducas and Micciancio (EUROCRYPT 2015) and TFHE/GINX by Chillotti, Gama, Georgieva and Izabachène (Journal of Cryptology 2020). For parameters with a ternary key distribution at 128-bit security, our bootstrapping only needs to store an evaluation key of size 18.65 MB for blind rotation, which is about 89.8 times smaller than FHEW/AP and 2.9 times smaller than TFHE/GINX. Moreover, our bootstrapping runs in 112 ms on a laptop, which is about 3.2 times faster than FHEW/AP and 2.1 times faster than TFHE/GINX. Further improvements are available for large key distributions such as Gaussian distributions.
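As a rough reference (sign and scaling conventions vary across the FHEW/TFHE literature, and the notation here is ours rather than the paper's), blind rotation takes an LWE ciphertext $(\mathbf{a}, b)$ with $b = \langle \mathbf{a}, \mathbf{s}\rangle + \Delta m + e \pmod q$ and homomorphically moves its phase into the exponent of $X$ acting on a test polynomial:

$$\mathrm{ACC}_0 \approx X^{\,b}\cdot \mathrm{testv}(X), \qquad \mathrm{ACC}_i \approx \mathrm{ACC}_{i-1}\cdot \mathrm{Enc}\big(X^{-a_i s_i}\big), \qquad \mathrm{ACC}_n \approx \mathrm{Enc}\big(X^{\,\Delta m + e}\cdot \mathrm{testv}(X)\big).$$

The $n$ encrypted factors $\mathrm{Enc}(X^{-a_i s_i})$ make up the evaluation key; AP and GINX differ in how these factors are assembled from digits of $a_i$ or bits of $s_i$, while this paper realizes them with a GSW-like encryption under the NTRU assumption.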
2023
ASIACRYPT
NEV: Faster and Smaller NTRU Encryption using Vector Decoding
Jiang Zhang, Dengguo Feng, Di Yan
In this paper, we present NEV -- a faster and smaller NTRU Encryption using Vector decoding, which is provably IND-CPA secure in the standard model under the decisional NTRU and RLWE assumptions over the cyclotomic ring $R_q = \mathbb{Z}_q[X]/(X^n+1)$. Our main technique is a novel and non-trivial way to integrate a previously known plaintext encoding and decoding mechanism into the provably IND-CPA secure NTRU variant of Stehlé and Steinfeld (Eurocrypt 2011). Unlike the original NTRU encryption and its variants, which encode the plaintext into the least significant bits of the coefficients of a message polynomial, we encode each plaintext bit into the most significant bits of multiple coefficients of the message polynomial, so that we can use a vector of noised coefficients to decode each plaintext bit in decryption, and significantly reduce the size of $q$ while keeping the decryption failure probability negligible. Concretely, we can use $q = 769$ to obtain public keys and ciphertexts of 615 bytes with decryption failure probability $\leq 2^{-138}$ at NIST level 1 security, and of 1229 bytes with decryption failure probability $\leq 2^{-152}$ at NIST level 5 security. By applying the Fujisaki-Okamoto transformation in a standard way, we obtain an IND-CCA secure KEM from our basic PKE scheme. Compared to NTRU and Kyber in the NIST Round 3 finalists at the same security levels, our KEM is 33-48% more compact and 5.03-29.94x faster than NTRU in the round-trip time of ephemeral key exchange, and is 21% more compact and 1.42-1.74x faster than Kyber. We also give an optimized encryption scheme NEV' with better noise tolerance (and slightly better efficiency) based on a variant of the RLWE problem, called the Subset-Sum Parity RLWE problem, which we show is polynomially equivalent to the standard decisional RLWE problem (with different parameters) and may be of independent interest.
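The following toy Python sketch illustrates the general idea of spreading one plaintext bit over the most significant bits of several coefficients and recovering it from the whole noisy vector. It is a deliberately simplified repetition-style encoder with soft-decision decoding, not the paper's actual vector decoding, and the parameters are illustrative only.

import random

q, t = 769, 4  # toy parameters; the scheme's real parameters and decoder differ

def encode(bit):
    # Place the bit in the most significant bit of t coefficients: bit * floor(q/2).
    return [(bit * (q // 2)) % q for _ in range(t)]

def add_noise(coeffs, bound=150):
    # Model the decryption noise hitting each coefficient.
    return [(c + random.randint(-bound, bound)) % q for c in coeffs]

def decode(coeffs):
    # Soft-decision vote over all t noisy coefficients: a positive score means
    # the vector sits closer to q/2 (bit 1) than to 0 (bit 0).
    score = 0
    for c in coeffs:
        dist0 = min(c, q - c)          # distance to 0 modulo q
        d = abs(c - q // 2)
        dist_half = min(d, q - d)      # distance to q/2 modulo q
        score += dist0 - dist_half
    return 1 if score > 0 else 0

assert decode(add_noise(encode(1))) == 1
assert decode(add_noise(encode(0))) == 0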
2023
ASIACRYPT
Blockwise Rank Decoding Problem and LRPC Codes: Cryptosystems with Smaller Sizes
In this paper, we initiate the study of the Rank Decoding (RD) problem and LRPC codes with blockwise structures in rank-based cryptosystems. First, we introduce blockwise errors ($\ell$-errors), where each error consists of $\ell$ blocks of coordinates with disjoint supports, and define the blockwise RD ($\ell$-RD) problem as a natural generalization of the RD problem whose solutions are $\ell$-errors (note that the standard RD problem is the special case $\ell=1$). We adapt the typical attacks on the RD problem to the $\ell$-RD problem and find that the blockwise structure does not make the problem substantially easier: the $\ell$-RD problem is still exponentially hard for appropriate choices of $\ell>1$. Second, we introduce blockwise LRPC ($\ell$-LRPC) codes as generalizations of standard LRPC codes whose parity-check matrices can be divided into $\ell$ sub-matrices with disjoint supports, i.e., the intersection of the two subspaces generated by the entries of any two sub-matrices is the zero subspace, and we investigate decoding algorithms for $\ell$-errors. We find that the gain of using $\ell$-errors in decoding capacity outweighs the complexity loss in solving the $\ell$-RD problem, which makes it possible to design more efficient rank-based cryptosystems with flexible choices of parameters. As an application, we show that the two rank-based cryptosystems submitted to the NIST PQC competition, namely RQC and ROLLO, can be greatly improved by using the ideal variants of the $\ell$-RD problem and $\ell$-LRPC codes. Concretely, for 128-bit security, our RQC has a total public key and ciphertext size of 2.5 KB, which is not only about 50% more compact than the original RQC, but also smaller than the NIST Round 4 code-based submissions HQC, BIKE, and Classic McEliece.
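In symbols (the notation is ours and only illustrative), an $\ell$-error splits into $\ell$ blocks whose rank supports intersect trivially:

$$\mathbf{e} = (\mathbf{e}_1 \,\|\, \cdots \,\|\, \mathbf{e}_\ell) \in \mathbb{F}_{q^m}^{\,n}, \qquad \mathrm{Supp}(\mathbf{e}_i) \cap \mathrm{Supp}(\mathbf{e}_j) = \{0\} \quad \text{for } i \neq j,$$

where $\mathrm{Supp}(\mathbf{e}_i) \subseteq \mathbb{F}_{q^m}$ denotes the $\mathbb{F}_q$-subspace spanned by the entries of block $\mathbf{e}_i$; setting $\ell = 1$ recovers the standard RD error model.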
2023
JOFC
Lattice-Based Programmable Hash Functions and Applications
Jiang Zhang, Yu Chen, Zhenfeng Zhang
Driven by the open problem raised by Hofheinz and Kiltz (J Cryptol 25(3):484–527, 2012), we study the formalization of lattice-based programmable hash functions (PHFs) and give three types of concrete constructions using several techniques, such as a novel combination of cover-free sets and lattice trapdoors. Under the inhomogeneous small integer solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is a collision-resistant hash function, which gives a direct application of this new primitive. We further demonstrate the power of lattice-based PHFs by giving generic constructions of signatures and identity-based encryption (IBE) in the standard model, which not only provide a way to unify several previous lattice-based schemes using partitioning proof techniques, but also allow us to obtain new short signature schemes and IBE schemes from (ideal) lattices. Specifically, by instantiating the generic constructions with our Type-II and Type-III PHF constructions, we immediately obtain two short signature schemes and two IBE schemes with asymptotically much shorter keys. A major downside inherited from our Type-II and Type-III PHF constructions is that we can only prove the security of the new signatures and IBEs in the bounded security model, in which the number Q of the adversary's queries is required to be known in advance. Another downside is that the computation time of our new signatures and IBEs is a linear function of Q, which is large for typical parameters. To overcome these limitations, we also give a refined way of using Type-II and Type-III PHFs to construct lattice-based short signatures with short verification keys in the full security model. In particular, our methods depart from the confined guessing technique of Böhl et al. (Eurocrypt'13) that was used to construct previous standard-model short signature schemes with short verification keys by Ducas and Micciancio (Crypto'14) and by Alperin-Sheriff (PKC'15), and allow us to achieve much tighter security from weaker hardness assumptions.
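Very roughly, and in simplified notation of our own (the paper's definition is more general), a lattice-based PHF maps an input $X$ to a public matrix $\mathbf{A}_X$ that a trapdoor key can secretly shape as

$$\mathbf{A}_X \;=\; \mathbf{A}\,\mathbf{R}_X + \mathbf{S}_X\,\mathbf{B} \pmod q,$$

where $\mathbf{R}_X$ has small norm and $\mathbf{S}_X$ is either invertible or zero. A partitioning proof then hopes that $\mathbf{S}_X$ is invertible on all of the adversary's $Q$ queries (so they can be answered with a trapdoor for $\mathbf{B}$) while $\mathbf{S}_{X^*} = 0$ on the forgery, turning the forgery into an ISIS solution for $\mathbf{A}$.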
2022
JOFC
Non-Malleable Functions and their Applications
We formally study “non-malleable functions” (NMFs), a general cryptographic primitive which simplifies and relaxes the “non-malleable one-way/hash functions” (NMOWHFs) introduced by Boldyreva et al. (in: Advances in cryptology—ASIACRYPT 2009, pp 524–541, 2009) and refined by Baecher et al. (in: CT-RSA 2011, pp 268–283, 2011). NMFs focus on basic functions, rather than the one-way/hash functions considered in the literature on NMOWHFs. We formalize a game-based definition for NMFs. Roughly, a function $f$ is non-malleable if, given an image $y^* \leftarrow f(x^*)$ for a randomly chosen $x^*$, it is hard to output a value $y$ together with a transformation $\phi$ from some prefixed transformation class such that $y$ is an image of $\phi(x^*)$ under $f$. Our non-malleability notion is strong in the sense that only the trivial copy solution $(\mathsf{id}, y^*)$ is forbidden, where $\mathsf{id}$ is the identity transformation. We also consider the adaptive notion, which stipulates that non-malleability holds even when an inversion oracle is available. We investigate the relations between non-malleability and one-wayness in depth. In the non-adaptive setting, we show that non-malleability generally implies one-wayness for poly-to-one functions but not vice versa. In the adaptive setting, we show that for most algebra-induced transformation classes, adaptive non-malleability (ANM) is equivalent to adaptive one-wayness (AOW) for injective functions. These results establish theoretical connections between non-malleability and one-wayness for functions, extend to trapdoor functions as well, and resolve the open problems left by Kiltz et al. (in: Advances in cryptology—EUROCRYPT 2010, pp 673–692, 2010). We also study the relations between standard OW/NM and hinting OW/NM, where the latter notions are typically more useful in practice. Toward efficient realizations of NMFs, we give a deterministic construction from adaptive trapdoor functions as well as a randomized construction from all-but-one lossy functions and a one-time signature. This partially solves an open problem posed by Boldyreva et al. (ASIACRYPT 2009). Finally, we explore applications of NMFs to security against related-key attacks (RKA). We first show that, somewhat surprisingly, the implication AOW $\Rightarrow$ ANM sheds light on addressing non-trivial copy attacks in RKA security. We then show that NMFs give rise to a generic construction of RKA-secure authenticated key derivation functions, which have proven to be very useful in achieving RKA security for numerous cryptographic primitives. In particular, our construction simplifies and unifies the result due to Qin et al. (in: Public-key cryptography—PKC 2015, volume 9020 of LNCS. Springer, Berlin, pp 557–578, 2015).
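In game form (the notation is ours), non-malleability with respect to a transformation class $\Phi$ requires that for every efficient adversary $\mathcal{A}$,

$$\Pr\Big[\, y = f\big(\phi(x^*)\big) \,\wedge\, (\phi, y) \neq (\mathsf{id}, y^*) \;:\; x^* \leftarrow \mathcal{X},\; y^* \leftarrow f(x^*),\; (\phi, y) \leftarrow \mathcal{A}(f, y^*),\; \phi \in \Phi \,\Big] \;\leq\; \mathsf{negl}(\lambda),$$

and the adaptive variant additionally gives $\mathcal{A}$ access to an inversion oracle for $f$.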
2021
CRYPTO
Pushing the Limits of Valiant's Universal Circuits: Simpler, Tighter and More Compact
A universal circuit (UC) is a general-purpose circuit that can simulate arbitrary circuits (up to a certain size $n$). Valiant provides a $k$-way recursive construction of UCs (STOC 1976), where $k$ tunes the complexity of the recursion. More concretely, Valiant gives theoretical constructions of 2-way and 4-way UCs of asymptotic (multiplicative) sizes $5n\log n$ and $4.75n\log n$ respectively, which match the asymptotic lower bound $\Omega(n\log n)$ up to a constant factor. Motivated by various privacy-preserving cryptographic applications, Kiss et al. (Eurocrypt 2016) validated the practicality of 2-way universal circuits by giving example implementations for private function evaluation. Günther et al. (Asiacrypt 2017) and Alhassan et al. (J. Cryptology 2020) implemented 2-way/4-way hybrid UCs with various optimizations in place towards making universal circuits more practical. Zhao et al. (Asiacrypt 2019) optimized Valiant's 4-way UC to asymptotic size $4.5n\log n$ and proved a lower bound of $3.64n\log n$ for UCs under the Valiant framework. As the scale of computation goes beyond the 10-million-gate ($n=10^7$) or even billion-gate ($n=10^9$) level, the constant factor in UC size plays an increasingly important role in application performance. In this work, we investigate Valiant's universal circuits and present an improved framework for constructing universal circuits with the following advantages.
- Simplicity. Parameterization is no longer needed. In contrast to previous implementations, which resorted to a hybrid construction combining $k=2$ and $k=4$ for a tradeoff between fine granularity and asymptotic size-efficiency, our construction gets the best of both worlds when configured at the lowest complexity (i.e., $k=2$).
- Compactness. Our universal circuits have asymptotic size $3n\log n$, improving upon the best previously known $4.5n\log n$ by 33% and beating the $3.64n\log n$ lower bound for UCs constructed under Valiant's framework (Zhao et al., Asiacrypt 2019).
- Tightness. We show that under our new framework the UC size is lower bounded by $2.95n\log n$, which almost matches the $3n\log n$ circuit size of our 2-way construction.
We implement the 2-way universal circuits and compare their performance with other implementations, which confirms our theoretical analysis.
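As a quick back-of-the-envelope check of why the constant factor matters at the gate counts mentioned above (a sketch using only the asymptotic leading terms from the abstract, with base-2 logarithms and lower-order terms ignored):

from math import log2

# Leading-term UC sizes: 4.5*n*log(n) (Zhao et al., Asiacrypt 2019) vs 3*n*log(n) (this work).
for n in (10**7, 10**9):
    prior = 4.5 * n * log2(n)
    new = 3.0 * n * log2(n)
    print(f"n = {n:.0e}: {prior:.2e} -> {new:.2e} gates ({1 - new / prior:.0%} smaller)")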
2021
CRYPTO
Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN
Yu Yu, Jiang Zhang
Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 90's. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness can be reduced from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2-1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19], which is the main focus of this work. We first expand the underlying binary linear codes (of the NCP) beyond the balanced codes considered in [BLVW19] to another code (in some sense dual to balanced codes). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is $(T=2^{\Omega(n^{1-c})}, \epsilon=2^{-\Omega(n^{\min(c,1-c)})}, q=2^{\Omega(n^{\min(c,1-c)})})$-hard assuming that the NCP at the low noise rate $\tau=n^{-c}$ is $(T'=2^{\Omega(\tau n)}, \epsilon'=2^{-\Omega(\tau n)}, m=2^{\Omega(\tau n)})$-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions need constant-noise LPN with $(T=2^{\omega(\sqrt{n})}, \epsilon=2^{-\omega(\sqrt{n})}, q=2^{\sqrt{n}})$-hardness (Yu et al., CRYPTO 2016 & ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c=0.5$. We leave it as an open problem whether the gap can be closed or whether there is a separation in place.
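For reference (notation chosen to match the sample complexity $q$ above, otherwise ours), the decisional LPN problem with dimension $n$ and noise rate $\mu$ asks to distinguish

$$\big(\mathbf{A},\, \mathbf{A}\mathbf{s} \oplus \mathbf{e}\big) \quad\text{from}\quad \big(\mathbf{A},\, \mathbf{u}\big), \qquad \mathbf{A} \leftarrow \mathbb{F}_2^{\,q \times n},\; \mathbf{s} \leftarrow \mathbb{F}_2^{\,n},\; \mathbf{e} \leftarrow \mathcal{B}_\mu^{\,q},\; \mathbf{u} \leftarrow \mathbb{F}_2^{\,q},$$

where $\mathcal{B}_\mu$ is the Bernoulli distribution with parameter $\mu$; constant-noise LPN takes $\mu = \Theta(1)$ and low-noise LPN takes $\mu = 1/\sqrt{n}$.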
2020
PKC
Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices: KEMs and Signatures of Smaller Sizes
Currently, lattice-based cryptosystems are less efficient than their number-theoretic counterparts (based on RSA, discrete logarithm, etc.) in terms of key and ciphertext (signature) sizes. For adequate security, the former typically need thousands of bytes, while the latter require at most hundreds of bytes. This significant difference has become one of the main concerns in replacing currently deployed public-key cryptosystems with lattice-based ones. Observing the inherent asymmetries in existing lattice-based cryptosystems, we propose asymmetric variants of the (module-)LWE and (module-)SIS assumptions, which yield KEM and signature schemes of smaller sizes than those from the standard counterparts. Following the framework of Lindner and Peikert (CT-RSA 2011) and the Crystals-Kyber proposal (EuroS&P 2018), we propose an IND-CCA secure KEM scheme from the hardness of the asymmetric module-LWE (AMLWE) problem, whose asymmetry is fully exploited to obtain shorter public keys and ciphertexts. Targeting 128-bit quantum security, the public key (resp., ciphertext) of our KEM has only 896 bytes (resp., 992 bytes). Our signature scheme bears the most resemblance to, and improves upon, the Crystals-Dilithium scheme (ToCHES 2018). By making full use of the underlying asymmetric module-LWE and module-SIS assumptions and carefully selecting the parameters, we construct an SUF-CMA secure signature scheme with shorter public keys and signatures. For 128-bit quantum security, the public key (resp., signature) of our signature scheme has only 1312 bytes (resp., 2445 bytes). We adapt the best known attacks and their variants to our AMLWE and AMSIS problems and conduct a comprehensive and thorough analysis of several parameter choices (aiming at different security strengths) and their impact on the sizes, security and error probability of lattice-based cryptosystems. Our analysis demonstrates that the AMLWE and AMSIS problems admit more flexible and size-efficient choices of parameters than their respective standard versions.
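Schematically (a Lindner-Peikert-style LWE encryption in our own notation; the actual AMLWE-based KEM additionally involves module structure, compression and other details), the places where errors enter are

$$\mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e}, \qquad \mathbf{c}_1 = \mathbf{A}^{T}\mathbf{r} + \mathbf{e}_1, \qquad c_2 = \mathbf{b}^{T}\mathbf{r} + e_2 + \lceil q/2 \rfloor\, m,$$

and decryption recovers $m$ from $c_2 - \mathbf{s}^{T}\mathbf{c}_1 = \mathbf{e}^{T}\mathbf{r} + e_2 - \mathbf{s}^{T}\mathbf{e}_1 + \lceil q/2 \rfloor\, m$. Since the different error terms affect the public key size, the ciphertext size, and the failure probability in different ways, an asymmetric assumption that lets their distributions be chosen independently leaves more room for size optimization than drawing them all from one distribution.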
2019
ASIACRYPT
Valiant’s Universal Circuits Revisited: An Overall Improvement and a Lower Bound
A universal circuit (UC) is a general-purpose circuit that can simulate arbitrary circuits (up to a certain size $n$). At STOC 1976, Valiant presented a graph-theoretic approach to the construction of UCs, where a UC is represented by an edge universal graph (EUG) and is recursively constructed using a dedicated graph object (referred to as a supernode). As a main end result, Valiant constructed a 4-way supernode of size 19 and an EUG of size $4.75n\log n$ (omitting smaller terms), which has remained the most size-efficient to this day (after more than four decades). Motivated by the emerging applications of UCs in various privacy-preserving computation scenarios, we revisit Valiant's universal circuits and propose a 4-way supernode of size 18 and an EUG of size $4.5n\log n$. As confirmed by our implementations, we reduce the size of universal circuits (and the number of AND gates) by more than 5% in general, and thus improve the efficiency of UC-based cryptographic applications accordingly. Our approach to the design of optimal supernodes is computer-aided (rather than by hand as in previous works), which might be of independent interest. As a complement, we give lower bounds on the size of EUGs and UCs in Valiant's framework, which significantly improve upon the generic lower bound on UC size and therefore reduce the gap between theory and practice of universal circuits.
2019
ASIACRYPT
Collision Resistant Hashing from Sub-exponential Learning Parity with Noise
The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications, such as authentication protocols, pseudorandom generators/functions, and even asymmetric tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. Inspired by the recent work of Applebaum et al. (ITCS 2017), we introduce a general construction of CRH from LPN for various parameter choices. We show that, just to mention a few notable ones, under any of the following hardness assumptions (for the two most common variants of LPN):
1. constant-noise LPN is $2^{n^{0.5+\varepsilon}}$-hard for any constant $\varepsilon>0$;
2. constant-noise LPN is $2^{\Omega(n/\log n)}$-hard given $q=\mathsf{poly}(n)$ samples;
3. low-noise LPN (of noise rate $1/\sqrt{n}$) is $2^{\Omega(\sqrt{n}/\log n)}$-hard given $q=\mathsf{poly}(n)$ samples;
there exist CRH functions with constant (or even poly-logarithmic) shrinkage, which can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND and XOR gates. Our technical route LPN $\rightarrow$ bSVP $\rightarrow$ CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE $\rightarrow$ SIS $\rightarrow$ CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017) and enables CRH in a manner similar to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem. Furthermore, under additional (arguably minimal) idealized assumptions, such as small-domain random functions or random permutations (which trivially imply collision resistance), we still salvage a simple and elegant collision-resistance-preserving domain extender combining the best of the two worlds, namely maximized (depth-one) parallelizability and polynomial shrinkage. In particular, assuming $2^{n^{0.5+\varepsilon}}$-hard constant-noise LPN or $2^{n^{0.25+\varepsilon}}$-hard low-noise LPN, we obtain a collision resistant hash function that evaluates in parallel only a single layer of small-domain random functions (or random permutations) and shrinks polynomially.
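The SIS analogy can be made concrete as follows (a sketch in our own notation; the paper's actual input encoding is more refined): hash with $h_{\mathbf{A}}(x) = \mathbf{A}\cdot\mathrm{enc}(x) \bmod 2$, where $\mathrm{enc}$ maps inputs injectively to low-weight binary vectors. A collision $x \neq x'$ then yields

$$\mathbf{z} \;=\; \mathrm{enc}(x) \oplus \mathrm{enc}(x') \;\neq\; \mathbf{0}, \qquad \mathbf{A}\,\mathbf{z} \;=\; \mathbf{0} \pmod 2, \qquad \|\mathbf{z}\|_0 \ \text{small},$$

i.e., a short vector in the kernel of $\mathbf{A}$ over $\mathbb{F}_2$, which is exactly a bSVP solution, mirroring how collisions in Ajtai's function give SIS solutions.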
2017
ASIACRYPT
2016
CRYPTO
2016
CRYPTO
2016
PKC
2015
PKC
2015
EUROCRYPT
2014
ASIACRYPT
1999
ASIACRYPT

Program Committees

Asiacrypt 2022
Asiacrypt 2017