IACR News
Here you can see all recent updates to the IACR webpage.
10 November 2020
Kyoohyung Han, Jinhyuck Jeong, Jung Hoon Sohn, Yongha Son
ePrint Report
Recently, privacy-preserving logistic regression techniques on data distributed among several data owners have drawn attention for their applicability in federated learning environments. Many of them are built upon cryptographic primitives such as secure multiparty computation (MPC) and homomorphic encryption (HE) to protect the privacy of the data. Secure multiparty computation provides fast and secure unit operations for arithmetic and bit operations, but it often does not scale well to large data due to its high computation cost and communication overhead. Recent works show that many HE primitives provide their operations in a batched manner, so the technique can be an appropriate choice in a big-data environment. However, computationally expensive operations such as ciphertext slot rotation or refreshing (so-called bootstrapping), together with large public key sizes, are hurdles that hamper widespread adoption of the technique in industry-level environments.
In this paper, we provide a new hybrid approach to privacy-preserving logistic regression training and inference, which utilizes both MPC and HE techniques to provide an efficient and scalable solution while minimizing the need for key management and the complexity of computation in the encrypted state. Utilizing the batching properties of HE, we present a method to securely compute multiplications of vectors and matrices using a single HE multiplication, compared to the naive approach, which requires a number of multiplications linear in the size of the input data. We also show how we use a 2-party additive secret sharing scheme to efficiently control the noise of expensive HE operations such as bootstrapping.
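To make the secret-sharing ingredient concrete, here is a minimal 2-party additive sharing sketch of our own (plain Python over the ring $Z_{2^{64}}$; the HE side and the paper's actual noise-control protocol are not shown):

import secrets

K = 64
MOD = 1 << K   # shares live in the ring Z_{2^64}

def share(x):
    """Split x into two additive shares: x = s0 + s1 (mod 2^K)."""
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

# Each party can, e.g., re-randomize or hand its share to a different engine
# (such as an HE evaluator) without ever revealing x on its own.
x = 123456
s0, s1 = share(x)
assert reconstruct(s0, s1) == x
assert reconstruct(*share(0)) == 0   # shares of zero can serve as fresh masks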
Amit Agarwal, James Bartusek, Vipul Goyal, Dakshita Khurana, Giulio Malavolta
ePrint Report
We initiate the study of multi-party computation for classical functionalities (in the plain model) with security against malicious polynomial-time quantum adversaries. We observe that existing techniques readily give a polynomial-round protocol, but our main result is a construction of constant-round post-quantum multi-party computation. We assume mildly super-polynomial quantum hardness of learning with errors (LWE), and polynomial quantum hardness of an LWE-based circular security assumption. Along the way, we develop the following cryptographic primitives that may be of independent interest:
- A spooky encryption scheme for relations computable by quantum circuits, from the quantum hardness of an LWE-based circular security assumption. This yields the first quantum multi-key fully-homomorphic encryption scheme with classical keys.
- Constant-round zero-knowledge secure against multiple parallel quantum verifiers from spooky encryption for relations computable by quantum circuits. To enable this, we develop a new straight-line non-black-box simulation technique against parallel verifiers that does not clone the adversary's state. This forms the heart of our technical contribution and may also be relevant to the classical setting.
- A constant-round post-quantum non-malleable commitment scheme, from the mildly super-polynomial quantum hardness of LWE.
Zhihao Zheng, Jiachen Shen, Zhenfu Cao
ePrint Report
With location-based services (LBS) booming, the volume of spatial data inevitably explodes. In order to reduce local storage and computational overhead, users tend to outsource data and issue queries to the cloud. However, sensitive data or queries may be compromised if the cloud server has access to the raw data and plaintext tokens. To cope with this problem, searchable encryption for geometric ranges is applied. Geometric range search has wide applications in many scenarios, especially circular range search.
In this paper, a practical and secure circular range search scheme (PSCS) is proposed to support searching for spatial data within a circular range. With our scheme, a semi-honest cloud server returns the data in a given circular range correctly without compromising index privacy or query privacy. We propose a polynomial split algorithm which decomposes the inner product calculation neatly. We then formally define the security of PSCS and prove that it is secure against same-closeness-pattern chosen-plaintext attacks (CLS-CPA). In addition, we demonstrate its efficiency and accuracy through analysis and experiments in comparison with existing schemes.
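As background for the inner-product decomposition (a standard encoding for circular range predicates, not necessarily the exact one used in PSCS): the membership test $(x-a)^2 + (y-b)^2 \leq r^2$ for a data point $(x,y)$ and a circle with center $(a,b)$ and radius $r$ can be rewritten as the sign of a single inner product, $\langle (x^2+y^2,\ x,\ y,\ 1),\ (1,\ -2a,\ -2b,\ a^2+b^2-r^2) \rangle \leq 0$, so the index only needs to encode point-dependent vectors while the query token only encodes circle-dependent vectors.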
Vincenzo Iovino, Serge Vaudenay, Martin Vuagnoux
ePrint Report
Digital contact tracing apps make it possible to alert people who have been in contact with people who may be contagious. The Apple/Google Exposure Notification (EN) system is based on Bluetooth proximity estimation. It has been adopted by many countries around the world. However, many possible attacks are known. The goal of some of them is to inject a false alert on someone else's phone. This way, an adversary can eliminate a competitor in a sport event or a business in general. Political parties can also prevent people from voting.
In this report, we review several methods to inject false alerts. One of them requires corrupting the clock of the victim's smartphone. For that, we build a time-traveling machine that lets us remotely set the clock on a smartphone and experiment with our attack. We show how easily this can be done. We successfully tested several smartphones with either the Swiss or the Italian app (SwissCovid or Immuni).
Elette Boyle, Nishanth Chandran, Niv Gilboa, Divya Gupta, Yuval Ishai, Nishant Kumar, Mayank Rathee
ePrint Report
Boyle et al. (TCC 2019) proposed a new approach for secure computation in the preprocessing model building on function secret sharing (FSS), where a gate $g$ is evaluated using an FSS scheme for the related offset family $g_r(x)=g(x+r)$. They further presented efficient FSS schemes based on any pseudorandom generator (PRG) for the offset families of several useful gates $g$ that arise in "mixed-mode'' secure computation. These include gates for zero test, integer comparison, ReLU, and spline functions. The FSS-based approach offers significant savings in online communication and round complexity compared to alternative techniques based on garbled circuits or secret sharing.
In this work, we improve and extend the previous results of Boyle et al. by making the following three kinds of contributions:
- Improved Key Size: The preprocessing and storage costs of the FSS-based approach directly depend on the FSS key size. We improve the key size of previous constructions through two steps. First, we obtain roughly 4x reduction in key size for Distributed Comparison Function (DCF), i.e., FSS for the family of functions $f^<_{a,b}(x)$ that output $b$ if $x < a$ and $0$ otherwise. DCF serves as a central building block in the constructions of Boyle et al. Second, we improve the number of DCF instances required for realizing useful gates $g$. For example, whereas previous FSS schemes for ReLU and $m$-piece spline required 2 and $2m$ DCF instances, respectively, ours require only a single instance of DCF in both cases. This improves the FSS key size by 6-22x for commonly used gates such as ReLU and sigmoid.
- New Gates: We present the first PRG-based FSS schemes for arithmetic and logical shift gates, as well as for bit-decomposition where both the input and outputs are shared over $Z_N$ for $N = 2^n$. These gates are crucial for many applications related to fixed-point arithmetic and machine learning.
- A Barrier: The above results enable a 2-round PRG-based secure evaluation of "multiply-then-truncate,'' a central operation in fixed-point arithmetic, by sequentially invoking FSS schemes for multiplication and shift. We identify a barrier to obtaining a 1-round implementation via a single FSS scheme, showing that this would require settling a major open problem in the area of FSS: namely, a PRG-based FSS for the class of bit-conjunction functions.
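To illustrate the offset-family paradigm described above, here is a toy sketch of our own; the "FSS" below is an information-theoretic truth-table sharing over a tiny domain, used only to show how a masked input and two FSS keys combine into shares of $g(x)$, and is not the paper's compact PRG-based construction:

import random

N = 256  # toy ring Z_N; real constructions use Z_{2^n} with compact, PRG-based keys

def relu(x):
    # interpret x in [-N/2, N/2), clamp negatives to 0, map back into Z_N
    signed = x - N if x >= N // 2 else x
    return max(signed, 0) % N

def gen_offset_fss(g):
    """Dealer (preprocessing): pick a random mask r and additively share the
    truth table of the shifted gate h(y) = g(y - r)."""
    r = random.randrange(N)
    table = [g((y - r) % N) for y in range(N)]
    k0 = [random.randrange(N) for _ in range(N)]
    k1 = [(t - s) % N for t, s in zip(table, k0)]
    return r, k0, k1

def eval_share(key, y):
    return key[y]

# Online phase: parties learn the masked value y = x + r (here x is in the clear
# for brevity; in a protocol it would itself be secret-shared), then evaluate
# their keys locally. The two outputs are additive shares of g(x).
r, k0, k1 = gen_offset_fss(relu)
x = 200                      # encodes -56, so ReLU(x) should be 0
y = (x + r) % N
assert (eval_share(k0, y) + eval_share(k1, y)) % N == relu(x)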
Jiang Zhang, Yu Yu, Dengguo Feng, Shuqin Fan, Zhenfeng Zhang
ePrint Report
In this paper, we initiate the study of interactive proofs for the promise problem $\mathsf{QBBC}$ (i.e., quantum black-box computations), which consists of a quantum device $\mathcal{D}$
acting on $(n+m)$ qubits, a classical device $\mathcal{R}_F$ deciding the input-output relation of some unknown function $F:\{0,1\}^n \rightarrow \{0,1\}^m$, and two reals $0< b < a \leq 1$.
Let $p(\mathcal{D},x) = \| |x,F(x)\rangle \langle x,F(x)| \mathcal{D}(|x\rangle |0^m\rangle)\|^2$ be the probability of obtaining $(x,F(x))$ as a result of a standard measurement of the $(n+m)$-qubit
state returned by $\mathcal{D}$ on input $|x\rangle |0^m\rangle$. The task of the problem $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ is to distinguish between two cases for all $x\in \{0,1\}^n$:
$\bullet$ YES Instance: $p(\mathcal{D},x) \geq a$;
$\bullet$ NO Instance: $p(\mathcal{D},x) \leq b$.
First, we show that for any constant $15/16< a \leq 1$, the problem $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ has an efficient two-round interactive proof $(\mathcal{P}^{\mathcal{D}},\mathcal{V}^{\mathcal{R}_F})$ which basically allows a verifier $\mathcal{V}$, given a classical black-box device $\mathcal{R}_F$, to efficiently verify if the prover $\mathcal{P}$ has a quantum black-box device $\mathcal{D}$ (correctly) computing $F$. This proof system achieves completeness $\frac{1 + a}{2}$ and soundness error $\frac{31}{32} + \frac{\epsilon}{2} + \mathsf{negl}(n)$ for any constant $\max(0,b-\frac{15}{16})<\epsilon<a - \frac{15}{16}$, given that the verifier $\mathcal{V}$ has some (limited) quantum capabilities. In terms of query complexities, the prover $\mathcal{P}^\mathcal{D}$ will make at most two quantum queries to $\mathcal{D}$, while the verifier $\mathcal{V}^{\mathcal{R}_F}$ only makes a single classical query to $\mathcal{R}_F$. This result is based on an information versus disturbance lemma, which may be of independent interest.
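As a concrete sanity check of the first result's parameter ranges (a worked example of ours, derived only from the bounds stated above): take $a = 1$ and $b = \frac{31}{32}$. The protocol then requires a constant $\epsilon$ with $\max(0, b - \frac{15}{16}) = \frac{1}{32} < \epsilon < a - \frac{15}{16} = \frac{1}{16}$; choosing $\epsilon$ just above $\frac{1}{32}$ yields completeness $\frac{1+a}{2} = 1$ and soundness error roughly $\frac{31}{32} + \frac{1}{64} = \frac{63}{64}$, i.e., a small but constant completeness-soundness gap.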
Second, under the learning with errors (LWE) assumption in (Regev 2005), we show that the problem $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ can even have an efficient interactive proof $(\mathcal{P}^{\mathcal{D}},\mathcal{V}^{\mathcal{R}_F})$ with a fully classical verifier $\mathcal{V}$ that does not have any quantum capability. This proof system achieves completeness $\frac{1 + a}{2}-\mathsf{negl}(n)$ and soundness error $\frac{1+b}{2} + \mathsf{negl}(n)$, and thus applies to any $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ with constants $0< b<a \leq 1$. Moreover, this proof system has the same query complexities as above. This result is based on the techniques introduced in (Brakerski et al. 2018) and (Mahadev 2018).
As an application, we show that the problem of distinguishing the random oracle model (ROM) and the quantum random oracle model (QROM) in cryptography can be naturally seen as a $\mathsf{QBBC}$ problem. By applying the above result, we immediately obtain a separation between ROM and QROM under the standard LWE assumption.
Jean-Philippe Aumasson, Adrian Hamelink, Omer Shlomovits
ePrint Report
Threshold signing research has progressed considerably over the last three years, especially for ECDSA, which is less MPC-friendly than Schnorr-based signatures such as EdDSA. This progress was mainly driven by blockchain applications, and boosted by breakthrough results published concurrently by Lindell and by Gennaro & Goldfeder. Since then, several research teams have published threshold signature schemes with different features, design trade-offs, building blocks, and proof techniques.
Furthermore, threshold signing is now deployed within major organizations to protect large amounts of digital assets. Researchers and practitioners therefore need a clear view of the research state, of the relative merits of the protocols available, and of the open problems, in particular those that would address "real-world" challenges.
This survey therefore proposes to (1) describe threshold signing and its building blocks in a general, unified way, based on the extended arithmetic black-box formalism (ABB+); (2) review the state-of-the-art threshold signing protocols, highlighting their unique properties and comparing them in terms of security assurance and performance, based on criteria relevant in practice; (3) review the main open-source implementations available.
Jan Vacek, Jan Václavek
ePrint Report
One of the NIST Post-Quantum Cryptography Standardization Process Round 2 candidates is the NewHope cryptosystem, a suite of two RLWE-based key encapsulation mechanisms. Recently, four key reuse attacks were proposed against NewHope by Bauer et al., Qin et al., Bhasin et al. and Okada et al. In these attacks, the adversary has access to a key mismatch oracle which tells her whether a given ciphertext decrypts to a given message under the targeted secret key. Previous attacks either require more than 26,000 queries to the oracle or never recover the whole secret key. In this paper, we present a new attack against the NewHope cryptosystem in these key reuse situations. Our attack recovers the whole secret key with probability 100% and requires fewer than 3,200 queries on average. Our work improves on state-of-the-art results for NewHope and makes the comparison with other candidates more relevant.
Sanjit Chatterjee, Tapas Pandit, Shravan Kumar Parshuram Puria, Akash Shah
ePrint Report
This work initiates a formal study of signcryption in the quantum setting. We start by formulating suitable security definitions for the confidentiality and authenticity of signcryption, in both the insider and outsider models, against quantum adversaries. We investigate the quantum security of generic constructions of signcryption schemes based on three paradigms, viz., encrypt-then-sign (EtS), sign-then-encrypt (StE) and commit-then-encrypt-and-sign (CtE&S). In the insider model, we show that the quantum variants of the classical results hold in the quantum setting, with an exception in the StE paradigm. In the outsider model, however, we need to consider an intermediate setting in which the adversary is given quantum access to the unsigncryption oracle but classical access to the signcryption oracle. In the two-user outsider model, as in the classical setting, we show that post-quantum CPA security of the base encryption scheme is amplified in the EtS paradigm if the base signature scheme satisfies a stronger definition. We prove an analogous result in the StE paradigm. Interestingly, in the multi-user setting, our results strengthen the known classical results. Furthermore, our results for the EtS and StE paradigms in the two-user outsider model also extend to the setting of authenticated encryption. In the course of this analysis, we point out a flaw in the proof of quantum security of authenticated encryption in the EtS paradigm given in a recent paper. We briefly discuss the difficulties in analyzing the full quantum security of signcryption in the outsider model. Finally, we briefly discuss concrete instantiations in various paradigms, utilising available candidates for quantum-secure encryption and signature schemes.
Zhiqiang Wu, Kenli Li, Jin Wang, Naixue Xiong
ePrint Report
To expand capacity, many resource-constrained industrial devices encrypt and outsource their private data to public clouds, employing a searchable encryption (SE) scheme that provides efficient search directly over encrypted data. Current tree-based SE schemes can do this and support sublinear encrypted Boolean queries. However, they all suffer from a log n overhead in the search procedure. To address this challenge, in this paper, we propose a new tree structure called the four-branch tree (FB-tree). Our key design is to build tree nodes with four branches, which helps a search reach the destination nodes with fast jumps. Based on the index tree, we set up two systems for different requirements. The first system, for efficiency-first settings, achieves nearly optimal search complexity and adaptive security. The second, constructed via an oblivious RAM for security-first environments, still achieves worst-case sublinear search complexity. FB-tree performance is extensively evaluated with several real datasets. The experimental data demonstrate that the FB-tree-based systems outperform state-of-the-art solutions in terms of efficiency and scalability when Boolean queries are issued.
Pratish Datta, Ilan Komargodski, Brent Waters
ePrint Report
We construct the first decentralized multi-authority attribute-based encryption
(MA-ABE) scheme for a non-trivial class of access policies whose security is
based (in the random oracle model) solely on the Learning With Errors (LWE)
assumption. The supported access policies are ones described by DNF
formulas. All previous constructions of MA-ABE schemes supporting any
non-trivial class of access policies were proven secure (in the random oracle
model) assuming various assumptions on bilinear maps.
In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as a standard ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any DNF formulas over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In terms of efficiency, when instantiating the scheme with a global bound $s$ on the size of access policies, the sizes of public keys, secret keys, and ciphertexts all grow with $s$.
Technically, we develop new tools for building ciphertext-policy ABE (CP-ABE) schemes using LWE. Along the way, we construct the first provably secure CP-ABE scheme supporting access policies in $\mathsf{NC}^1$ that avoids the generic universal-circuit-based key-policy to ciphertext-policy transformation. In particular, our construction relies on linear secret sharing schemes with new properties and in some sense is more similar to CP-ABE schemes that rely on bilinear maps. While our CP-ABE construction is not more efficient than existing ones, it is conceptually intriguing and further we show how to extend it to get the MA-ABE scheme described above.
Cyril Bouvier, Laurent Imbert
ePrint Report
In this paper, we present new algorithms for the field arithmetic of supersingular isogeny Diffie-Hellman, one of the fifteen remaining candidates in the NIST post-quantum standardization process. Our approach uses a polynomial representation of the field elements together with mechanisms to keep the coefficients within bounds during the arithmetic operations. We present timings and comparisons for SIKEp503 and suggest a novel 736-bit prime that offers a $1.17\times$ speedup compared to SIKEp751 for a similar level of security.
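For intuition about the kind of representation involved, the following is a generic unsaturated-limb sketch of ours, assuming a radix-$2^{51}$ polynomial representation; it is not the paper's algorithm and omits the reduction modulo the SIKE prime:

W = 51                     # limb width; coefficients are allowed to grow past 2^W temporarily
RADIX = 1 << W

def to_limbs(x, n):
    """Represent the integer x as n coefficients of a polynomial evaluated at RADIX."""
    return [(x >> (W * i)) & (RADIX - 1) for i in range(n)]

def from_limbs(limbs):
    return sum(c << (W * i) for i, c in enumerate(limbs))

def poly_mul(a, b):
    """Schoolbook polynomial multiplication; coefficients grow and must be re-bounded."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def carry_propagate(c):
    """Keep the coefficients within bounds by pushing carries into the next limb."""
    out, carry = [], 0
    for coeff in c:
        coeff += carry
        out.append(coeff & (RADIX - 1))
        carry = coeff >> W
    while carry:                      # the value may need extra limbs before modular reduction
        out.append(carry & (RADIX - 1))
        carry >>= W
    return out

x, y = 123456789, 987654321
assert from_limbs(carry_propagate(poly_mul(to_limbs(x, 4), to_limbs(y, 4)))) == x * y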
Nai-Hui Chia, Kai-Min Chung, Takashi Yamakawa
ePrint Report
In a recent seminal work, Bitansky and Shmueli (STOC '20) gave the first construction of a constant round zero-knowledge argument for NP secure against quantum attacks.
However, their construction has several drawbacks compared to the classical counterparts.
Specifically, their construction only achieves computational soundness, requires strong assumptions of quantum hardness of learning with errors (QLWE assumption) and the existence of quantum fully homomorphic encryption (QFHE), and relies on non-black-box simulation.
In this paper, we resolve these issues at the cost of weakening the notion of zero-knowledge to what is called $\epsilon$-zero-knowledge. Concretely, we construct the following protocols:
- We construct a constant round interactive proof for NP that satisfies statistical soundness and black-box $\epsilon$-zero-knowledge against quantum attacks assuming the existence of collapsing hash functions, which is a quantum counterpart of collision-resistant hash functions. Interestingly, this construction is just an adapted version of the classical protocol by Goldreich and Kahan (JoC '96) though the proof of $\epsilon$-zero-knowledge property against quantum adversaries requires novel ideas.
- We construct a constant round interactive argument for NP that satisfies computational soundness and black-box $\epsilon$-zero-knowledge against quantum attacks only assuming the existence of post-quantum one-way functions.
At the heart of our results is a new quantum rewinding technique that enables a simulator to extract a committed message of a malicious verifier while simulating verifier's internal state in an appropriate sense.
Il-Ju Kim, Tae-Ho Lee, Jaeseung Han, Bo-Yeon Sim, Dong-Guk Han
ePrint Report
Dilithium is a lattice-based digital signature scheme and one of the finalist candidates in NIST's standardization process for post-quantum cryptography. In this paper, we propose the first side-channel attack on the signature generation process of Dilithium. We use a single trace of the NTT computed during signature generation for machine-learning-based profiling attacks. In addition, it is possible to attack masked Dilithium using sparse multiplication. Experiments show that the proposed method exposes all key values with 100% success from a single trace, regardless of the optimization level.
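For readers unfamiliar with the targeted operation, here is a minimal schoolbook NTT over a toy prime (our own illustration; Dilithium itself uses an optimized negacyclic NTT over $Z_q[X]/(X^{256}+1)$ with $q = 8380417$, not these parameters):

Q = 17        # toy NTT-friendly prime: 4 divides Q - 1 = 16
N = 4         # transform length
W = 4         # primitive N-th root of unity mod Q: 4^2 = 16 = -1, 4^4 = 1 (mod 17)

def ntt(a):
    """Schoolbook forward NTT: a_hat[i] = sum_j a[j] * W^(i*j) mod Q."""
    return [sum(a[j] * pow(W, i * j, Q) for j in range(N)) % Q for i in range(N)]

def intt(a_hat):
    """Inverse transform, using W^-1 and the scaling factor N^-1 mod Q."""
    w_inv = pow(W, Q - 2, Q)
    n_inv = pow(N, Q - 2, Q)
    return [n_inv * sum(a_hat[j] * pow(w_inv, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

a = [3, 1, 4, 1]
assert intt(ntt(a)) == a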
Tapas Pal, Ratna Dutta
ePrint Report
A multi-identity pure fully homomorphic encryption (MIFHE) scheme enables a server to perform arbitrary computation on ciphertexts that are encrypted under different identities. In the case of multi-attribute pure FHE (MAFHE), the ciphertexts are associated with different attributes. Clear and McGoldrick (CANS 2014) gave the first chosen-plaintext attack secure MIFHE and MAFHE based on indistinguishability obfuscation. In this study, we focus on building MIFHE and MAFHE which are secure under the type 1 chosen-ciphertext attack (CCA1) security model. In particular, using witness pseudorandom functions (Zhandry, TCC 2016) and multi-key pure FHE or MFHE (Mukherjee and Wichs, EUROCRYPT 2016) we propose the following constructions:
- A CCA secure identity-based encryption (IBE) scheme that enjoys optimal-size ciphertexts, which we extend to a CCA1 secure MIFHE scheme.
- A CCA secure attribute-based encryption (ABE) scheme having optimal-size ciphertexts, which we transform into a CCA1 secure MAFHE scheme.
By optimal size, we mean that the bit-length of a ciphertext is the bit-length of the message plus a constant multiple of the security parameter. Known constructions of multi-identity (or multi-attribute) FHE are either leveled, that is, they support only bounded-depth circuit evaluations, or are secure only in the weaker CPA security model. With our new approach, we achieve both CCA1 security and evaluation of arbitrary-depth circuits for multi-identity (multi-attribute) FHE schemes.
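Written out (our shorthand, not the paper's notation), optimal size means $|\mathsf{ct}| = |m| + c \cdot \lambda$ for a fixed constant $c$, where $\lambda$ denotes the security parameter and $|m|$ the bit-length of the message.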
Jia-Chng Loh, Geong-Sen Poh, Jason H. M. Ying, Jia Xu, Hoon Wei Lim, Jonathan Pan, Weiyang Wong
ePrint Report
Prior works in privacy-preserving biometric authentication mostly focus on the following setting. An organization collects users' biometric data during registration and later authorizes access to the organization's services after successful authentication. Each organization has to maintain its own biometric database, and each user has to release her biometric information to multiple organizations. Independently, government authorities are making their extensive, nation-wide biometric databases available to agencies and organizations, in countries that allow such access. This enables organizations to provide authentication without maintaining biometric databases, while users only need to register once. However, privacy remains a concern. We propose a privacy-preserving system, PBio, for this new setting. The core component of PBio is a new protocol comprising distance-recoverable encryption and secure distance computation. We introduce an encrypt-then-split mechanism such that each of the organizations holds only an encrypted partial biometric database. This minimizes the risk of template reconstruction in the event that an encrypted partial database is recovered due to a leak of the encryption key. PBio remains secure even when the organizations collude. A by-product benefit is that the use of encrypted partial templates allows quicker rejection of non-matching instances. We implemented a cloud-based prototype with desktop and Android applications. Our experiment results based on real remote users show that PBio is highly efficient: a round-trip authentication takes approximately 74ms (desktop) and 626ms (Android), and the computation and communication overhead introduced by our new cryptographic protocol is only about 10ms (desktop) and 54ms (Android).
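The split-template idea can be illustrated with a plain, unencrypted toy computation of our own; it only shows why partial templates allow early rejection and is not the PBio protocol, whose distance-recoverable encryption layer is omitted here:

def split(template, parts):
    """Partition a feature vector's coordinates across `parts` organizations."""
    return [template[i::parts] for i in range(parts)]

def partial_sq_dist(part_a, part_b):
    return sum((a - b) ** 2 for a, b in zip(part_a, part_b))

def authenticate(stored_parts, probe_parts, threshold):
    """Sum the per-organization partial squared distances. Because every partial
    distance is non-negative, a running total exceeding the threshold already
    implies rejection (the 'quicker rejection' effect)."""
    total = 0
    for s, p in zip(stored_parts, probe_parts):
        total += partial_sq_dist(s, p)
        if total > threshold:
            return False
    return total <= threshold

stored = split([10, 22, 31, 44, 57, 63], parts=2)
probe = split([11, 22, 30, 44, 58, 61], parts=2)
print(authenticate(stored, probe, threshold=16))   # True: squared distance is 7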
Borja Gómez
ePrint Report
In this paper the author introduces methods that represent elements of a Finite Field $F_q$ as matrices that linearize certain operations, such as the product of elements in $F_q$. Since the Central Polynomial Map $\mathcal{F}(X)$ coming from the HFE scheme involves multiplication of elements in a Finite Field $F_q$, using a \textit{novel method} based on Linear Algebra the Quadratic Forms resulting from the polynomial map of the Public Key can be computed in a few steps, and these are bounded by the matrix $R$ that represents the linear action of the polynomial remainder modulo $f(t)$, the irreducible polynomial that identifies $F_q$. When the irreducible polynomial $f(t)$ is of the form $t^a+t^b+1$ \textit{modulo $2$}, the matrix $R$ is computed deterministically in a few steps and all the Quadratic Forms are derived from this matrix. Our analysis shows that the central Polynomial Map $\mathcal{F}(X)$ is computed extremely fast; for example, in the CAS \textit{Mathematica}, taking an HFE Polynomial, the Quadratic Forms are computed in $\approx 1.4$ seconds for the case $n=128$. This suggests the more general claim that Quadratic Forms obtained from BigField schemes depend entirely on the selected irreducible polynomial $f(t)$, as the matrix $R$ is conditioned by the structure of this polynomial.
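One natural reading of this matrix representation is via the companion matrix of $f(t)$ over GF(2); the following is a small sketch of ours along those lines (the paper's exact construction of $R$ may differ):

import numpy as np

def companion_matrix_gf2(f):
    """Matrix of the 'multiply by t' map on GF(2)[t]/(f(t)), in the basis 1, t, ..., t^(n-1).
    f is the coefficient list of the monic irreducible f(t), lowest degree first, length n+1."""
    n = len(f) - 1
    R = np.zeros((n, n), dtype=np.uint8)
    for i in range(1, n):
        R[i, i - 1] = 1          # t * t^(i-1) = t^i
    for i in range(n):
        R[i, n - 1] = f[i]       # t * t^(n-1) = t^n = f_0 + f_1 t + ... (mod 2, f monic)
    return R

def mul_matrix(c, R):
    """Matrix of 'multiply by c(t)': substitute R into the polynomial c."""
    n = R.shape[0]
    M = np.zeros((n, n), dtype=np.uint8)
    P = np.eye(n, dtype=np.uint8)
    for ci in c:
        if ci:
            M = (M + P) % 2
        P = P.dot(R) % 2
    return M

# GF(8) = GF(2)[t]/(t^3 + t + 1): multiplying t by t^2 gives t^3 = t + 1.
R = companion_matrix_gf2([1, 1, 0, 1])
t, t_sq = np.array([0, 1, 0], dtype=np.uint8), np.array([0, 0, 1], dtype=np.uint8)
print(mul_matrix(t, R).dot(t_sq) % 2)   # [1 1 0], i.e. 1 + t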
Aaqib Bashir Dar, Asif Iqbal Baba, Auqib Hamid Lone, Roohie Naaz, Fan Wu
ePrint Report
Access control, or authorization, refers to confining the actions an entity is permitted to perform. Blockchain-driven access control mechanisms have gained considerable attention since applications for blockchain were found beyond cryptocurrencies. However, there are no systematic efforts to analyze the existing empirical evidence. To this end, we aim to synthesize the literature to understand the state of the art in blockchain-driven access control mechanisms with respect to underlying platforms, utilized blockchain properties, the nature of the models, and associated testbeds and tools. We conducted the review in a systematic way. Meta-analysis and thematic synthesis were performed on the findings and results from the relevant primary studies in order to answer the research questions in perspective. We identified 76 relevant primary studies passing the quality assessment. A number of problems, such as single point of failure, security, and privacy, were targeted by the relevant primary studies. The meta-analysis suggests the use of different blockchain platforms, several application domains, and different utilized blockchain properties. In this paper, we present a state-of-the-art review of blockchain-driven access control systems. Finally, we present a taxonomy of blockchain-driven access control systems to better understand the immense implications this field has for various application domains.
Alex Lombardi, Vinod Vaikuntanathan
ePrint Report
A hash function family $\mathcal{H}$ is correlation-intractable for a $t$-input relation $\mathcal{R}$ if, given a random function $h$ chosen from $\mathcal{H}$, it is hard to find $x_1,\ldots,x_t$ such that $\mathcal{R}(x_1,\ldots,x_t,h(x_1),\ldots,h(x_t))$ is true. Recent works have constructed correlation-intractable hash families for single-input relations from standard cryptographic assumptions. However, the case of multi-input relations (even for $t=2$) is wide open: there are two known constructions, the first of which relies on a very strong ``brute-force-is-best'' type of hardness assumption (Holmgren and Lombardi, FOCS 2018); and the second only achieves the much weaker notion of output intractability (Zhandry, CRYPTO 2016).
Our main result is the construction of several multi-input correlation intractable hash families for large classes of interesting input-dependent relations from either the learning with errors (LWE) assumption or from indistinguishability obfuscation.
Our constructions follow from a simple and modular approach to constructing correlation-intractable hash functions using shift-hiding shiftable functions (Peikert-Shiehian, PKC 2018). This approach also gives an alternative framework (as compared to Peikert-Shiehian, CRYPTO 2019) for achieving single-input correlation intractability (and NIZKs for NP) based on LWE.
Bas Westerbaan
ePrint Report
We show that lazy Barrett reduction when computing the inverse number-theoretic transform (NTT) is optimal.
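As background on the operation in question, here is a generic sketch of ours (not the paper's optimized reduction schedule), illustrating Barrett reduction and what "lazy" means:

Q = 7681                    # a small NTT-friendly prime, used here only as an example
K = 26                      # precision of the Barrett constant; valid for inputs x < 2^K
M = (1 << K) // Q           # floor(2^K / Q), precomputed once

def barrett_lazy(x):
    """Return r with r = x (mod Q) and 0 <= r < 2*Q, for any 0 <= x < 2^K.
    'Lazy' means the final conditional subtraction is deferred, so values up to
    2*Q may flow into the next butterfly layer of an inverse NTT."""
    t = (x * M) >> K
    return x - t * Q

def barrett_full(x):
    r = barrett_lazy(x)
    return r - Q if r >= Q else r

for x in range(0, 1 << K, 997):          # spot-check correctness and the lazy bound
    r = barrett_lazy(x)
    assert r % Q == x % Q and 0 <= r < 2 * Q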