International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 November 2020

Fengrong Zhang, Enes Pasalic, René Rodríguez, Yongzhuang Wei
ePrint Report
A special class of linear codes, called minimal codes, has important applications in secret sharing and secure two-party computation; it is characterized by the property that no codeword is covered by another codeword. Denoting by $w_{min}$ and $w_{max}$ the minimal and maximal weight of a binary linear code respectively, a sufficient but not necessary condition for achieving minimality is that $w_{min}/w_{max} > 1/2$ (called the Ashikhmin-Barg bound). Minimal codes satisfying $w_{min}/w_{max} \leq 1/2$ are called wide in this article (generally harder to construct), whereas those satisfying $w_{min}/w_{max} > 1/2$ are called narrow. In this article, we first show that the so-called direct sum of Boolean functions of the form $h(x,y) = f(x) + g(y)$ induces narrow minimal codes whenever $g$ is a bent function. Then, we introduce the concept of non-covering permutations (referring to the property of minimality), which is shown to be sufficient for providing many infinite classes of minimal binary linear codes of larger dimension by employing a suitable subspace of derivatives of the bent function $g$. In the second part of this article, we first provide an efficient method (with easily satisfied initial conditions) of generating wide minimal codes. Then, we again consider the use of derivatives (along with the underlying Boolean function given as the direct sum) for the purpose of defining another class of wide minimal codes. To the best of our knowledge, the latter method is the most general framework for designing wide binary linear codes. It uses a (suitable) subspace of derivatives of $h(x,y) = f(x) + g(y)$, where $g$ is a bent function and $f$ satisfies certain minimality requirements. For a fixed suitable function $f$, one can derive a huge class of non-equivalent wide binary linear codes of the same length by varying the permutation $\phi$ when specifying the bent function $g(y^{(1)}, y^{(2)}) = \phi(y^{(2)}) \cdot y^{(1)}$ in the Maiorana-McFarland class.
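The covering and width notions above are easy to state concretely. The following Python sketch (not from the paper; the generator matrix is a hypothetical toy example) enumerates a small binary code, checks minimality via the covering relation, and classifies the code as narrow or wide from $w_{min}/w_{max}$.

```python
from itertools import product

def codewords(G):
    """All codewords of the binary linear code generated by the rows of G."""
    words = set()
    for msg in product([0, 1], repeat=len(G)):
        words.add(tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)))
    return words

def covers(u, v):
    """Codeword u covers v if supp(v) is contained in supp(u)."""
    return all(ui >= vi for ui, vi in zip(u, v))

def is_minimal(words):
    nz = [w for w in words if any(w)]
    return not any(u != v and covers(u, v) for u in nz for v in nz)

def width_class(words):
    wts = [sum(w) for w in words if any(w)]
    # Ashikhmin-Barg: w_min/w_max > 1/2 is sufficient for minimality ("narrow");
    # minimal codes with w_min/w_max <= 1/2 are the "wide" ones.
    return "narrow" if min(wts) / max(wts) > 0.5 else "wide"

# Toy generator matrix, chosen only for illustration.
G = [[1, 0, 1, 1, 0, 1],
     [0, 1, 1, 0, 1, 1],
     [0, 0, 0, 1, 1, 1]]
W = codewords(G)
print("minimal:", is_minimal(W), "| class:", width_class(W))
```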
Chi-Ming Marvin Chung, Vincent Hwang, Matthias J. Kannwischer, Gregor Seiler, Cheng-Jhih Shih, Bo-Yin Yang
ePrint Report
In this paper, we show how multiplication for the polynomial rings used in the NIST PQC finalists Saber and NTRU can be efficiently implemented using the number-theoretic transform (NTT). We obtain superior performance compared to the previous state-of-the-art implementations using Toom–Cook multiplication on both of NIST's primary software optimization targets, AVX2 and Cortex-M4. Interestingly, these two platforms require different approaches: on the Cortex-M4, we use 32-bit NTT-based polynomial multiplication, while on Intel we use two 16-bit NTT-based polynomial multiplications and combine the products using the Chinese Remainder Theorem (CRT). For Saber, the performance gain is particularly pronounced. On Cortex-M4, the Saber NTT-based matrix-vector multiplication is 61% faster than the Toom–Cook multiplication, resulting in a 22% speed-up of Saber encapsulation. For NTRU, the speed-up is less impressive, but NTT-based multiplication still performs better than Toom–Cook for all parameter sets on Cortex-M4. The NTT-based polynomial multiplication for NTRU-HRSS is 10% faster than Toom–Cook, which results in a 6% speed-up for encapsulation. On AVX2, we obtain speed-ups for three out of four NTRU parameter sets. As a further illustration, we also include code for AVX2 and Cortex-M4 for the Chinese Association for Cryptologic Research competition award winner LAC (also a NIST round 2 candidate), which outperforms existing code.
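The CRT combination mentioned above can be sketched in a few lines. The snippet below is a hedged illustration: the primes and the schoolbook multiplication are stand-ins (not the paper's parameters or vectorized NTTs). It multiplies two polynomials over a non-NTT-friendly power-of-two modulus by working modulo two NTT-friendly primes and recombining coefficients with the CRT.

```python
import random

n, q = 256, 2**13                      # toy ring Z_q[x]/(x^n + 1), q not NTT-friendly
p1, p2 = 998244353, 167772161          # two NTT-friendly primes (illustrative choice);
                                       # p1*p2 exceeds the largest product coefficient n*(q-1)^2

def polymul(a, b, mod):
    """Schoolbook product of a and b modulo `mod` (stand-in for an NTT-based multiply)."""
    c = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % mod
    return c

def crt(c1, c2):
    """Combine residues mod p1 and mod p2 into the unique value mod p1*p2."""
    inv = pow(p1, -1, p2)              # p1^{-1} mod p2
    return [(x1 + ((x2 - x1) * inv % p2) * p1) % (p1 * p2) for x1, x2 in zip(c1, c2)]

a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
c = crt(polymul(a, b, p1), polymul(a, b, p2))      # exact integer coefficients
res = [(c[i] - c[i + n]) % q for i in range(n)]    # reduce mod x^n + 1, then mod q
```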
Kyoohyung Han, Jinhyuck Jeong, Jung Hoon Sohn, Yongha Son
ePrint Report
Recently, privacy-preserving logistic regression techniques on data distributed among several data owners have drawn attention in terms of their applicability in federated learning environments. Many of them are built upon cryptographic primitives such as secure multiparty computation (MPC) and homomorphic encryption (HE) to protect the privacy of data. Secure multiparty computation provides fast and secure unit operations for arithmetic and bit operations, but it often does not scale well to large data due to high computation cost and communication overhead. Following recent works, many HE primitives provide their operations in a batched manner, so that the technique can be an appropriate choice in a big-data environment. However, computationally expensive operations such as ciphertext slot rotation or refreshing (so-called bootstrapping), as well as large public key sizes, are hurdles that hamper widespread adoption of the technique in industry-level environments.

In this paper, we provide a new hybrid approach to privacy-preserving logistic regression training and inference, which utilizes both MPC and HE techniques to provide an efficient and scalable solution while minimizing the need for key management and the complexity of computation in the encrypted state. Utilizing the batching properties of HE, we present a method to securely compute vector-matrix multiplications using a single HE multiplication, compared to the naive approach which requires a number of multiplications linear in the size of the input data. We also show how a 2-party additive secret sharing scheme can be used to efficiently control the noise of expensive HE operations such as bootstrapping.
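The 2-party additive secret sharing primitive referred to here is simple to state. The sketch below is a generic illustration, not the paper's protocol; the modulus is an arbitrary choice. It shows sharing, reconstruction, and local (non-interactive) addition of shares.

```python
# Minimal sketch of 2-party additive secret sharing over Z_K (generic primitive,
# not the paper's protocol; K is an illustrative modulus).
import secrets

K = 2**64

def share(x):
    """Split x into two random-looking shares that sum to x mod K."""
    r = secrets.randbelow(K)
    return r, (x - r) % K

def reconstruct(s0, s1):
    return (s0 + s1) % K

s0, s1 = share(12345)
assert reconstruct(s0, s1) == 12345
# Linear operations can be performed locally on shares, e.g. share-wise addition:
t0, t1 = share(678)
assert reconstruct((s0 + t0) % K, (s1 + t1) % K) == 12345 + 678
```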
Amit Agarwal, James Bartusek, Vipul Goyal, Dakshita Khurana, Giulio Malavolta
ePrint Report
We initiate the study of multi-party computation for classical functionalities (in the plain model) with security against malicious polynomial-time quantum adversaries. We observe that existing techniques readily give a polynomial-round protocol, but our main result is a construction of constant-round post-quantum multi-party computation. We assume mildly super-polynomial quantum hardness of learning with errors (LWE), and polynomial quantum hardness of an LWE-based circular security assumption. Along the way, we develop the following cryptographic primitives that may be of independent interest:

- A spooky encryption scheme for relations computable by quantum circuits, from the quantum hardness of an LWE-based circular security assumption. This yields the first quantum multi-key fully-homomorphic encryption scheme with classical keys.

- Constant-round zero-knowledge secure against multiple parallel quantum verifiers from spooky encryption for relations computable by quantum circuits. To enable this, we develop a new straight-line non-black-box simulation technique against parallel verifiers that does not clone the adversary's state. This forms the heart of our technical contribution and may also be relevant to the classical setting.

- A constant-round post-quantum non-malleable commitment scheme, from the mildly super-polynomial quantum hardness of LWE.
Zhihao Zheng, Jiachen Shen, Zhenfu Cao
ePrint Report
With location-based services (LBS) booming, the volume of spatial data inevitably explodes. In order to reduce local storage and computational overhead, users tend to outsource data and initiate queries to the cloud. However, sensitive data or queries may be compromised if the cloud server has access to raw data and plaintext tokens. To cope with this problem, searchable encryption for geometric ranges is applied. Geometric range search has wide applications in many scenarios, especially circular range search.

In this paper, a practical and secure circular range search scheme (PSCS) is proposed to support searching for spatial data within a circular range. With our scheme, a semi-honest cloud server will correctly return data for a given circular range without uncovering index privacy or query privacy. We propose a polynomial split algorithm which can neatly decompose the inner product calculation. Then, we formally define the security of our PSCS and prove that it is secure under same-closeness-pattern chosen-plaintext attacks (CLS-CPA) in theory. In addition, we demonstrate its efficiency and accuracy through analysis and experiments compared with existing schemes.
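To see why circular range search reduces to inner-product computations, recall the standard plaintext encoding of a circle predicate as an inner product. The sketch below illustrates only this encoding; it is not claimed to be the paper's polynomial split algorithm, and the vectors and names are hypothetical.

```python
# Plaintext illustration: membership in a circle as a sign test on an inner product.
def point_vector(x, y):
    # vector derived from the data point (x, y)
    return [x * x + y * y, x, y, 1.0]

def query_vector(a, b, r):
    # vector derived from the query circle with center (a, b) and radius r
    return [1.0, -2.0 * a, -2.0 * b, a * a + b * b - r * r]

def in_circle(p, q):
    # <u, v> = (x-a)^2 + (y-b)^2 - r^2, so the point lies in the circle iff <u, v> <= 0
    u, v = point_vector(*p), query_vector(*q)
    return sum(ui * vi for ui, vi in zip(u, v)) <= 0

print(in_circle((1.0, 2.0), (0.0, 0.0, 3.0)))   # True: distance sqrt(5) <= 3
print(in_circle((4.0, 4.0), (0.0, 0.0, 3.0)))   # False
```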
Vincenzo Iovino, Serge Vaudenay, Martin Vuagnoux
ePrint Report
Digital contact tracing apps make it possible to alert people who have been in contact with people who may be contagious. The Apple/Google Exposure Notification (EN) system is based on Bluetooth proximity estimation. It has been adopted by many countries around the world. However, many possible attacks are known. The goal of some of them is to inject a false alert on someone else's phone. This way, an adversary can eliminate a competitor in a sports event or in business in general. Political parties can also prevent people from voting.

In this report, we review several methods to inject false alerts. One of them requires corrupting the clock of the victim's smartphone. For that, we build a time-traveling machine to be able to remotely set the clock on a smartphone and experiment with our attack. We show how easily this can be done. We successfully tested several smartphones with either the Swiss or the Italian app (SwissCovid or Immuni).
Elette Boyle, Nishanth Chandran, Niv Gilboa, Divya Gupta, Yuval Ishai, Nishant Kumar, Mayank Rathee
ePrint Report
Boyle et al. (TCC 2019) proposed a new approach for secure computation in the preprocessing model building on function secret sharing (FSS), where a gate $g$ is evaluated using an FSS scheme for the related offset family $g_r(x)=g(x+r)$. They further presented efficient FSS schemes based on any pseudorandom generator (PRG) for the offset families of several useful gates $g$ that arise in "mixed-mode" secure computation. These include gates for zero test, integer comparison, ReLU, and spline functions. The FSS-based approach offers significant savings in online communication and round complexity compared to alternative techniques based on garbled circuits or secret sharing.

In this work, we improve and extend the previous results of Boyle et al. by making the following three kinds of contributions:

- Improved Key Size: The preprocessing and storage costs of the FSS-based approach directly depend on the FSS key size. We improve the key size of previous constructions through two steps. First, we obtain roughly a 4x reduction in key size for the Distributed Comparison Function (DCF), i.e., FSS for the family of functions $f^{<}_{a,b}(x)$ that output $b$ if $x < a$ and $0$ otherwise (a toy illustration of this functionality appears after this list). DCF serves as a central building block in the constructions of Boyle et al. Second, we improve the number of DCF instances required for realizing useful gates $g$. For example, whereas previous FSS schemes for ReLU and $m$-piece spline required 2 and $2m$ DCF instances, respectively, ours require only a single instance of DCF in both cases. This improves the FSS key size by 6-22x for commonly used gates such as ReLU and sigmoid.

- New Gates: We present the first PRG-based FSS schemes for arithmetic and logical shift gates, as well as for bit-decomposition, where both the input and outputs are shared over $Z_N$ for $N = 2^n$. These gates are crucial for many applications related to fixed-point arithmetic and machine learning.

- A Barrier: The above results enable a 2-round PRG-based secure evaluation of "multiply-then-truncate," a central operation in fixed-point arithmetic, by sequentially invoking FSS schemes for multiplication and shift. We identify a barrier to obtaining a 1-round implementation via a single FSS scheme, showing that this would require settling a major open problem in the area of FSS: namely, a PRG-based FSS for the class of bit-conjunction functions.
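The following toy sketch illustrates the DCF functionality $f^{<}_{a,b}$ and the FSS correctness interface Eval(k0, x) + Eval(k1, x) = f(x). It is deliberately insecure and exponential-size (the keys are just additive shares of the whole truth table) and is not the paper's PRG-based construction; the domain and group sizes are illustrative.

```python
import secrets

MOD = 2**16          # output group Z_N (illustrative)
DOMAIN_BITS = 8      # toy input domain {0, ..., 255}

def dcf_gen(a, b):
    """Gen for f(x) = b if x < a else 0: split the truth table additively."""
    table = [b if x < a else 0 for x in range(2**DOMAIN_BITS)]
    k0 = [secrets.randbelow(MOD) for _ in table]
    k1 = [(t - s) % MOD for t, s in zip(table, k0)]
    return k0, k1

def dcf_eval(key, x):
    return key[x]

k0, k1 = dcf_gen(a=100, b=7)
assert (dcf_eval(k0, 42) + dcf_eval(k1, 42)) % MOD == 7    # 42 < 100
assert (dcf_eval(k0, 200) + dcf_eval(k1, 200)) % MOD == 0  # 200 >= 100
```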
Jiang Zhang, Yu Yu, Dengguo Feng, Shuqin Fan, Zhenfeng Zhang
ePrint Report
In this paper, we initiate the study of interactive proofs for the promise problem $\mathsf{QBBC}$ (i.e., quantum black-box computations), which consists of a quantum device $\mathcal{D}$ acting on $(n+m)$ qubits, a classical device $\mathcal{R}_F$ deciding the input-output relation of some unknown function $F:\{0,1\}^n \rightarrow \{0,1\}^m$, and two reals $0 < b < a \leq 1$. Let $p(\mathcal{D},x) = \| |x,F(x)\rangle \langle x,F(x)| \mathcal{D}(|x\rangle |0^m\rangle)\|^2$ be the probability of obtaining $(x,F(x))$ as a result of a standard measurement of the $(n+m)$-qubit state returned by $\mathcal{D}$ on input $|x\rangle |0^m\rangle$. The task of the problem $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ is to distinguish between two cases for all $x\in \{0,1\}^n$:

$\bullet$ YES Instance: $p(\mathcal{D},x) \geq a$;

$\bullet$ NO Instance: $p(\mathcal{D},x) \leq b$.
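As a concrete illustration of the quantity $p(\mathcal{D},x)$ defined above (not an example from the paper), the toy numpy sketch below takes $n = m = 1$, lets $\mathcal{D}$ be a CNOT gate, and lets $F$ be the identity function, so $\mathcal{D}$ computes $F$ perfectly and every input is a YES instance for any $a \leq 1$.

```python
import numpy as np

# CNOT with the first qubit as control (basis ordering |x, y> -> index 2*x + y)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def basis(i, dim=4):
    e = np.zeros(dim, dtype=complex)
    e[i] = 1.0
    return e

F = lambda x: x                                   # the (toy) unknown function F
for x in (0, 1):
    in_state = basis(2 * x + 0)                   # |x>|0>
    out_proj = basis(2 * x + F(x))                # |x, F(x)>
    amp = np.vdot(out_proj, CNOT @ in_state)      # <x, F(x)| D |x, 0>
    print(f"p(D, {x}) = {abs(amp)**2:.1f}")       # prints 1.0 for both inputs
```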

First, we show that for any constant $15/16< a \leq 1$, the problem $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ has an efficient two-round interactive proof $(\mathcal{P}^{\mathcal{D}},\mathcal{V}^{\mathcal{R}_F})$ which basically allows a verifier $\mathcal{V}$, given a classical black-box device $\mathcal{R}_F$, to efficiently verify if the prover $\mathcal{P}$ has a quantum black-box device $\mathcal{D}$ (correctly) computing $F$. This proof system achieves completeness $\frac{1 + a}{2}$ and soundness error $\frac{31}{32} + \frac{\epsilon}{2} + \mathsf{negl}(n)$ for any constant $\max(0,b-\frac{15}{16})<\epsilon<a - \frac{15}{16}$, given that the verifier $\mathcal{V}$ has some (limited) quantum capabilities. In terms of query complexities, the prover $\mathcal{P}^\mathcal{D}$ will make at most two quantum queries to $\mathcal{D}$, while the verifier $\mathcal{V}^{\mathcal{R}_F}$ only makes a single classical query to $\mathcal{R}_F$. This result is based on an information versus disturbance lemma, which may be of independent interest.

Second, under the learning with errors (LWE) assumption in (Regev 2005), we show that the problem $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ can even have an efficient interactive proof $(\mathcal{P}^{\mathcal{D}},\mathcal{V}^{\mathcal{R}_F})$ with a fully classical verifier $\mathcal{V}$ that does not have any quantum capability. This proof system achieves completeness $\frac{1 + a}{2}-\mathsf{negl}(n)$ and soundness error $\frac{1+b}{2} + \mathsf{negl}(n)$, and thus applies to any $\mathsf{QBBC}(\mathcal{D},\mathcal{R}_F,a,b)$ with constants $0< b<a \leq 1$. Moreover, this proof system has the same query complexities as above. This result is based on the techniques introduced in (Brakerski et al. 2018) and (Mahadev 2018).

As an application, we show that the problem of distinguishing the random oracle model (ROM) and the quantum random oracle model (QROM) in cryptography can be naturally seen as a $\mathsf{QBBC}$ problem. By applying the above result, we immediately obtain a separation between ROM and QROM under the standard LWE assumption.
Jean-Philippe Aumasson, Adrian Hamelink, Omer Shlomovits
ePrint Report
Threshold signing research has progressed considerably in the last three years, especially for ECDSA, which is less MPC-friendly than Schnorr-based signatures such as EdDSA. This progress was mainly driven by blockchain applications, and boosted by breakthrough results concurrently published by Lindell and by Gennaro & Goldfeder. Since then, several research teams have published threshold signature schemes with different features, design trade-offs, building blocks, and proof techniques. Furthermore, threshold signing is now deployed within major organizations to protect large amounts of digital assets. Researchers and practitioners therefore need a clear view of the state of research, of the relative merits of the available protocols, and of the open problems, in particular those that would address "real-world" challenges.

This survey therefore proposes to (1) describe threshold signing and its building blocks in a general, unified way, based on the extended arithmetic black-box formalism (ABB+); (2) review the state-of-the-art threshold signing protocols, highlighting their unique properties and comparing them in terms of security assurance and performance, based on criteria relevant in practice; (3) review the main open-source implementations available.
Jan Vacek, Jan Václavek
ePrint Report
One of the NIST Post-Quantum Cryptography Standardization Process Round 2 candidates is the NewHope cryptosystem, a suite of two RLWE-based key encapsulation mechanisms. Recently, four key reuse attacks were proposed against NewHope by Bauer et al., Qin et al., Bhasin et al. and Okada et al. In these attacks, the adversary has access to a key mismatch oracle which tells her whether a given ciphertext decrypts to a given message under the targeted secret key. Previous attacks either require more than 26,000 queries to the oracle or never recover the whole secret key. In this paper, we present a new attack against the NewHope cryptosystem in these key reuse situations. Our attack recovers the whole secret key with probability 100% and requires fewer than 3,200 queries on average. Our work improves state-of-the-art results for NewHope and makes the comparison with other candidates more relevant.
Sanjit Chatterjee, Tapas Pandit, Shravan Kumar Parshuram Puria, Akash Shah
ePrint Report
This work initiates a formal study of signcryption in the quantum setting. We start by formulating suitable security definitions for the confidentiality and authenticity of signcryption, both in insider and outsider models, against quantum adversaries. We investigate the quantum security of generic constructions of signcryption schemes based on three paradigms, viz., encrypt-then-sign (EtS), sign-then-encrypt (StE) and commit-then-encrypt-and-sign (CtE&S). In the insider model, we show that the quantum variants of the classical results hold in the quantum setting, with an exception in the StE paradigm. However, in the outsider model we need to consider an intermediate setting in which the adversary is given quantum access to the unsigncryption oracle but classical access to the signcryption oracle. In the two-user outsider model, as in the classical setting, we show that post-quantum CPA security of the base encryption scheme is amplified in the EtS paradigm if the base signature scheme satisfies a stronger definition. We prove an analogous result in the StE paradigm. Interestingly, in the multi-user setting, our results strengthen the known classical results. Furthermore, our results for the EtS and StE paradigms in the two-user outsider model also extend to the setting of authenticated encryption. In this course, we point out a flaw in the proof of quantum security of authenticated encryption in the EtS paradigm given in a recent paper. We briefly discuss the difficulties in analyzing the full quantum security of signcryption in the outsider model. Finally, we briefly discuss concrete instantiations in various paradigms utilising some available candidates of quantum-secure encryption and signature schemes.
Zhiqiang Wu, Kenli Li, Jin Wang, Naixue Xiong
ePrint Report
To expand capacity, many resource-constrained industrial devices encrypt and outsource their private data to public clouds, employing a searchable encryption (SE) scheme that provides an efficient search service directly on encrypted data. Current tree-based SE schemes can do this and support sublinear encrypted Boolean queries. However, they all suffer from a log n overhead in the search procedure. To address this challenge, in this paper we propose a new tree structure called the four-branch tree (FB-tree). Our key design is to build tree nodes with four branches, which helps a search reach the destination nodes with fast jumps. Based on the index tree, we set up two systems for different requirements. The first system, for efficiency-first settings, achieves nearly optimal search complexity and adaptive security. The second, constructed via an oblivious RAM for security-first environments, still achieves worst-case sublinear search complexity. FB-tree performance is extensively evaluated with several real datasets. The experimental data demonstrate that the FB-tree-based systems outperform the state-of-the-art solutions in terms of efficiency and scalability when Boolean queries are issued.
Pratish Datta, Ilan Komargodski, Brent Waters
ePrint Report
We construct the first decentralized multi-authority attribute-based encryption (MA-ABE) scheme for a non-trivial class of access policies whose security is based (in the random oracle model) solely on the Learning With Errors (LWE) assumption. The supported access policies are ones described by DNF formulas. All previous constructions of MA-ABE schemes supporting any non-trivial class of access policies were proven secure (in the random oracle model) assuming various assumptions on bilinear maps.

In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as a standard ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any DNF formulas over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In terms of efficiency, when instantiating the scheme with a global bound $s$ on the size of access policies, the sizes of public keys, secret keys, and ciphertexts, all grow with $s$.

Technically, we develop new tools for building ciphertext-policy ABE (CP-ABE) schemes using LWE. Along the way, we construct the first provably secure CP-ABE scheme supporting access policies in $\mathsf{NC}^1$ that avoids the generic universal-circuit-based key-policy to ciphertext-policy transformation. In particular, our construction relies on linear secret sharing schemes with new properties and in some sense is more similar to CP-ABE schemes that rely on bilinear maps. While our CP-ABE construction is not more efficient than existing ones, it is conceptually intriguing and further we show how to extend it to get the MA-ABE scheme described above.
Cyril Bouvier, Laurent Imbert
ePrint Report
In this paper, we present new algorithms for the field arithmetic of supersingular isogeny Diffie-Hellman, one of the fifteen remaining candidates in the NIST post-quantum standardization process. Our approach uses a polynomial representation of the field elements together with mechanisms to keep the coefficients within bounds during the arithmetic operations. We present timings and comparisons for SIKEp503 and suggest a novel 736-bit prime that offers a $1.17\times$ speedup compared to SIKEp751 for a similar level of security.
Nai-Hui Chia, Kai-Min Chung, Takashi Yamakawa
ePrint Report
In a recent seminal work, Bitansky and Shmueli (STOC '20) gave the first construction of a constant round zero-knowledge argument for NP secure against quantum attacks. However, their construction has several drawbacks compared to the classical counterparts. Specifically, their construction only achieves computational soundness, requires strong assumptions of quantum hardness of learning with errors (QLWE assumption) and the existence of quantum fully homomorphic encryption (QFHE), and relies on non-black-box simulation.

In this paper, we resolve these issues at the cost of weakening the notion of zero-knowledge to what is called $\epsilon$-zero-knowledge. Concretely, we construct the following protocols:

- We construct a constant round interactive proof for NP that satisfies statistical soundness and black-box $\epsilon$-zero-knowledge against quantum attacks assuming the existence of collapsing hash functions, which is a quantum counterpart of collision-resistant hash functions. Interestingly, this construction is just an adapted version of the classical protocol by Goldreich and Kahan (JoC '96) though the proof of $\epsilon$-zero-knowledge property against quantum adversaries requires novel ideas.

- We construct a constant round interactive argument for NP that satisfies computational soundness and black-box $\epsilon$-zero-knowledge against quantum attacks only assuming the existence of post-quantum one-way functions.

At the heart of our results is a new quantum rewinding technique that enables a simulator to extract a committed message of a malicious verifier while simulating the verifier's internal state in an appropriate sense.
Il-Ju Kim, Tae-Ho Lee, Jaeseung Han, Bo-Yeon Sim, Dong-Guk Han
ePrint Report
Dilithium is a lattice-based digital signature scheme and one of the finalist candidates in NIST's standardization process for post-quantum cryptography. In this paper, we propose a first side-channel attack on the signature generation process of Dilithium. During the Dilithium signature generation process, we use a single trace of the NTT for machine learning-based profiling attacks. In addition, it is possible to attack masked Dilithium using sparse multiplication. Experiments show that the proposed method can expose all key values with 100% success from a single trace, regardless of the optimization level.
Tapas Pal, Ratna Dutta
ePrint Report
A multi-identity pure fully homomorphic encryption (MIFHE) scheme enables a server to perform arbitrary computation on ciphertexts that are encrypted under different identities. In the case of multi-attribute pure FHE (MAFHE), the ciphertexts are associated with different attributes. Clear and McGoldrick (CANS 2014) gave the first chosen-plaintext attack secure MIFHE and MAFHE based on indistinguishability obfuscation. In this study, we focus on building MIFHE and MAFHE which are secure under the type 1 chosen-ciphertext attack (CCA1) security model. In particular, using witness pseudorandom functions (Zhandry, TCC 2016) and multi-key pure FHE or MFHE (Mukherjee and Wichs, EUROCRYPT 2016) we propose the following constructions:

– CCA secure identity-based encryption (IBE) that enjoys optimal size ciphertexts, which we extend to a CCA1 secure MIFHE scheme.

– CCA secure attribute-based encryption (ABE) having optimal size ciphertexts, which we transform into a CCA1 secure MAFHE scheme.

By optimal size, we mean that the bit-length of a ciphertext is the bit-length of the message plus a security parameter multiplied by a constant. Known constructions of multi-identity (attribute) FHEs are either leveled, that is, they support only bounded-depth circuit evaluations, or are secure in a weaker CPA security model. With our new approach, we achieve both CCA1 security and evaluation on arbitrary-depth circuits for multi-identity (attribute) FHE schemes.
Jia-Chng Loh, Geong-Sen Poh, Jason H. M. Ying, Jia Xu, Hoon Wei Lim, Jonathan Pan, Weiyang Wong
ePrint Report
Prior works in privacy-preserving biometric authentication mostly focus on the following setting. An organization collects users' biometric data during registration and later authorizes access to the organization's services after successful authentication. Each organization has to maintain its own biometric database. Similarly, each user has to release her biometric information to multiple organizations. Independently, government authorities are making their extensive, nation-wide biometric databases available to agencies and organizations, in countries that allow such access. This will enable organizations to provide authentication without maintaining biometric databases, while users only need to register once. However, privacy remains a concern. We propose a privacy-preserving system, PBio, for this new setting. The core component of PBio is a new protocol comprising distance recoverable encryption and secure distance computation. We introduce an encrypt-then-split mechanism such that each of the organizations holds only an encrypted partial biometric database. This minimizes the risk of template reconstruction in the event that the encrypted partial database is recovered due to a leak of the encryption key. PBio remains secure even when the organizations collude. A by-product benefit is that the use of encrypted partial templates allows quicker rejection of non-matching instances. We implemented a cloud-based prototype with desktop and Android applications. Our experimental results based on real remote users show that PBio is highly efficient. A round-trip authentication takes approximately 74ms (desktop) and 626ms (Android). The computation and communication overhead introduced by our new cryptographic protocol is only about 10ms (desktop) and 54ms (Android).
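One intuition behind splitting templates is that squared Euclidean distance decomposes over a partition of the feature coordinates, so each organization can hold and match against only a partial template, and a non-matching probe can be rejected as soon as the accumulated partial distance exceeds the threshold. The sketch below works in the clear with a hypothetical threshold and omits the paper's distance recoverable encryption entirely; it only illustrates this decomposition, not PBio itself.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.random(128)                      # enrolled biometric feature vector (toy)
probe    = template + rng.normal(0, 0.02, 128)  # fresh reading from the same user

splits = np.array_split(np.arange(128), 2)      # coordinates held by org 0 and org 1
THRESHOLD = 0.5                                 # acceptance threshold (illustrative)

partial, accepted = 0.0, True
for idx in splits:
    partial += float(np.sum((template[idx] - probe[idx]) ** 2))
    if partial > THRESHOLD:                     # early rejection: later parts never consulted
        accepted = False
        break
print("match" if accepted else "reject", f"(partial distance {partial:.4f})")
```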
Borja Gómez
ePrint Report
In this paper the author introduces methods that represent elements of a finite field $F_q$ as matrices that linearize certain operations, like the product of elements in $F_q$. Since the central polynomial map $\mathcal{F}(X)$ coming from the HFE scheme involves multiplication of elements in a finite field $F_q$, using a novel method based on linear algebra the quadratic forms resulting from the polynomial map of the public key can be computed in few steps, and these are bounded by the matrix $R$ that represents the linear action of the polynomial remainder modulo $f(t)$, which is the irreducible polynomial that identifies $F_q$. When the irreducible polynomial $f(t)$ is of the form $t^a+t^b+1$ modulo $2$, the matrix $R$ is computed deterministically in few steps and all the quadratic forms are derived from this matrix. The research shows that the central polynomial map $\mathcal{F}(X)$ can be computed extremely fast; for example, in the CAS Mathematica, taking an HFE polynomial, the quadratic forms are computed in $\approx 1.4$ seconds for the case $n=128$. This leads to the more general lemma that quadratic forms obtained from BigField schemes are entirely dependent on the selected irreducible polynomial $f(t)$, as the matrix $R$ is conditioned by the structure of this polynomial.
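The matrix $R$ referred to above can be read as the matrix of multiplication by $t$ modulo $f(t)$ over GF(2); mapping an element $c(t)$ to $c(R)$ then turns field multiplication into matrix multiplication. The sketch below is an illustration with a small trinomial (not the paper's Mathematica code or its parameters): it builds $R$ for $f(t) = t^7 + t + 1$ and verifies one product.

```python
import numpy as np

a_, b_ = 7, 1                       # f(t) = t^7 + t + 1, an irreducible trinomial over GF(2)
n = a_

def companion_R():
    """Matrix of c(t) -> t*c(t) mod f(t), acting on coefficient vectors (c_0, ..., c_{n-1})."""
    R = np.zeros((n, n), dtype=np.uint8)
    R[1:, :-1] = np.eye(n - 1, dtype=np.uint8)   # t * t^i = t^{i+1} for i < n-1
    R[0, n - 1] = 1                              # t * t^{n-1} = t^n = t^b + 1 (mod f, char 2)
    R[b_, n - 1] = 1
    return R

def to_matrix(coeffs):
    """Map the field element c(t) = sum c_i t^i to the matrix c(R) over GF(2)."""
    R = companion_R()
    M = np.zeros((n, n), dtype=np.uint8)
    P = np.eye(n, dtype=np.uint8)                # current power R^i
    for c in coeffs:
        if c:
            M = (M + P) % 2
        P = (P @ R) % 2
    return M

# Multiply u(t) = t by v(t) = t^6: the product is t^7 = t + 1 (mod f).
u = [0, 1, 0, 0, 0, 0, 0]
v = np.array([0, 0, 0, 0, 0, 0, 1], dtype=np.uint8)
print((to_matrix(u) @ v) % 2)                    # -> [1 1 0 0 0 0 0], i.e. 1 + t
```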
Aaqib Bashir Dar, Asif Iqbal Baba, Auqib Hamid Lone, Roohie Naaz, Fan Wu
ePrint Report
Access control, or authorization, refers to the confinement of the specific actions an entity is permitted to perform. Blockchain-driven access control mechanisms have gained considerable attention since applications for blockchain were found beyond the premises of cryptocurrencies. However, there are no systematic efforts to analyze the existing empirical evidence. To this end, we aim to synthesize the literature to understand the state of the art in blockchain-driven access control mechanisms with respect to underlying platforms, utilized blockchain properties, the nature of the models, and associated testbeds and tools. We conducted the review in a systematic way. Meta-analysis and thematic synthesis were performed on the findings and results from the relevant primary studies in order to answer the research questions in perspective. We identified 76 relevant primary studies passing the quality assessment. A number of problems, such as single point of failure, security, and privacy, were targeted by the relevant primary studies. The meta-analysis suggests the use of different blockchain platforms, several application domains, and different utilized blockchain properties. In this paper, we present a state-of-the-art review of blockchain-driven access control systems. Finally, we present a taxonomy of blockchain-driven access control systems to better understand the immense implications this field has for various application domains.