International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 February 2022

Thibauld Feneuil, Antoine Joux, Matthieu Rivain
ePrint Report
Zero-knowledge proofs of knowledge are useful tools to design signature schemes. The ongoing effort to build a quantum computer urges the cryptography community to develop new secure cryptographic protocols based on quantum-hard cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is syndrome decoding (SD) for random linear codes. This problem is known to be NP-hard, and the state of the art of its cryptanalysis has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. Since its publication, many articles have proposed optimizations, implementations, or variants.

In this paper, we introduce a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves knowledge of a vector $x$ such that $y = Hx$ and $\mathrm{wt}(x) \le w$, and which achieves a soundness error close to $1/N$ for an arbitrary $N$.
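
The relation being proven is compact enough to show concretely. Below is a minimal Python sketch (toy parameters of our choosing, not the paper's) of sampling an SD instance over $\mathbb{F}_2$ and checking the witness relation $y = Hx$, $\mathrm{wt}(x) \le w$ -- the statement the zero-knowledge protocol attests to without revealing $x$:

```python
import numpy as np

def sample_sd_instance(n=24, k=12, w=4, seed=1):
    """Sample a random parity-check matrix H and a weight-w solution x over F_2."""
    rng = np.random.default_rng(seed)
    H = rng.integers(0, 2, size=(n - k, n))      # random (n-k) x n matrix over F_2
    x = np.zeros(n, dtype=np.int64)
    x[rng.choice(n, size=w, replace=False)] = 1  # secret vector of Hamming weight w
    y = H @ x % 2                                # syndrome y = Hx
    return H, y, x

def witness_holds(H, y, x, w):
    """The SD relation: y = Hx over F_2 and wt(x) <= w."""
    return np.array_equal(H @ x % 2, y) and int(x.sum()) <= w

H, y, x = sample_sd_instance()
assert witness_holds(H, y, x, w=4)
```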

While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like $\mathbb{F}_{2^8}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Moreover, our scheme outperforms all the existing code-based signature schemes for the common ``signature size + public key size'' metric.
Sebastian Kolby, Divya Ravi, Sophia Yakoubov
ePrint Report
YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number $N$ of available parties, even when the adversary’s corruption threshold is linear in $N$ (e.g. just under $N/2$). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. Using linearly homomorphic encryption and secret sharing, we construct YOSO-LHE, which is the first realistically efficient YOSO MPC protocol that achieves guaranteed output delivery without trusted setup. YOSO-LHE is not itself constant-round; it takes $O(d)$ rounds of communication, where $d$ is the depth of the circuit being computed. However, YOSO-LHE can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHE. As long as the complexity of the setup is independent of the circuit to be evaluated, the bootstrapped protocol will be constant-round.
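
The linear homomorphism that drives YOSO-LHE can be illustrated with plain additive secret sharing, which has the same key property: linear functions can be evaluated locally on shares, with no interaction. A toy sketch (our illustration, not the paper's encryption scheme):

```python
import secrets

P = 2**61 - 1  # a prime modulus; illustrative only

def share(x, n):
    """Additively share x among n parties: the shares sum to x mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

# Linear homomorphism: each party locally computes a*x_i + b*y_i, and the
# resulting shares reconstruct to a*x + b*y without any communication.
n, a, b, x, y = 5, 3, 7, 11, 20
xs, ys = share(x, n), share(y, n)
zs = [(a * xi + b * yi) % P for xi, yi in zip(xs, ys)]
assert reconstruct(zs) == (a * x + b * y) % P
```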
Seunghwan Lee, Dong-Joon Shin
ePrint Report
We propose a floating-point fully homomorphic encryption scheme (FPFHE) based on torus fully homomorphic encryption equipped with programmable bootstrapping. Specifically, FPFHE is implemented for 32-bit and 64-bit floating-point messages, which is state-of-the-art in terms of precision. In addition, a ciphertext is constructed to check whether an overflow occurred while evaluating arithmetic circuits with FPFHE, which is useful when the message space or arithmetic circuit is too complex to estimate a bound on the outputs, as in deep learning applications.
Nir Bitansky, Sapir Freizeit
ePrint Report
We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest on their own, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other construction with plausible post-quantum security is that of Brakerski and Döttling (TCC '18) based on the Learning with Errors (LWE) assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.
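
To make the objects concrete, here is a small sketch (parameters illustrative, not the paper's) of low-noise LPN sample generation and the inner-product extractor whose robustness the analysis establishes:

```python
import numpy as np

def lpn_samples(n=128, m=256, tau=0.05, seed=0):
    """LPN samples (A, As + e) over F_2 with Bernoulli(tau) noise.
    tau << 1/2 is the 'low noise' regime the construction relies on."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, 2, size=(m, n))
    s = rng.integers(0, 2, size=n)
    e = (rng.random(m) < tau).astype(np.int64)  # sparse noise vector
    b = (A @ s + e) % 2
    return A, b, s

def inner_product_extract(x, r):
    """Inner-product extractor <x, r> over F_2: one extracted bit."""
    return int(np.dot(x, r) % 2)

A, b, s = lpn_samples()
r = np.random.default_rng(1).integers(0, 2, size=s.shape[0])
print(inner_product_extract(s, r))
```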
Jian Guo, Guozhen Liu, Ling Song, Yi Tu
ePrint Report
In this work, we focus on collision attacks against instances of the SHA-3 hash family in both classical and quantum settings. Since the 5-round collision attacks on SHA3-256 and other variants proposed by Guo et al. at JoC 2020, no essential progress has been published. With a thorough investigation, we identify that the challenge in extending such collision attacks on SHA-3 to more rounds lies in the inefficiency of differential trail search. To overcome this obstacle, we develop a SAT-based automatic search toolkit. The tool is used in multiple intermediate steps of the collision attacks and exhibits surprisingly high efficiency in differential trail search and in other optimization problems encountered in the process. As a result, we present the first 6-round classical collision attack on SHAKE128 with time complexity \cpshake, which also forms a quantum collision attack with quantum time \cpshakeq, and the first 6-round quantum collision attacks on SHA3-224 and SHA3-256 with quantum time \cpshattf and \cpshatfs, all with negligible classical and quantum memory requirements. The fact that classical collision attacks do not apply to 6-round SHA3-224 and SHA3-256 shows the higher coverage of quantum collision attacks, which is consistent with the observation on SHA-2 made by Hosoyamada and Sasaki at CRYPTO 2021.
Liu Zhang, Zilong Wang, Boyang Wang
ePrint Report
In CRYPTO'19, Gohr proposed a new cryptanalysis strategy using machine learning algorithms. Combining a neural distinguisher with a differential path and an advanced key recovery procedure, Gohr achieved a 12-round key recovery attack on Speck32/64. Chen and Yu improved the accuracy of the neural distinguisher by considering features derived from multiple-ciphertext pairs instead of single-ciphertext pairs. Bao et al. introduced the concept of generalized neutral bits, improved the 12-round attack, and devised a practical 13-round key recovery attack on Speck32/64 by enhancing the differential path prepended on top of the neural distinguisher. To capture more dimensional information, inspired by the Inception blocks in GoogLeNet, we use multiple parallel convolutional layers as the core of the network structure used to train neural distinguishers. For Speck32/64, we improve the accuracy of the (5-8)-round neural distinguishers and train a new 9-round neural distinguisher. For Simon32/64, we improve the accuracy of the (7-11)-round neural distinguishers and train a new 12-round neural distinguisher. In addition, we extend the idea of neutral bits from Gohr's work to satisfy the requirement that multiple-plaintext pairs follow the same distribution in the key recovery attack. Under the combined effect of these improvements, the time complexity of our (11-13)-round key recovery attacks on Speck32/64 is reduced to a certain extent. Surprisingly, the success rate of our 12-round key recovery attack reaches $100\%$. For Simon32/64, we increase the success rate and decrease the time complexity of the 16-round key recovery attack, and we also successfully implement a 17-round key recovery attack. However, because the accuracy of the 9-round neural distinguisher for Speck32/64 and the 12-round neural distinguisher for Simon32/64 is lower, they cannot be used for key recovery attacks on more rounds. The source code is available at https://drive.google.com/drive/folders/17AWYupor_bX2rEe000tPuppdNH4dlmHm?usp=sharing.
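
The architectural idea translates directly into code. Below is a minimal PyTorch sketch of an Inception-style block with parallel 1-D convolutions whose outputs are concatenated; the channel counts and kernel sizes are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class InceptionBlock1D(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated,
    in the spirit of GoogLeNet's Inception blocks."""
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)   # each branch sees a different receptive field
        ])
        self.act = nn.ReLU()

    def forward(self, x):
        # x: (batch, in_ch, length) -> (batch, 3*out_ch, length)
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

# Example: a batch of 32 ciphertext-pair bit sequences of length 64.
y = InceptionBlock1D()(torch.randn(32, 1, 64))
assert y.shape == (32, 48, 64)
```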
Si Gao, Elisabeth Oswald
ePrint Report
Non-specific leakage detection (initially introduced as “Test Vector Leakage Assessment”, TVLA for short) plays a vital role in practice because it detects (potential) leaks independently of assumptions about the leakage model and any specific attack vector. However, the non-specific nature means detected leaks might not be exploitable, and thus the current state of the art is to employ a battery of specific attacks to confirm the detection outcomes.

We propose a novel leakage assessment framework which enables us to link non-specific leakage detection outcomes with the size of the key guess that is necessary to exploit them. We thereby solve the problem of deciding whether or not a leak is exploitable without the need for specific attacks. Our methodology furthermore enables us, for a detected leak, to reveal the specific key bytes, and with that, it allows the construction of confirmatory attacks. This novel approach is enabled by casting the leakage detection problem as the statistical task of building key-dependent regression models: if such a model exists, then we know that the point leaks. Depending on the size and nature of the model, we can further judge the exploitability and provide a concrete attack vector.
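
The statistical task is easy to sketch: for each key guess, regress the measured leakage on a key-dependent predictor and score the fit. A toy numpy simulation below uses the 4-bit PRESENT S-box and simulated Hamming-weight leakage; all of it is our own illustration of the regression idea, not the paper's methodology in detail:

```python
import numpy as np

SBOX = np.array([0xC,5,6,0xB,9,0,0xA,0xD,3,0xE,0xF,8,4,7,1,2])  # PRESENT S-box
HW = np.array([bin(v).count("1") for v in range(16)])            # Hamming weights

rng = np.random.default_rng(0)
true_key, n_traces = 7, 2000
pts = rng.integers(0, 16, n_traces)
traces = HW[SBOX[pts ^ true_key]] + rng.normal(0, 1.0, n_traces)  # simulated leakage

def r_squared(key_guess):
    """Fit leakage ~ HW(SBOX[pt ^ key_guess]) and return the model's R^2."""
    pred = HW[SBOX[pts ^ key_guess]].astype(float)
    X = np.column_stack([pred, np.ones(n_traces)])
    beta, *_ = np.linalg.lstsq(X, traces, rcond=None)
    return 1 - (traces - X @ beta).var() / traces.var()

scores = [r_squared(k) for k in range(16)]
print(int(np.argmax(scores)))  # expected to rank the true key (7) first
```

Only the correct key guess yields a regression model that explains the traces, which is exactly the key dependence the framework uses to judge exploitability.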
Thomas Attema, Ignacio Cascudo, Ronald Cramer, Ivan Bjerre Damgård, Daniel Escudero
ePrint Report
Compressed $\Sigma$-Protocol Theory (CRYPTO 2020) presents an ``alternative'' to Bulletproofs that achieves the same communication complexity while adhering more elegantly to existing $\Sigma$-protocol theory, which makes its techniques directly applicable to other widely used settings in the context of ``plug \& play'' algorithmics. Unfortunately, these techniques are restricted to arithmetic circuits over \emph{prime} fields, which rules out more machine-friendly moduli such as powers of $2$, which have proven to improve efficiency in applications. In this work we show that such techniques can be generalized to arithmetic circuits modulo \emph{any} number. This enables the use of powers of $2$, which can prove beneficial for efficiency, but it also facilitates the use of other moduli that might prove useful in different applications.

In order to achieve this, we first present an instantiation of the main building block of the theory of compressed $\Sigma$-protocols, namely compact vector commitments. Our construction, which may be of independent interest, is homomorphic modulo any positive integer $m$, a result that was not previously known in the literature. Second, we generalize Compressed $\Sigma$-Protocol Theory from finite fields to $\mathbb{Z}_m$. The main challenge here is ensuring that the challenge sets are large enough to fulfill the necessary soundness requirements, which is achieved by considering certain ring extensions. Our techniques have applications to publicly verifiable zero-knowledge proofs of correct computation on homomorphically encrypted data, where for a more flexible parameter instantiation it is useful that the ciphertext space is allowed to be a modular or Galois ring rather than a field: concretely, our protocols can be plugged in as a commit-and-prove argument into a recent result on efficient verifiable computation schemes on encrypted data with context-hiding (PKC 2021) which exploited this advantage.
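
The challenge-set issue can be seen in a few lines: special-soundness extraction divides by differences of challenges, so every pairwise difference must be a unit modulo $m$. In $\mathbb{Z}_m$ itself, such a set can have at most $p$ elements, where $p$ is the smallest prime factor of $m$ -- far too small for a negligible soundness error, hence the move to ring extensions. A quick check of this folklore fact (our illustration):

```python
from math import gcd

def usable(challenges, m):
    """A challenge set works mod m iff all pairwise differences are units mod m."""
    return all(gcd(a - b, m) == 1
               for a in challenges for b in challenges if a != b)

m = 2**8                        # a machine-friendly modulus: a power of 2
assert usable(range(2), m)      # {0, 1}: the difference 1 is a unit
assert not usable(range(3), m)  # {0, 1, 2}: the difference 2 shares a factor with m
```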
Orel Cosseron, Clément Hoffmann, Pierrick Méaux, François-Xavier Standaert
ePrint Report
Hybrid Homomorphic Encryption (HHE) reduces the amount of client-side computation and bandwidth usage in a Fully Homomorphic Encryption (FHE) framework. HHE requires specific symmetric schemes that can be evaluated homomorphically and efficiently. In this paper, we introduce the Group Filter Permutator (GFP) paradigm as a generalization of the Improved Filter Permutator paradigm introduced by Méaux et al. From this paradigm, we specify Elisabeth, a family of stream ciphers, and give an instance: Elisabeth-4. After proving the security of this scheme, we provide a Rust implementation and show that its performance is comparable to state-of-the-art HHE. The true strength of Elisabeth lies in the operations available server-side: while the best HHE applications were limited to a few multiplications server-side, we used data sent through Elisabeth-4 to homomorphically evaluate a neural network inference. Finally, we discuss the improvements and losses between the HHE and FHE frameworks and give ideas for building more efficient schemes from the Elisabeth family.
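
The filter-permutator data flow is simple to sketch generically: each clock, a publicly derived selection picks key elements, and a filtering function maps them to one keystream symbol. The sketch below shows only this structure; the PRF, the group $\mathbb{Z}_{16}$, and the filter are placeholders of ours, not Elisabeth's actual components:

```python
import hashlib

KEY = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]   # toy key register over Z_16

def selected_positions(i, subset, keylen):
    """Publicly derived choice of key positions for clock i (placeholder PRF)."""
    h = hashlib.sha256(f"clock-{i}".encode()).digest()
    return [h[j] % keylen for j in range(subset)]

def filter_fn(chunk):
    """Placeholder filtering function over Z_16; Elisabeth's real filter uses
    group additions and negacyclic lookup tables -- this is only the data flow."""
    return (sum(chunk) + chunk[0] * chunk[-1]) % 16

def keystream(n, subset=4):
    return [filter_fn([KEY[j] for j in selected_positions(i, subset, len(KEY))])
            for i in range(n)]

print(keystream(8))  # one Z_16 keystream symbol per clock
```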
Rishab Goyal, Vinod Vaikuntanathan
ePrint Report
Aggregate signatures (Boneh, Gentry, Lynn, Shacham, Eurocrypt 2003) enable compressing a set of $N$ signatures on $N$ different messages into a short aggregate signature. This reduces the space complexity of storing the signatures from linear in $N$ to a fixed constant (that depends only on the security parameter). However, verifying the aggregate signature requires access to all $N$ messages, resulting in the complexity of verification being at least $\Omega(N)$.

In this work, we introduce the notion of locally verifiable aggregate signatures that enable efficient verification: given a short aggregate signature $\sigma$ (corresponding to a set $\mathcal{M}$ of $N$ messages), the verifier can check whether a particular message $m$ is in the set, in time independent of $N$. Verification does not require knowledge of the entire set $\mathcal{M}$. We demonstrate many natural applications of locally verifiable aggregate signature schemes: in the context of certificate transparency logs; in blockchains; and for redacting signatures, even when all the original signatures are produced by a single user.
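
A natural way to read the new notion is as an interface. The following Python sketch is our hypothetical rendering of the syntax; the method names and the hint-based design are assumptions for illustration, not the paper's API:

```python
from dataclasses import dataclass
from typing import Protocol, Sequence

@dataclass
class AggSig:
    data: bytes  # short: size independent of the number of messages N

class LocallyVerifiableAggScheme(Protocol):
    def sign(self, sk: bytes, msg: bytes) -> bytes: ...

    def aggregate(self, pk: bytes, msgs: Sequence[bytes],
                  sigs: Sequence[bytes]) -> AggSig: ...

    def local_open(self, pk: bytes, msgs: Sequence[bytes],
                   agg: AggSig, msg: bytes) -> bytes:
        """Produce a short hint for one message (hypothetical helper)."""

    def local_verify(self, pk: bytes, agg: AggSig, msg: bytes,
                     hint: bytes) -> bool:
        """Check that msg is in the aggregated set in time independent of N."""
```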

We provide two constructions of single-signer locally verifiable aggregate signatures, the first based on the RSA assumption and the second on the bilinear Diffie-Hellman inversion assumption, both in the random oracle model.

As an additional contribution, we introduce the notion of compressing cryptographic keys in identity-based encryption (IBE) schemes, show applications of this notion, and construct an IBE scheme where the secret keys for $N$ identities can be compressed into a single aggregate key, which can then be used to decrypt ciphertexts sent to any of the $N$ identities.
Iftach Haitner, Daniel Nukrai, Eylon Yogev
ePrint Report
Succinct non-interactive arguments (SNARGs) have become a fundamental primitive in the cryptographic community. The focus of this work is constructions of SNARGs in the Random Oracle Model (ROM). Such SNARGs enjoy post-quantum security and can be deployed using lightweight cryptography to heuristically instantiate the random oracle. A ROM-SNARG is \emph{$(t,\varepsilon)$-sound} if no $t$-query malicious prover can convince the verifier to accept a false statement with probability larger than $\varepsilon$. Recently, Chiesa-Yogev (CRYPTO '21) presented a ROM-SNARG of length ${\Theta}(\log (t/\varepsilon) \cdot \log t)$ (ignoring $\log n$ factors, for $n$ being the instance size). This improvement, however, is still far from the (folklore) lower bound of $\Omega(\log (t/\varepsilon))$.

Assuming the \textit{randomized exponential-time hypothesis}, we prove a tight lower bound of ${\Omega}(\log (t/\varepsilon) \cdot \log t)$ for the length of {$(t,\varepsilon)$-sound} ROM-SNARGs. Our lower bound holds for constructions with non-adaptive verifiers and a strong soundness notion called \textit{salted soundness}, restrictions that hold for \emph{all} known constructions (ignoring contrived counterexamples). We prove our lower bound by transforming any short ROM-SNARG (of the considered family) into a ROM-SNARG of the same length in which the verifier asks only a \emph{few} oracle queries, and then applying the recent lower bound of Chiesa-Yogev (TCC '20) for such SNARGs.
Mingxun Zhou, Elaine Shi
ePrint Report
The shuffle model has been extensively investigated in the distributed differential privacy (DP) literature. For a class of useful computational tasks, the shuffle model allows us to achieve privacy-utility tradeoffs similar to those in the central model, while shifting the trust from a central data curator to a ``trusted shuffler'', which can be implemented through either trusted hardware or cryptography. Very recently, several works explored cryptographic instantiations of a new type of shuffler with relaxed security, called {\it differentially oblivious (DO) shufflers}. These works demonstrate that by relaxing the shuffler's security from simulation-style secrecy to differential privacy, we can achieve asymptotic efficiency improvements. A natural question arises: can we replace the shuffler in distributed DP mechanisms with a DO-shuffler while retaining a similar privacy-utility tradeoff?

In this paper, we prove an optimal privacy amplification theorem by composing any locally differentially private (LDP) mechanism with a DO-shuffler, achieving parameters that tightly match the shuffle model. Moreover, we explore multi-message protocols in the DO-shuffle model, and construct mechanisms for the real summation and histogram problems. Our error bounds approximate the best known results in the multi-message shuffle model up to sub-logarithmic factors. Our results also suggest that, just as in the shuffle model, allowing each client to send multiple messages is fundamentally more powerful than restricting to a single message.
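
The pipeline under study composes a local randomizer with a shuffler. Here is a toy end-to-end sketch with $k$-ary randomized response, where an ideal uniform shuffle stands in for the DO-shuffler (our illustration only):

```python
import math, random

def k_rr(x, k, eps):
    """k-ary randomized response: an eps-LDP report of a value x in {0,...,k-1}."""
    if random.random() < math.exp(eps) / (math.exp(eps) + k - 1):
        return x
    y = random.randrange(k - 1)        # uniform over the k-1 values != x
    return y if y < x else y + 1

k, eps = 4, 1.0
truth = [random.randrange(k) for _ in range(20_000)]
reports = [k_rr(x, k, eps) for x in truth]
random.shuffle(reports)                # ideal shuffler; a DO-shuffler relaxes this step

# Debias the shuffled histogram: E[count_v] = n_v*p + (n - n_v)*q.
p = math.exp(eps) / (math.exp(eps) + k - 1)
q = 1 / (math.exp(eps) + k - 1)
n = len(reports)
est = [(reports.count(v) - n * q) / (p - q) for v in range(k)]
print([round(e) for e in est])         # close to the true histogram (~5000 each)
```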
Minze Xu, Yuan Zhang, Sheng Zhong
ePrint Report
Fairness is one of the fundamental properties of multiparty computation (MPC) protocols. Although fair MPC protocols for general functions are known to be impossible with a dishonest majority, a variant of fairness called ``fairness with penalty'' has been explored recently. An MPC protocol provides fairness with penalty if either all participants get the output, or the dishonest parties who break the protocol after getting the output are financially penalized. Fairness with penalty is enabled by previous works leveraging emerging distributed ledger systems (DLS), e.g. Bitcoin and Ethereum, which utilize the scripting functionality provided by the DLSs to make automatic penalties practical without relying on any trusted third party. However, a significant number of DLSs do not provide scripting functionality. In this paper, we propose the ROSE protocol, which enables fairness with penalty while only requiring that the underlying DLS can verify and broadcast digital signatures on transactions. This requirement can be fulfilled by almost all DLSs, including scriptless ones. To the best of our knowledge, it was not known how to realize fairness with penalty on scriptless DLSs before our work. We also provide an implementation of ROSE. The experimental results show that applying ROSE incurs only little computation and communication overhead.
Roi Bar-Zur, Ameer Abu-Hanna, Ittay Eyal, Aviv Tamar
ePrint Report
The security of proof-of-work blockchain protocols critically relies on incentives. Their operators, called miners, receive rewards for creating blocks containing user-generated transactions. Each block rewards its creator with newly minted tokens and with transaction fees paid by the users. The protocol stability is violated if any of the miners surpasses a threshold ratio of the computational power; she is then motivated to deviate with selfish mining and increase her rewards.

Previous analyses of selfish mining strategies assumed constant rewards. But with statistics from operational systems, we show that there are occasional whales -- blocks with exceptional rewards. Modeling this behavior implies a state space that grows exponentially with the parameters, becoming prohibitively large for existing analysis tools.

We present the WeRLman framework to analyze such models. WeRLman uses deep Reinforcement Learning (RL), inspired by the state-of-the-art AlphaGo Zero algorithm. Directly extending AlphaGo Zero to a stochastic model leads to high sampling noise, which is detrimental to the learning process. Therefore, WeRLman employs novel variance reduction techniques by exploiting the recurrent nature of the system and prior knowledge of transition probabilities. Evaluating WeRLman against models we can accurately solve demonstrates it achieves unprecedented accuracy in deep RL for blockchain.

We use WeRLman to analyze the incentives of a rational miner in various settings and upper-bound the security threshold of Bitcoin-like blockchains. The previously known bound, with constant rewards, stands at 0.25. We show that considering whale transactions reduces this threshold considerably. In particular, with Bitcoin's historical fees and its future minting policy, the threshold for deviation will drop to 0.2 in 10 years, 0.17 in 20 years, and 0.12 in 30 years. With recent fees from the Ethereum smart-contract platform, the threshold drops to 0.17. These are below the common sizes of large miners.
Jiangshan Long, Changhai Ou, Yajun Ma, Yifan Fan, Hua Chen, Shihui Zheng
ePrint Report
Benefiting from its independence from the leakage model, the side-channel collision attack is one of the most common distinguishers and attracts wide attention. Although several improvements have been proposed, performance when attacking a single collision value has not improved significantly, and its optimization and efficiency remain an open problem. To address this, we theoretically analyze the quantitative relationship between encryptions and collisions, and propose an efficient side-channel attack named Collision-Paired Correlation Attack (CPCA) for low-noise scenarios, which guarantees that the side with fewer samples in a collision to be detected is completely paired. This remedies the inefficient use of collision information in existing collision attacks. Moreover, to further exploit collision information, we maximize the collision pairing; this optimization significantly improves CPCA and extends it to large-noise scenarios. Finally, to reduce computational complexity, we further optimize CPCA into a CPA-like distinguisher. Our theoretical study illustrates that CPCA provides an upper security bound for the correlation-enhanced collision attack (CECA), and experimental results fully demonstrate its superiority.
Ron D. Rothblum, Prashant Nalini Vasudevan
ePrint Report
Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t.

Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. In particular, Komargodski et al. (Eurocrypt, 2018) showed that there does not exist a blackbox transformation of MCRH into CRH. Nevertheless, in this work we show a non-blackbox transformation of any moderately shrinking t-MCRH, for t in {3,4}, into an (infinitely often secure) CRH. This transformation is non-constructive: we can prove the existence of a CRH but cannot explicitly point out a construction.

Our result partially extends to larger values of t. In particular, we show that for suitable values of t>t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
Corina-Elena Bogos, Razvan Mocanu, Emil Simion
ePrint Report
This paper presents a cumulative review of the serial statistical test over both the canonical values used in testing and freely generated values. We also study, by simulation, the variation of the type II error depending on several factors: the range of $p_1$, the length $n$ of the bit string, and the value $m$ of the bit-pattern length.
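
For reference, the serial test computes $\psi^2_m$ statistics from overlapping $m$-bit pattern counts. A compact sketch following SP 800-22 Section 2.11 (our implementation; the paper's simulations vary $p_1$, $n$, and $m$ around such a test):

```python
import numpy as np
from scipy.special import gammaincc  # regularized upper incomplete gamma (igamc)

def psi_sq(bits, m):
    """psi^2_m: overlapping m-bit pattern frequencies on the circularly
    extended sequence (NIST SP 800-22, serial test)."""
    if m == 0:
        return 0.0
    n = len(bits)
    ext = np.concatenate([bits, bits[: m - 1]])
    counts = np.zeros(2 ** m)
    for i in range(n):
        counts[int("".join(map(str, ext[i : i + m])), 2)] += 1
    return (2 ** m / n) * np.sum(counts ** 2) - n

def serial_test(bits, m=3):
    d1 = psi_sq(bits, m) - psi_sq(bits, m - 1)
    d2 = psi_sq(bits, m) - 2 * psi_sq(bits, m - 1) + psi_sq(bits, m - 2)
    return (gammaincc(2 ** (m - 2), d1 / 2),   # P-value 1
            gammaincc(2 ** (m - 3), d2 / 2))   # P-value 2

rng = np.random.default_rng(0)
print(serial_test(rng.integers(0, 2, 20_000), m=3))
```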
Nicolas Alhaddad, Sisi Duan, Mayank Varia, Haibin Zhang
ePrint Report
This paper improves upon two fundamental and closely related primitives in fault-tolerant distributed computing---Byzantine reliable broadcast (BRB) and asynchronous verifiable information dispersal (AVID). We make improvements asymptotically (for our AVID construction), concretely (much lower hidden constants), and practically (having 3 steps, using hash functions only, and avoiding using online error correction on the bulk data).

The state-of-the-art BRB protocol of Das, Xiang, and Ren (DXR BRB, CCS 2021) uses hash functions only and achieves a communication overhead of $O(nL + kn^2)$, where $n$, $L$, and $k$ are the number of replicas, the message length, and the security parameter, respectively. More precisely, DXR BRB incurs a concrete communication of $7nL + 2kn^2$, with a large constant 7 for the bulk data term (i.e., the $nL$ term). Das, Xiang, and Ren asked whether it is possible "from a practical point of view to make the hidden constants small." Two other limitations of DXR BRB that the authors emphasized are the "higher computation costs due to encoding and decoding of the message" caused by applying error correcting codes to bulk data, and the fact that "in the presence of malicious nodes, each honest node may have to try decoding $f$ times" due to the use of an online error correcting algorithm. Meanwhile, the state-of-the-art AVID protocols achieve $O(L+kn^2)$ communication assuming trusted setup. There is thus a mismatch between BRB and AVID protocols: another natural open problem is whether it is possible to build a setup-free AVID protocol with $O(L+kn^2)$ communication.

In this work, we answer all these open questions in the affirmative. We first provide a hash-based BRB protocol that improves concretely on DXR BRB, having low constants and avoiding using online error correction on bulk data. Our key insight is to encode the consistency proof, not just the message. Our technique allows disseminating the message and proof together. Then we provide the first setup-free AVID protocol achieving $O(L+kn^2)$ communication. Both our BRB and AVID protocols are practical because they have 3 steps, a multiplicative factor of 3 for the bulk data term, use hash functions only, and they avoid applying online error correction on bulk data.
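
For intuition on the hash-based building blocks these protocols combine, here is a generic dissemination sketch: a toy Reed-Solomon encoding (polynomial evaluation over a prime field) plus a Merkle root, so each replica can check its fragment against the root using hashes only. This illustrates the standard ingredients, not the paper's protocol, whose insight is to also encode the consistency proof itself:

```python
import hashlib

P = 2**61 - 1  # prime field for a toy Reed-Solomon code

def rs_encode(chunks, n):
    """Evaluate the polynomial with coefficients `chunks` at points 1..n;
    any len(chunks) fragments suffice to reconstruct by interpolation."""
    return [sum(c * pow(x, j, P) for j, c in enumerate(chunks)) % P
            for x in range(1, n + 1)]

def H(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [H(str(v).encode()) for v in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate last node on odd layers
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# Each replica i would receive (fragment_i, Merkle branch) and check it against
# the broadcast root, so per-fragment consistency needs hash functions only.
frags = rs_encode([11, 22, 33], n=8)
root = merkle_root(frags)
print(root.hex())
```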
Foteini Baldimtsi, Panagiotis Chatzigiannis, S. Dov Gordon, Phi Hung Le, Daniel McVicker
ePrint Report
We present gOTzilla, a protocol for interactive zero-knowledge proofs for large disjunctive statements of the following format: given a publicly known circuit $C$ and a set of values $Y = \{y_1, \ldots, y_n\}$, prove knowledge of a witness $x$ such that $C(x) = y_1 \lor C(x) = y_2 \lor \cdots \lor C(x) = y_n$. These types of statements are extremely important for the proof of assets (PoA) problem in cryptocurrencies, where a prover wants to prove knowledge of a secret key $sk$ associated with the hash of a public key $H(pk)$ posted on the ledger.

gOTzilla is based on the MPC-in-the-head (MPCitH) paradigm and on the observation that if we restructure the proof statement to the equivalent one of proving knowledge of $(x,y)$ such that $(C(x) = y) \land (y = y_1 \lor \cdots \lor y = y_n)$, then we can reduce the disjunction of equalities to 1-out-of-N oblivious transfer (OT). We additionally provide a concrete, efficient extension of our protocol for the case where $C$ combines algebraic and non-algebraic statements (which is the case in the PoA application). We achieve an asymptotic communication cost of $O(\log n)$ plus the proof size of the underlying MPCitH protocol. While related work has similar asymptotic complexity, our approach results in concrete performance improvements. We implement our protocol and provide benchmarks. Concretely, for a set of 1 million entries, the total run-time of our protocol is 14.89 seconds using 48 threads, with 6.18 MB of total communication, which is about 4x faster than the state of the art when considering a disjunctive statement with algebraic and non-algebraic elements.
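
The restructuring at the heart of the protocol is a logical identity that is easy to sanity-check: proving $\bigvee_i C(x) = y_i$ is equivalent to proving $C(x) = y$ together with $y \in \{y_1, \ldots, y_n\}$, and the membership clause is what reduces to 1-out-of-N OT. A toy check with a hash standing in for $C$ (our illustration):

```python
import hashlib

def C(x):  # stand-in circuit: a hash, as in the proof-of-assets setting
    return hashlib.sha256(x).hexdigest()[:8]

Y = [C(bytes([i])) for i in range(16)]   # public set {y_1, ..., y_n}
x = bytes([7])                           # prover's witness

lhs = any(C(x) == y for y in Y)          # original disjunction of equalities
y = C(x)
rhs = (C(x) == y) and (y in Y)           # conjunction + set membership
assert lhs == rhs                        # the membership part reduces to 1-of-N OT
```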
Markku-Juhani O. Saarinen
ePrint Report
NIST SP 800-22 describes 15 statistical tests and suggests that they can be used for the evaluation of random and pseudorandom number generators in cryptographic applications. The Chinese standard GM/T 0005-2012 describes similar tests. The weakest of pseudorandom number generators will easily pass these tests, which promotes false confidence in insecure systems. Evaluation of pseudorandom generators and sequences should be based on cryptanalytic principles; implementation validation should focus on algorithmic correctness, not the randomness of output. For true random (entropy) sources, the focus should be on the true entropy content and on the reliability of the construction and health tests. If SP 800-22 is to be revised, we suggest that the new SP focus on evaluating stochastic models for entropy sources, as the SP 800-90 series currently does not address this issue in depth. We further suggest that pseudorandom generators be analyzed for their suitability for post-quantum cryptography and for the absence of (asymmetric) backdoors or covert channels. We illustrate this by discussing the ``reference generators'' in SP 800-22 Appendix D, none of which are suitable for use in modern cryptography.
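
The claim that weak generators easily pass is simple to reproduce: below, a textbook linear congruential generator -- trivially predictable and useless for cryptography -- sails through the SP 800-22 frequency (monobit) test (our demonstration):

```python
import math

def lcg_bits(seed, n):
    """Bits from a linear congruential generator: cryptographically weak."""
    bits, state = [], seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        bits.append((state >> 30) & 1)   # take the top state bit
    return bits

def monobit_pvalue(bits):
    """SP 800-22 frequency (monobit) test."""
    s = sum(2 * b - 1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

print(monobit_pvalue(lcg_bits(42, 100_000)))  # typically > 0.01: the LCG "passes"
```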