IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
05 November 2019
Geoffroy Couteau, Bill Roscoe, Peter Ryan
We describe a new protocol to achieve two-party $\epsilon$-fair exchange: at any point in the unfolding of the protocol, the difference in the probabilities of the parties having acquired the desired term is bounded by a value $\epsilon$ that can be made as small as necessary. Our construction uses oblivious transfer and sidesteps previous impossibility results by using timed-release encryption, which releases its contents only after some lower-bounded time. We show that our protocol can be easily generalized to an $\epsilon$-fair two-party protocol for all functionalities. To our knowledge, this is the first protocol to truly achieve $\epsilon$-fairness for all functionalities. All previous constructions achieving some form of fairness for all functionalities (without relying on a trusted third party) had a strong limitation: fairness was only guaranteed to hold if the honest parties are at least as powerful as the corrupted parties and invest a similar amount of resources in the protocol, an assumption which is often not realistic. Our construction does not have this limitation: our protocol provides a clear upper bound on the running time of all parties, and partial fairness holds even if the corrupted parties have much more time or computational power than the honest parties. Interestingly, this shows that a minimal use of timed-release encryption suffices to circumvent an impossibility result of Katz and Gordon regarding $\epsilon$-fair computation for all functionalities, without having to make the (unrealistic) assumption that the honest parties are as computationally powerful as the corrupted parties; this assumption was previously believed to be unavoidable in order to overcome this impossibility result. We present detailed security proofs of the new construction, which are non-trivial and form the core technical contribution of this work.
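The $\epsilon$-fairness notion itself admits a tiny illustration (a toy model of gradual release, not the authors' OT-and-timed-release protocol): if each round raises one party's success probability by at most $\epsilon$, an abort at any point leaves the gap between the parties bounded by $\epsilon$.

```python
def gradual_exchange(eps, abort_round=None):
    """Toy model of epsilon-fair exchange: each round raises first A's, then
    B's probability of having learned the other's secret by eps. Aborting
    between the two half-steps leaves a gap of exactly eps, never more."""
    p_a = p_b = 0.0
    for r in range(1, int(1 / eps) + 1):
        p_a = min(1.0, p_a + eps)      # A's half-step
        if abort_round == r:           # adversary aborts mid-round
            break
        p_b = min(1.0, p_b + eps)      # B's half-step
    return p_a, p_b

for abort in (None, 7):
    p_a, p_b = gradual_exchange(0.05, abort)
    print(f"abort={abort}: p_A={p_a:.2f}, p_B={p_b:.2f}, gap={abs(p_a - p_b):.2f}")
```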
Keiichi Iwamura, Ahmad Akmal Aminuddin Mohd Kamal
In this paper, we describe two new protocols for secure secrecy computation with information-theoretic security against a semi-honest adversary and a dishonest majority. Typically, unconditionally secure secrecy computation using a secret sharing scheme is considered impossible in the setting of $n<2k-1$. Therefore, in our previous work, we first took the approach of finding the conditions required for secure secrecy computation under $n<2k-1$ and realized a new technique of conditionally secure secrecy computation. We showed that secrecy computation using a secret sharing scheme can be realized against a semi-honest adversary under the following three preconditions: (1) the value of a secret and a random number used in secrecy multiplication does not include 0; (2) there is a set of shares of 1 that is constructed from random numbers unknown to the adversary; and (3) in secrecy computation involving consecutive computation, the position of the shares handled by each server within a set of shares is fixed. In this paper, we distinguish the parameter $n$ of the $(k,n)$-threshold secret sharing scheme from the number $N$ of servers/parties, and realize secrecy computation of multiplication in the setting of $k\le N<2k-1$. In addition, we improve the processing speed of our protocol by dividing the computation into a Preprocessing Phase and a Computation Phase, shifting the cost of communication to the Preprocessing Phase. This technique allows information that does not depend on any of the private values to be generated in advance, significantly reducing the cost of communication in the Computation Phase. For example, for secrecy computation without repetition, the cost of communication in the Computation Phase can be removed entirely. As a result, we realize secrecy computation that is faster than conventional methods. In addition, our protocols provide solutions to the aforementioned three preconditions and realize secure secrecy computation without any limitation in terms of usability.
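For background on why $n\ge 2k-1$ parties are normally needed: multiplying two Shamir-shared secrets pointwise multiplies the underlying degree-$(k-1)$ polynomials, giving a degree-$2(k-1)$ polynomial that needs $2k-1$ shares to interpolate. A minimal sketch of this standard fact (textbook Shamir arithmetic over a small prime field, not the authors' protocol):

```python
import random

P = 2**31 - 1  # a small Mersenne prime field, for illustration only

def share(secret, k, n):
    """(k,n) Shamir sharing: random degree-(k-1) polynomial with f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

k, a, b = 3, 123, 456
sa = share(a, k, 2 * k - 1)  # n = 2k-1 = 5 shares
sb = share(b, k, 2 * k - 1)
prod = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
print(reconstruct(prod[:2 * k - 1]) == a * b % P)  # True: 2k-1 shares suffice
print(reconstruct(prod[:k]) == a * b % P)          # False (w.h.p.): k do not
```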
Nir Bitansky, Omri Shmueli
We construct the first constant-round zero-knowledge classical argument for NP secure against quantum attacks. We assume the existence of quantum fully homomorphic encryption and other standard primitives, known based on the Learning with Errors Assumption for quantum algorithms. As a corollary, we also obtain the first constant-round zero-knowledge quantum argument for QMA.
At the heart of our protocol is a new no-cloning non-black-box simulation technique.
Hamad Al Shehhi, Emanuele Bellini, Filipe Borba, Florian Caullery, Marc Manzano, Victor Mateu
The use of the rank metric instead of the Hamming metric has been proposed to address the main drawback of code-based cryptography: large key sizes. There exist several Key Encapsulation Mechanisms (KEM) and Public Key Encryption (PKE) schemes using the rank metric, including some submissions to the NIST call for standardization of Post-Quantum Cryptography. In this work, we present an IND-CCA PKE scheme based on the McEliece adaptation to the rank metric proposed by Loidreau at PQC 2017. This IND-CCA PKE scheme based on the rank metric does not use a hybrid KEM + symmetric encryption construction. Instead, we take advantage of the larger message space afforded by the parameters chosen in the rank metric, which allows multiple keys to be exchanged in one ciphertext. Our proposal is designed around some specific properties of the random error generated during encryption. We prove our proposal IND-CCA-secure in the QROM using a security notion called disjoint simulatability, introduced by Saito et al. at Eurocrypt 2018. Moreover, we provide security bounds using the semi-oracles introduced by Ambainis et al.
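For intuition about the metric itself, independent of the scheme: a word over $\mathbb{F}_{q^m}$ is viewed as an $m \times n$ matrix over $\mathbb{F}_q$, and its rank weight is the rank of that matrix, so a word with repeated column patterns has low rank weight even when its Hamming weight is high. A minimal sketch over $\mathbb{F}_2$:

```python
def rank_gf2(rows):
    """Rank over GF(2) of a binary matrix given as integer bit-rows."""
    rows, rank = list(rows), 0
    width = max((r.bit_length() for r in rows), default=0)
    for col in range(width):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

# 3 x 6 expansion of a length-6 word over F_{2^3} whose coordinates are all
# the same nonzero element: every column equals (1,1,0)^T, so the Hamming
# weight is 6 but the rank weight is only 1.
word_matrix = [0b111111, 0b111111, 0b000000]
print(rank_gf2(word_matrix))  # 1
```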
Maran van Heesch, Niels van Adrichem, Thomas Attema, Thijs Veugen
With quantum computers capable of breaking public-key cryptography currently considered safe estimated to be perhaps 10 years away, the threat is already imminent for information that requires secrecy for more than 10 years. The time required to standardize, implement, and update existing networks underlines the urgency of adopting quantum-safe cryptography.
In this work, we investigate the trade-off between network and CPU overhead and the security levels defined by NIST. To do so, we integrate adapted OpenSSL libraries into OpenVPN and perform experiments on a large variety of quantum-safe algorithms for TLS versions 1.2 and 1.3, using OpenVPN and HTTPS respectively, each tested independently. We describe the difficulties we encountered with the integration and report the experimental performance results, comparing the setup of a quantum-safe connection with the setup of a connection without additional post-quantum cryptography.
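As a rough illustration of the baseline side of such a comparison (the classical handshake, without post-quantum additions), the Python standard library can time TLS handshakes directly; the quantum-safe side requires the adapted OpenSSL builds the authors describe and is not reproduced here. The endpoint name below is a placeholder.

```python
import socket, ssl, statistics, time

def handshake_times(host, port=443, samples=20):
    """Time full TCP + TLS handshakes against a server (classical baseline)."""
    ctx = ssl.create_default_context()
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host,
                                 do_handshake_on_connect=False) as tls:
                tls.do_handshake()
        times.append(time.perf_counter() - t0)
    return times

ts = handshake_times("example.org")  # placeholder endpoint
print(f"median handshake: {statistics.median(ts) * 1000:.1f} ms over {len(ts)} runs")
```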
Panos Kampanakis, Dimitrios Sikeridis
The recent advances and attention to quantum computing have raised serious security concerns among IT professionals. The ability of a quantum computer to efficiently solve (elliptic curve) discrete logarithm, and integer factorization problems poses a threat to current public key exchange, encryption, and digital signature schemes.
Consequently, in 2016 NIST initiated an open call for quantum-resistant crypto algorithms. This process, currently in its second round, has yielded nine signature algorithms for possible standardization. In this work, we evaluate two post-quantum signature use cases and analyze the signature schemes that seem most appropriate for them. We first consider Hash-Based Signatures for software signing and secure boot. We propose suitable parameters and show that their acceptable performance makes them good candidates for image signing. We then evaluate NIST candidate post-quantum signatures for TLS 1.3. We show that Dilithium and Falcon are the best available options, but come with an impact on TLS performance. Finally, we present challenges and potential solutions introduced by these algorithms.
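A back-of-envelope view of why these schemes impact TLS: the handshake carries at least one public key and one CertificateVerify signature per certificate in the chain. The sizes below are approximate figures for the final NIST parameter sets, used here purely for illustration; the round-2 parameters evaluated in the paper differ slightly.

```python
# Approximate public-key and signature sizes in bytes (ballpark figures
# for illustration only, not normative numbers).
SIZES = {
    "RSA-2048":   {"pk": 270,  "sig": 256},
    "Ed25519":    {"pk": 32,   "sig": 64},
    "Dilithium2": {"pk": 1312, "sig": 2420},
    "Falcon-512": {"pk": 897,  "sig": 666},
}

def auth_bytes(scheme, chain_len=2):
    """Rough handshake authentication bytes: one public key plus one
    signature per certificate in the chain."""
    s = SIZES[scheme]
    return chain_len * (s["pk"] + s["sig"])

for name in SIZES:
    print(f"{name:11s} ~{auth_bytes(name):5d} B for a 2-certificate chain")
```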
Stanislaw Jarecki, Hugo Krawczyk, Jason Resch
We introduce Oblivious Key Management Systems (KMS) as a more secure alternative to the traditional wrapping-based KMS that form the backbone of key management in large-scale data storage deployments. The new system, which builds on Oblivious Pseudorandom Functions (OPRF), hides keys and object identifiers from the KMS, offers unconditional security for key transport, provides key verifiability, reduces storage, and more. Further, we show how to provide all these features in a distributed threshold implementation that enhances protection against server compromise.
We extend this system with updatable encryption capability that supports key updates (known as key rotation) so that upon the periodic change of OPRF keys by the KMS server, a very efficient update procedure allows a client of the KMS service to non-interactively update all its encrypted data to be decryptable only by the new key. This enhances security with forward and post-compromise security, namely, security against future and past compromises, respectively, of the client's OPRF keys held by the KMS. Additionally, and in contrast to traditional KMS, our solution supports public key encryption and dispenses with any interaction with the KMS for data encryption (only decryption by the client requires such communication).
Our solutions build on recent work on updatable encryption but with significant enhancements applicable to the remote KMS setting. In addition to the critical security improvements, our designs are highly efficient and ready for use in practice. We report on experimental implementation and performance.
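The key-rotation mechanics can be sketched generically: with the OPRF $F_k(x) = H(x)^k$ in a prime-order group, rotating the server key from $k$ to $k'$ yields an update token $\Delta = k' k^{-1} \bmod q$, and any stored $H(x)^k$ becomes $H(x)^{k'} = (H(x)^k)^{\Delta}$ without interaction. A toy sketch under these assumptions (tiny illustrative group, not secure, and not the paper's exact construction):

```python
import hashlib

# Toy Schnorr group: safe prime p = 2q + 1 (illustrative size only, NOT secure).
p, q = 2039, 1019

def hash_to_group(x: bytes) -> int:
    """Map input into the order-q subgroup by squaring a hash mod p."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)

def oprf_eval(k: int, x: bytes) -> int:
    return pow(hash_to_group(x), k, p)    # F_k(x) = H(x)^k

k_old, k_new = 123, 456                   # server OPRF keys (toy values)
delta = k_new * pow(k_old, -1, q) % q     # public update token k'/k mod q

obj_id = b"object-42"
wrapped_old = oprf_eval(k_old, obj_id)    # value stored alongside a ciphertext
wrapped_new = pow(wrapped_old, delta, p)  # non-interactive client-side update
assert wrapped_new == oprf_eval(k_new, obj_id)
print("rotated wrapping matches a fresh evaluation under the new key")
```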
Ameirah al Abdouli, Emanuele Bellini, Florian Caullery, Marc Manzano, Victor Mateu
Since its invention by McEliece in 1978, cryptography based on Error Correcting Codes (ECC) has suffered from the reputation of not being suitable for constrained devices. Indeed, McEliece's scheme and its variants have large public keys and relatively long ciphertexts. Recent works on these downsides explored the possible use of ECC based on the rank metric instead of the Hamming metric. These codes were introduced in the late 1980s to eliminate errors with repeating patterns, regardless of their Hamming weight. Numerous proposals for the NIST Post-Quantum Cryptography (PQC) competition rely on these codes. It has been proven that lattice-based cryptography and even hash-based signatures can run on lightweight devices, but the question remains open for code-based cryptography. In this work, we demonstrate that this is actually possible for the rank metric: we have implemented the encryption operation of five schemes based on ECC in the rank metric and made them run on an Arm Cortex-M0 processor, the smallest Arm processor available. We describe the technical difficulties of porting rank-based cryptography to a resource-constrained device while maintaining decent performance and a suitable level of security against side-channel attacks, especially timing attacks.
Jens-Peter Kaps, William Diehl, Michael Tempelmeier, Farnoud Farahmand, Ekawat Homsirikamol, Kris Gaj
In this paper, we propose a comprehensive framework for fair and efficient benchmarking of hardware implementations of lightweight cryptography (LWC). Our framework is centered around the hardware API (Application Programming Interface) for the implementations of lightweight authenticated ciphers, hash functions, and cores combining both functionalities. The major parts of our API include the minimum compliance criteria, interface, and communication protocol supported by the LWC core. The proposed API is intended to meet the requirements of all candidates submitted to the NIST Lightweight Cryptography standardization process, as well as all CAESAR candidates and current authenticated cipher and hash function standards.

In order to speed up the development of hardware implementations compliant with this API, we are making available the LWC Development Package and the corresponding Implementer's Guide. Equipped with these resources, hardware designers can focus on implementing only the core functionality of a given algorithm. The development package facilitates the communication with external modules, full verification of the LWC core using simulation, and generation of optimized results.

The proposed API for lightweight cryptography is a superset of the CAESAR Hardware API, endorsed by the organizers of the CAESAR competition, which was successfully used in the development of over 50 implementations of Round 2 and Round 3 CAESAR candidates. The primary extensions include support for optional hash functionality and the development of cores resistant against side-channel attacks. Similarly, the LWC Development Package is a superset of the part of the CAESAR Development Package responsible for the support of Use Case 1 (lightweight) CAESAR candidates. The primary extensions include support for hash functionality, increased flexibility of the code shared among all candidates, and extended support for the detection of errors preventing the correct operation of cores during experimental testing.

Overall, our framework supports (a) fair ranking of candidates in the NIST LWC standardization process from the point of view of their efficiency in hardware, before and after the implementation of countermeasures against side-channel attacks, (b) the ability to perform benchmarking within the limited time devoted to Round 2 and any subsequent rounds of the NIST LWC standardization process, (c) compatibility among implementations of the same algorithm by different designers, and (d) fast deployment of the best algorithms in real-life applications.
Upendra Kapshikar, Ayan Mahalanobis
McEliece and Niederreiter cryptosystems are robust and versatile: they work with any linear error-correcting code. They are popular these days because they can be quantum-secure. In this paper, we study the Niederreiter cryptosystem using quasi-cyclic codes. We prove that, if these quasi-cyclic codes satisfy certain conditions, the corresponding Niederreiter cryptosystem is resistant to attacks based on the hidden subgroup problem via quantum Fourier sampling. Our proof requires the classification of finite simple groups.
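For background, the Niederreiter framing is the syndrome-based dual of McEliece: the ciphertext is the syndrome $s = He^{\top}$ of a secret low-weight error vector $e$. A toy sketch with the textbook [7,4] Hamming code standing in for the quasi-cyclic codes studied in the paper:

```python
import itertools

# Parity-check matrix of the [7,4] Hamming code over GF(2) (toy public key;
# a real Niederreiter key is a scrambled parity-check matrix of a large code).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(e):
    """Niederreiter 'encryption': ciphertext s = H e^T over GF(2)."""
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

def decode(s, t=1):
    """Toy 'decryption': brute-force the weight-t error (real schemes use
    the code's efficient decoder, usable only via the secret key)."""
    for ones in itertools.combinations(range(7), t):
        e = [1 if i in ones else 0 for i in range(7)]
        if syndrome(e) == s:
            return e

e = [0, 0, 0, 0, 1, 0, 0]        # the message is encoded as a weight-1 error
print(decode(syndrome(e)) == e)  # True
```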
Martin R. Albrecht, Alex Davidson, Amit Deo, Nigel P. Smart
Verifiable Oblivious Pseudorandom Functions (VOPRFs) are protocols that allow a client to learn verifiable pseudorandom function (PRF) evaluations on inputs of its choice. The PRF evaluations are computed by a server using its own secret key. The security of the protocol prevents the server from learning anything about the client's input, and likewise the client from learning anything about the server's key. VOPRFs have many applications, including password-based authentication, secret sharing, anonymous authentication, and efficient private set intersection. In this work, we construct the first round-optimal (online) VOPRF protocol that retains security under well-known lattice hardness assumptions. Our protocol requires constructions of non-interactive zero-knowledge arguments of knowledge (NIZKAoK), which we construct as extensions of prior work in the area of lattice-based zero-knowledge proof systems. Using analogues of Stern-type proofs in the lattice setting, we show that our VOPRF may be securely instantiated in the quantum random oracle model.
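The "verifiable" half is easiest to see in the classical group-based setting, which the paper replaces with lattice assumptions and Stern-type NIZKAoKs: the server proves with a Chaum-Pedersen DLEQ proof that its blinded answer $b = a^k$ used the same key $k$ as its public commitment $g^k$. A toy sketch (tiny illustrative group, not secure):

```python
import hashlib, random

p, q, g = 2039, 1019, 4   # toy Schnorr group: p = 2q+1, g generates the order-q subgroup

def fs_challenge(*elems):
    """Fiat-Shamir challenge derived from the proof transcript."""
    data = ",".join(str(e) for e in elems).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def dleq_prove(k, pk, a, b):
    """Chaum-Pedersen proof that log_g(pk) == log_a(b), i.e. b = a^k."""
    r = random.randrange(q)
    t1, t2 = pow(g, r, p), pow(a, r, p)
    c = fs_challenge(pk, a, b, t1, t2)
    return c, (r + c * k) % q

def dleq_verify(pk, a, b, proof):
    c, s = proof
    t1 = pow(g, s, p) * pow(pk, -c, p) % p   # g^s * pk^{-c} = g^r
    t2 = pow(a, s, p) * pow(b, -c, p) % p    # a^s * b^{-c}  = a^r
    return c == fs_challenge(pk, a, b, t1, t2)

k = 777                    # server's PRF key
pk = pow(g, k, p)          # public key-commitment g^k
a = pow(g, 1234, p)        # stands in for the client's blinded element H(x)^r
b = pow(a, k, p)           # server's blinded evaluation
print(dleq_verify(pk, a, b, dleq_prove(k, pk, a, b)))  # True
```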
Jiwon Lee, Jaekyoung Choi, Jihye Kim, Hyunok Oh
In pairing-based zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), there is often a requirement for the proof system to be combined with encryption. As a typical example, a blockchain-based voting system requires the vote to be confidential (using encryption) while verifying voting validity (using zk-SNARKs). In such combined applications, a general solution is to extend the zk-SNARK circuit to include the encryption code. However, complex cryptographic operations in the encryption algorithm increase the circuit size, which leads to impractically large proving time and CRS size.
In this paper, we propose Snark-friendly, Additively-homomorphic, and Verifiable Encryption and decryption with Rerandomization, or SAVER, a novel approach that detaches the encryption from the SNARK circuit. The encryption in SAVER has many useful properties. It is SNARK-friendly: the encryption is conjoined with an existing pairing-based SNARK in a way that lets the encryptor prove pre-defined properties while encrypting the message, apart from the SNARK. It is additively homomorphic: the ciphertext inherits a homomorphic property from its ElGamal-based encryption. It is verifiable encryption: one can verify arbitrary properties of encrypted messages by connecting with the SNARK system. It provides verifiable decryption: anyone without the secret key can still verify that the decrypted message is indeed from the given ciphertext. It provides rerandomization: the proof and the ciphertext can be rerandomized as independent objects, so that even the encryptor (or prover) herself cannot identify the origin.
For a representative application, we define and construct a voting system scenario and explain the necessity of each property of SAVER. We prove the IND-CPA security of the encryption, along with the soundness of the encryption and decryption proofs. The experimental results show that a voting system designed from our SAVER yields 0.7 s proving/encryption (voting) time and a 16 MB CRS for the SNARK, regardless of the message size.
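The additive homomorphism and rerandomization properties can be illustrated with exponential ElGamal in a toy group (SAVER itself lives in pairing groups and is conjoined with the SNARK's CRS): multiplying ciphertexts componentwise adds the plaintexts, and multiplying by a fresh encryption of 0 rerandomizes without changing the plaintext.

```python
import random

p, q, g = 2039, 1019, 4   # toy Schnorr group (illustrative only, NOT secure)

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m, r=None):
    """Exponential ElGamal: (g^r, g^m * pk^r); additively homomorphic in m."""
    r = random.randrange(1, q) if r is None else r
    return pow(g, r, p), pow(g, m, p) * pow(pk, r, p) % p

def add(c1, c2):
    """Componentwise product encrypts the sum of the plaintexts."""
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

def dec_exp(c):
    """Recover g^m (getting m itself needs a small-message discrete log)."""
    return c[1] * pow(c[0], -sk, p) % p

c = add(enc(3), enc(4))            # encrypts 3 + 4
print(dec_exp(c) == pow(g, 7, p))  # True

c_rr = add(c, enc(0))              # rerandomization: fresh encryption of 0
print(c_rr != c and dec_exp(c_rr) == pow(g, 7, p))  # True
```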
Hao Lin, Mingqiang Wang
Ring signatures allow a person to generate a signature on behalf of an ad hoc group while hiding the true identity of the signer within the group. Repudiable ring signatures are a strengthened variant in which every non-signer can prove to others that the signature was not generated by them.
This paper has two main areas of focus. First, we propose a new requirement for repudiable ring signatures, which is that no one can forge a valid repudiation for others. Second, as a breakthrough, we present the first logarithmic-size repudiable ring signatures which do not rely on a trusted setup or the random oracle model. Specifically, our scheme can be instantiated from standard assumptions and the size of signatures and repudiations only grows logarithmically in the number of ring members.
Besides, our scheme also provides a new construction of logarithmic-size standard ring signatures.
Saqib A. Kakvi
The RSA Probabilistic Signature Scheme (RSA-PSS) due to Bellare and Rogaway (EUROCRYPT 1996) is a widely deployed signature scheme. In particular, it is a suggested replacement for the deterministic RSA Full Domain Hash (RSA-FDH) by Bellare and Rogaway (ACM CCS 1993) and PKCS#1 v1.5 (RFC 2313), as it can provide stronger security guarantees. It has since been shown by Kakvi and Kiltz (EUROCRYPT 2012, Journal of Cryptology 2018) that RSA-FDH provides similar security to that of RSA-PSS, also in the case when RSA-PSS is not randomized. Recently, Jager, Kakvi and May (ACM CCS 2018) showed that PKCS#1 v1.5 provides comparable security to both RSA-FDH and RSA-PSS. However, all these proofs consider each signature scheme in isolation, whereas in practice this is not the case. The most interesting case is TLS 1.3, where PKCS#1 v1.5 signatures are still included for reasons of backwards compatibility, meaning both RSA-PSS and PKCS#1 v1.5 signatures are implemented. To save space, the key material is shared between the two schemes, which means the aforementioned security proofs no longer apply. We investigate the security of this joint usage of key material in the context of Sibling Signatures, which were introduced by Camenisch, Drijvers, and Dubovitskaya (ACM CCS 2017). It must be noted that we consider the standardised version of RSA-PSS (IEEE Standard P1363-2000), which deviates from the original scheme considered in all previous papers. We are able to show that this joint usage is indeed secure, that it achieves a security level closely matching that of PKCS#1 v1.5 signatures, and that both schemes can be safely used if the output lengths of the hash functions are chosen appropriately.
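The joint usage in question is easy to reproduce: one RSA key object signing under both paddings. A minimal sketch with the pyca/cryptography package (library defaults, not the IEEE P1363-2000 profile the paper analyzes):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
msg = b"same key material, two signature schemes"

# One private key, two padding schemes: RSA-PSS and PKCS#1 v1.5.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
sig_pss = key.sign(msg, pss, hashes.SHA256())
sig_pkcs15 = key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

pub = key.public_key()
pub.verify(sig_pss, msg, pss, hashes.SHA256())          # raises if invalid
pub.verify(sig_pkcs15, msg, padding.PKCS1v15(), hashes.SHA256())
print("both signatures verify under the shared RSA key")
```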
Jean Claude Bajard, Julien Eynard, Paulo Martins, Leonel Sousa, Vincent Zucca
In a recent work, Al Badawi et al. have noticed that the noise growth of the two RNS variants of BFV, due to Bajard et al. and Halevi et al., behaves differently in practice. Their experiments, based on the PALISADE and SEAL libraries, have shown that the multiplicative depth reached in practice by the first variant was considerably smaller than that of the second, although the two are theoretically equivalent in the worst case. Their interpretation of this phenomenon was that the approximations used by Bajard et al. made the expansion factor behave differently than what the Central Limit Theorem would predict.
We have realized that this difference actually comes from the implementation of the SmMRq procedure of Bajard et al. in SEAL and PALISADE, which is slightly different from what Bajard et al. had proposed. In this note we show that, with this small difference fixed, the multiplicative depth of both variants is actually the same in practice.
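For readers outside the area: an RNS variant represents each big coefficient by its residues modulo a basis of pairwise-coprime moduli, so BFV arithmetic parallelizes over small words, with CRT reconstruction recovering the integer. A minimal sketch of the representation (not of the SmMRq procedure itself; the basis is a toy one):

```python
from math import prod

BASIS = [10007, 10009, 10037]   # pairwise-coprime moduli (toy RNS basis)
Q = prod(BASIS)

def to_rns(x):
    return [x % m for m in BASIS]

def from_rns(residues):
    """CRT reconstruction: lift each residue through the rest of the basis."""
    x = 0
    for r, m in zip(residues, BASIS):
        mi = Q // m
        x += r * mi * pow(mi, -1, m)
    return x % Q

a, b = 123456789, 987654321
ra, rb = to_rns(a), to_rns(b)
# Multiplication happens independently per channel, with no big-int arithmetic:
rc = [x * y % m for x, y, m in zip(ra, rb, BASIS)]
print(from_rns(rc) == a * b % Q)  # True
```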
Jiajun Xin, Pei Huang, Lei Chen, Xin Lai, Xiao Zhang, Wulu Li, Yongcan Wang
Account-model-based blockchain systems gain popularity due to their ease of use and native support of smart contracts. However, the privacy of users has become a major concern and a bottleneck for further adoption, because users are not willing to leave permanent public records online. Conventionally, the privacy of users comprises transaction confidentiality and anonymity. While confidentiality can be easily protected using confidential transaction techniques, anonymity is quite challenging in an account-model-based blockchain system, because every transaction inevitably updates the sender's as well as the receiver's account balance. Even when the privacy of a blockchain system is well protected, however, regulation becomes a new challenge for its further adoption.
In this paper, we introduce a novel transaction-mix protocol, which provides confidentiality and, moreover, anonymity to account-model-based blockchain systems. By leveraging a state-of-the-art verifiable shuffle scheme to construct a shuffle of confidential transactions, we build a practical anonymous confidential blockchain system, WaterCarver, upon the account model, with the help of confidential transactions, verifiable shuffle, and zero-knowledge proofs. We further provide an efficient and robust solution for dynamic regulation with multiple regulators. Our regulation method achieves flexibility and perfect forward secrecy. Experiments show that the overall throughput of our system can reach 600 transactions per second (TPS) on a simple desktop.
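The mixing step at the core of such a system is "permute and rerandomize" over ElGamal ciphertexts; the verifiable shuffle supplies a zero-knowledge proof that the output decrypts to the same multiset as the input, which is omitted in this toy sketch (tiny illustrative group, not secure):

```python
import random

p, q, g = 2039, 1019, 4     # toy Schnorr group (illustrative only, NOT secure)
sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m, r=None):
    r = random.randrange(1, q) if r is None else r
    return pow(g, r, p), m * pow(pk, r, p) % p

def dec(c):
    return c[1] * pow(c[0], -sk, p) % p

def mix(cts):
    """Permute and rerandomize: output positions are unlinkable to input
    positions, yet the multiset of plaintexts is unchanged."""
    out = []
    for c1, c2 in cts:
        s = random.randrange(1, q)               # fresh randomness
        out.append((c1 * pow(g, s, p) % p,       # multiply in an encryption
                    c2 * pow(pk, s, p) % p))     # of 1 with randomness s
    random.shuffle(out)
    return out

msgs = [pow(g, i, p) for i in (1, 2, 3)]         # group-encoded 'transactions'
mixed = mix([enc(m) for m in msgs])
print(sorted(dec(c) for c in mixed) == sorted(msgs))  # True
```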
Juan Garay, Aggelos Kiayias, Rafail Ostrovsky, Giorgos Panagiotakos, Vassilis Zikas
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that, in the absence of a private-coin correlated-randomness setup such as a PKI, protocols can tolerate up to $t<n/3$ of the parties being malicious. The introduction of "Nakamoto-style" consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain in honest hands. This honest-majority assumption on some resource was also extended to other resources, such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built that violated the $t<n/3$ bound in terms of the number of party corruptions. The above state of affairs begs the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the $n/3$ lower bound.
In this work we study this question and formally demonstrate how the above paradigm changes the rules of the game in cryptographic definitions. First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality *wrapper*, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm---concretely, using PoWs and honest majority of computing power---and that the traditional $t<n/3$ impossibility results fail when the parties have access to such a network.
We then present constructions for BA and MPC, which given access to such a network tolerate $t<n/2$ corruptions without assuming a private correlated randomness setup, but merely a *fresh* Common Reference String (CRS)---i.e., a CRS which becomes available to the parties at the same time as to the adversary. We also show how to remove this freshness assumption by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.
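The effect of such a wrapper can be mimicked with a simple hash puzzle: the network delivers only messages carrying a valid proof of work, so any party's message rate, honest or corrupt, is bounded by its share of computing power. A minimal sketch (parameters are illustrative):

```python
import hashlib
from itertools import count

DIFFICULTY = 16  # required leading zero bits (illustrative setting)

def pow_valid(msg: bytes, nonce: int) -> bool:
    digest = hashlib.sha256(msg + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def work(msg: bytes) -> int:
    """Grind nonces until the puzzle is solved (about 2^DIFFICULTY hashes)."""
    return next(n for n in count() if pow_valid(msg, n))

def deliver(msg: bytes, nonce: int) -> bool:
    """The wrapped network forwards only messages with valid PoW attached."""
    return pow_valid(msg, nonce)

msg = b"BA round 1: vote 0"
print(deliver(msg, work(msg)))  # True
print(deliver(msg, 0))          # almost certainly False
```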
Anna Johnston
Random data, and the entropy contained within it, is a critical component of information security. Minimum entropy is the base measurement defined by NIST, and many of their entropy tests are based on it. However, minimum entropy does not satisfy the basic requirements for an entropy measurement in either Shannon's original work or in Rényi's generalization of entropy. This document suggests a different way forward, with a reclassification of entropy into two classes. With this differentiation, measurement tools are simplified and allow for more accurate assessments of entropy.
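Concretely, for a distribution $p$: Shannon entropy is $H_1(p) = -\sum_i p_i \log_2 p_i$, Rényi entropy of order $\alpha$ is $H_\alpha(p) = \frac{1}{1-\alpha} \log_2 \sum_i p_i^{\alpha}$, and minimum entropy is the $\alpha \to \infty$ limit $H_\infty(p) = -\log_2 \max_i p_i$, which never exceeds the others. A quick check on a biased source:

```python
from math import log2

def shannon(p):
    return -sum(x * log2(x) for x in p if x > 0)

def renyi(p, alpha):
    assert alpha > 0 and alpha != 1
    return log2(sum(x ** alpha for x in p)) / (1 - alpha)

def min_entropy(p):
    return -log2(max(p))

p = [0.5, 0.25, 0.125, 0.125]   # a biased 2-bit source
print(f"Shannon  H_1   = {shannon(p):.3f} bits")      # 1.750
print(f"Renyi    H_2   = {renyi(p, 2):.3f} bits")     # 1.541 (collision entropy)
print(f"Min      H_inf = {min_entropy(p):.3f} bits")  # 1.000
```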
Shweta Agrawal, Rachit Garg, Nishant Kumar, Manoj Prabhakaran
We introduce the notion of a Functionally Encrypted Datastore which collects data from multiple data-owners, stores it encrypted on an untrusted server, and allows untrusted clients to make select-and-compute queries on the collected data. Little coordination and no communication is required among the data-owners or the clients. Our security and performance profile is similar to that of conventional searchable encryption systems, while the functionality we offer is significantly richer. The client specifies a query as a pair (Q, f), where Q is a filtering predicate that selects some subset of the dataset and f is a function on some computable values associated with the selected data. We provide efficient protocols for various functionalities of practical relevance. We demonstrate the utility, efficiency, and scalability of our protocols via extensive experimentation. In particular, we use our protocols to model computations relevant to Genome-Wide Association Studies, such as Minor Allele Frequency (MAF), Chi-square analysis, and Hamming distance.
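The statistics themselves are elementary once the select-and-compute machinery is granted; computed in the clear (which is precisely what the protocols avoid), they look as follows. The genotype data and expected counts below are made up for illustration.

```python
from collections import Counter

# Per-individual genotypes at one locus, coded as alternate-allele counts
# (0, 1, or 2); values are invented for illustration.
genotypes = [0, 1, 2, 1, 0, 0, 1, 2]

def minor_allele_frequency(gts):
    """MAF: frequency of the rarer allele among all 2N allele copies."""
    alt, total = sum(gts), 2 * len(gts)
    return min(alt, total - alt) / total

def chi_square(observed, expected):
    """Pearson chi-square statistic over matched category counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def hamming(a, b):
    """Hamming distance between two genotype sequences."""
    return sum(x != y for x, y in zip(a, b))

counts = Counter(genotypes)
observed = [counts[g] for g in (0, 1, 2)]
expected = [2.53, 3.94, 1.53]  # Hardy-Weinberg expectation for these frequencies
print(minor_allele_frequency(genotypes))             # 0.4375
print(round(chi_square(observed, expected), 3))
print(hamming(genotypes, [0, 1, 1, 1, 0, 2, 1, 2]))  # 2
```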