International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

07 November 2019

Nir Drucker, Shay Gueron, Dusan Kostic
ePrint Report
The QC-MDPC code-based KEM Bit Flipping Key Encapsulation (BIKE) is one of the Round-2 candidates of the NIST PQC standardization project. It has a variant that is proved to be IND-CCA secure. The proof models the KEM with some black-box ("ideal") primitives. Specifically, the decapsulation invokes an ideal primitive called a "decoder", required to deliver its output with a negligible Decoding Failure Rate (DFR). The concrete instantiation of BIKE substitutes this ideal primitive with a new decoding algorithm called "Backflip", which is shown to have the required negligible DFR. However, Backflip runs in a variable number of steps, and this number depends on both the input and the key. This paper proposes a decoder that has a negligible DFR and also runs in a fixed (and small) number of steps. We propose that the instantiation of BIKE use this decoder with our recommended parameters. We study the decoder's DFR as a function of the scheme's parameters to obtain a favorable balance between the communication bandwidth and the number of steps that the decoder runs. In addition, we build a constant-time software implementation of the proposed instantiation, and show that its performance characteristics are quite close to those of the IND-CPA variant. Finally, we discuss a subtle gap that needs to be resolved for every IND-CCA secure KEM (BIKE included) whose decapsulation has a nonzero failure probability: the difference between the average DFR and the "worst-case" failure probability per key and ciphertext.
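As a loose illustration of the bit-flipping principle behind such decoders, and of what a fixed iteration budget looks like, here is a toy syndrome decoder for a small Hamming code. It is illustrative only: it is neither BIKE's Backflip nor the fixed-step decoder proposed in the paper.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=10):
    """Toy bit-flipping decoder: within a fixed iteration budget, flip
    every bit involved in the maximal number of unsatisfied parity
    checks.  Returns (candidate word, success flag); running out of
    iterations is a decoding failure."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y, True          # decoding succeeded
        upc = H.T @ syndrome        # unsatisfied-parity-check counts
        y = (y + (upc == upc.max()).astype(int)) % 2
    return y, False                 # decoding failure

# (7,4) Hamming code parity-check matrix, zero codeword with one flipped bit
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)
received[2] = 1                     # single-bit error
decoded, ok = bit_flip_decode(H, received)
```

The DFR discussion above is about exactly the event the `False` branch models: the probability, over keys and error patterns, that the fixed budget does not suffice.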
Luca De Feo, Michael Meyer
ePrint Report
We initiate the study of threshold schemes based on the Hard Homogeneous Spaces (HHS) framework of Couveignes. Quantum-resistant HHS based on supersingular isogeny graphs have recently become usable thanks to the record class group precomputation performed for the signature scheme CSI-FiSh. Using the HHS equivalent of the technique of Shamir's secret sharing in the exponents, we adapt isogeny-based schemes to the threshold setting. In particular, we present threshold versions of the CSIDH public-key encryption scheme and the CSI-FiSh signature scheme. The main highlight is a threshold version of CSI-FiSh which runs almost as fast as the original scheme, for message sizes as low as 1880 B, public key sizes as low as 128 B, and thresholds up to 56; other speed-size-threshold compromises are possible.
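For readers unfamiliar with the underlying secret-sharing step: plain Shamir sharing over a prime field, which the HHS construction lifts "into the exponent", can be sketched as follows. This is the textbook scheme only, not the paper's isogeny-based protocol.

```python
import random

P = 2**127 - 1  # a Mersenne prime; any prime field works for this sketch

def share(secret, k, n):
    """Split `secret` into n Shamir shares with threshold k: evaluate a
    random degree-(k-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k
    (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, k=3, n=5)
```

In the threshold schemes above, the Lagrange recombination happens through the group action rather than by direct field arithmetic, which is what "in the exponents" refers to.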
Muhammed F. Esgin, Raymond K. Zhao, Ron Steinfeld, Joseph K. Liu, Dongxi Liu
ePrint Report
We introduce MatRiCT, an efficient RingCT protocol for blockchain confidential transactions whose security is based on "post-quantum" (module) lattice assumptions. The proof length of the protocol is around two orders of magnitude shorter than that of the existing post-quantum proposal and, unlike that proposal, scales efficiently to large anonymity sets. Further, we provide the first full implementation of a post-quantum RingCT, demonstrating the practicality of our scheme. In particular, a typical transaction can be generated in a fraction of a second and verified in about 23 ms on a standard PC. Moreover, we show how our scheme can be extended to provide auditability, where a user can select a particular authority from a set of authorities to reveal her identity. The user also has the ability to select no auditing, and all these auditing options may co-exist in the same environment.

The key ingredients of MatRiCT, introduced in this work, are 1) the shortest-to-date scalable ring signature from standard lattice assumptions with no Gaussian sampling required, 2) a novel balance zero-knowledge proof, and 3) a novel extractable commitment scheme from (module) lattices. We believe these ingredients to be of independent interest for other privacy-preserving applications such as secure e-voting. Despite allowing 64-bit precision for transaction amounts, our new balance proof, and thus our protocol, does not require a range proof over a wide range (such as a 32- or 64-bit range), which has been a major obstacle to efficient lattice-based solutions.

Further, we provide new formal definitions for RingCT-like protocols, where the real-world blockchain setting is captured more closely. The definitions are applicable in a generic setting, and thus are believed to contribute to the development of future confidential transaction protocols in general (not only in the lattice setting).
Ambili K N, Jimmy Jose
ePrint Report
This paper reports the results of a survey of the architecture and functionalities of blockchains. Moreover, it compares proof-of-work-based blockchains (Bitcoin and Ethereum) against a federated-consensus-based blockchain (Ripple) and a proof-of-validation-based blockchain (Tendermint), along parameters such as peer-to-peer network setup and maintenance, the cryptocurrency involved, the details of transaction execution and validation, block creation, block validation, the consensus protocol, and application development.
Manoj Kumar
ePrint Report
The lightweight encryption design DoT was published by Patil et al. in 2019. It is based on an SPN (substitution-permutation network) structure. Its block and key sizes are 64 bits and 128 bits, respectively. In this paper, we analyse the security of DoT against differential attacks and present a series of differential distinguishers for full-round DoT. Our analysis shows that DoT can be distinguished from a random permutation with probability equal to 2^{-62}. The diffusion layer of DoT is a combination of byte shuffling, an 8-P permutation, 32-bit word shuffling, and circular shift operations. We analyse the security of DoT with and without the 8-P permutation in its diffusion layer. Our results indicate that DoT provides better resistance to differential attacks without the 8-P permutation.
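Differential distinguishers of this kind are built from the cipher's difference distribution table (DDT): single-round transition probabilities are read off the table and multiplied along a trail. A minimal DDT computation, shown on the PRESENT 4-bit S-box purely as an example (DoT's actual components are not reproduced here):

```python
def ddt(sbox):
    """Difference distribution table: ddt[a][b] counts inputs x with
    S(x ^ a) ^ S(x) == b.  The maximal entry over nonzero input
    differences, divided by the table width, is the best single-round
    differential probability."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for a in range(n):
            table[a][sbox[x ^ a] ^ sbox[x]] += 1
    return table

# PRESENT's 4-bit S-box, used here only as a well-known example
sbox = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
T = ddt(sbox)
best = max(T[a][b] for a in range(1, 16) for b in range(16))
# best / 16 is the highest probability of any nontrivial differential
```

A full-round probability like the 2^{-62} above is obtained by chaining such per-round probabilities through the diffusion layer.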
Masayuki Abe, Charanjit S. Jutla, Miyako Ohkubo, Jiaxin Pan, Arnab Roy, Yuyu Wang
ePrint Report
Quasi-adaptive non-interactive zero-knowledge proof (QA-NIZK) systems and structure-preserving signature (SPS) schemes are two powerful tools for constructing practical pairing-based cryptographic schemes. Their efficiency directly affects the efficiency of the derived advanced protocols.

We construct more efficient QA-NIZK and SPS schemes with tight security reductions. Our QA-NIZK scheme is the first one that achieves both tight simulation soundness and constant proof size (in terms of the number of group elements) at the same time, while the recent scheme of Abe et al. (ASIACRYPT 2018) achieved tight security with a proof size depending linearly on the size of the language and the witness. Assuming the hardness of the Symmetric eXternal Diffie-Hellman (SXDH) problem, our scheme contains only 14 elements in the proof, independent of the size of the language and the witness. Moreover, our scheme has tighter simulation soundness than the previous schemes. Technically, we refine and extend a partitioning technique from a recent SPS scheme (Gay et al., EUROCRYPT 2018).

Furthermore, we improve the efficiency of the tightly secure SPS schemes by using a relaxation of NIZK proof systems for OR languages, called designated-prover NIZK systems. Under the SXDH assumption, our SPS scheme contains 11 group elements in the signature, which is the shortest among the tight schemes and matches an early non-tight scheme (Abe et al., ASIACRYPT 2012). Compared to the shortest known non-tight scheme (Jutla and Roy, PKC 2017), our scheme achieves tight security at the cost of 5 additional elements.

All the schemes in this paper are proven secure based on the Matrix Diffie-Hellman assumptions (Escala et al., CRYPTO 2013). This is a class of assumptions which includes the well-known SXDH and DLIN assumptions and provides clean algebraic insights into our constructions.

To the best of our knowledge, our schemes achieve the best efficiency among schemes with the same functionality and security properties. This naturally leads to improvements in the efficiency of cryptosystems based on simulation-sound QA-NIZK and SPS.

05 November 2019

Christian Eder
ePrint Report
In 2019, Gómez described a new public-key cryptography scheme based on ideas from multivariate public key cryptography using hidden irreducible polynomials. We show that the scheme's design has a flaw which lets an attacker recover the private key directly from the public key.
Adi Akavia, Max Leibovich, Yehezkel S. Resheff, Roey Ron, Moni Shahar, Margarita Vald
ePrint Report
Privacy-preserving machine learning enables secure outsourcing of machine learning tasks to an untrusted service provider (server) while preserving the privacy of the user's data (client). Attaining good concrete efficiency for complicated machine learning tasks, such as training decision trees, is one of the challenges in this area. Prior works on privacy-preserving decision trees required the parties to have comparable computational resources, and instructed the client to perform computation proportional to the complexity of the entire task.

In this work we present new protocols for privacy-preserving decision trees, for both training and prediction, achieving the following desirable properties: 1. Efficiency: the client's complexity is independent of the training-set size during training, and of the tree size during prediction. 2. Security: privacy holds against malicious servers. 3. Practical usability: high accuracy, fast prediction, and feasible training demonstrated on standard UCI datasets, encrypted with fully homomorphic encryption. To the best of our knowledge, our protocols are the first to offer all these properties simultaneously.

The core of our work consists of two technical contributions. First, a new low-degree polynomial approximation for functions, leading to faster protocols for training and prediction on encrypted data. Second, a design of an easy-to-use mechanism for proving privacy against malicious adversaries that is suitable for a wide family of protocols, and in particular, our protocols; this mechanism could be of independent interest.
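The flavor of the first contribution, replacing a non-arithmetic function by a low-degree polynomial so that it can be evaluated under homomorphic encryption (which only supports additions and multiplications cheaply), can be sketched with an ordinary least-squares fit of a step function. This is an illustration only, not the paper's specific approximation:

```python
import numpy as np

# Target: a step function of the kind that appears in comparisons,
# which FHE cannot evaluate directly.
xs = np.linspace(-1, 1, 1000)
ys = np.where(xs >= 0, 1.0, 0.0)

# Degree-7 least-squares polynomial approximation on the grid.
coeffs = np.polyfit(xs, ys, deg=7)
approx = np.polyval(coeffs, xs)

max_err = np.abs(approx - ys).max()    # large near the discontinuity
mean_err = np.abs(approx - ys).mean()  # small on average over the interval
```

Evaluating the resulting polynomial on encrypted data costs only a handful of homomorphic multiplications, which is the efficiency point the paper exploits (with its own, more careful, approximation).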
Geoffroy Couteau, Bill Roscoe, Peter Ryan
ePrint Report
We describe a new protocol to achieve two-party $\epsilon$-fair exchange: at any point in the unfolding of the protocol, the difference in the probabilities of the parties having acquired the desired term is bounded by a value $\epsilon$ that can be made as small as necessary. Our construction uses oblivious transfer and sidesteps previous impossibility results by using timed-release encryption, which releases its contents only after some lower-bounded time. We show that our protocol can be easily generalized to an $\epsilon$-fair two-party protocol for all functionalities. To our knowledge, this is the first protocol to truly achieve $\epsilon$-fairness for all functionalities.

All previous constructions achieving some form of fairness for all functionalities (without relying on a trusted third party) had a strong limitation: fairness was only guaranteed to hold if the honest parties are at least as powerful as the corrupted parties and invest a similar amount of resources in the protocol, an assumption which is often not realistic. Our construction does not have this limitation: our protocol provides a clear upper bound on the running time of all parties, and partial fairness holds even if the corrupted parties have much more time or computational power than the honest parties. Interestingly, this shows that a minimal use of timed-release encryption suffices to circumvent an impossibility result of Katz and Gordon regarding $\epsilon$-fair computation for all functionalities, without having to make the (unrealistic) assumption that the honest parties are as computationally powerful as the corrupted parties; this assumption was previously believed to be unavoidable in order to overcome the impossibility result.

We present detailed security proofs of the new construction, which are non-trivial and form the core technical contribution of this work.
Keiichi Iwamura, Ahmad Akmal Aminuddin Mohd Kamal
ePrint Report
In this paper, we describe two new protocols for secure secrecy computation with information-theoretic security against a semi-honest adversary and a dishonest majority. Typically, unconditionally secure secrecy computation using a secret sharing scheme is considered impossible under the setting of $n<2k-1$. Therefore, in our previous work, we first took the approach of finding the conditions required for secure secrecy computation under the setting of $n<2k-1$ and realized a new technique of conditionally secure secrecy computation. We showed that secrecy computation using a secret sharing scheme can be realized against a semi-honest adversary under the following three preconditions: (1) the value of a secret and a random number used in secrecy multiplication does not include 0; (2) there is a set of shares on 1 that is constructed from random numbers that are unknown to the adversary; and (3) in secrecy computation involving consecutive computations, the position of shares in a set of shares that are handled by each server is fixed. In this paper, we differentiate between the parameter $n$ of the $(k,n)$-threshold secret sharing scheme and the number $N$ of servers/parties, and realize secrecy computation of multiplication under the setting of $k\le N<2k-1$. In addition, we improve the processing speed of our protocol by dividing the computation process into a Preprocessing Phase and a Computation Phase, shifting the communication cost to the Preprocessing Phase. This technique allows information that does not depend on any of the private values to be generated in advance, significantly reducing the communication cost in the Computation Phase. For example, for secrecy computation without repetition, the communication cost can be removed entirely from the Computation Phase. As a result, our method for secrecy computation is faster than conventional methods.
In addition, our protocols provide solutions for the aforementioned three preconditions and realize secure secrecy computation without any limitations in terms of usability.
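The preprocessing/online split described above is reminiscent of Beaver's classic multiplication-triple technique, in which all input-independent randomness is generated before the private inputs are known. The paper's protocols differ, but a minimal Beaver-style sketch (two-party additive sharing, all "parties" simulated in one process) shows the cost structure being targeted:

```python
import random

P = 2**31 - 1  # prime modulus for additive secret sharing

def share(x):
    """Additive 2-out-of-2 sharing of x modulo P."""
    a = random.randrange(P)
    return a, (x - a) % P

# Preprocessing Phase (input-independent): a random triple c = a * b.
a, b = random.randrange(P), random.randrange(P)
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share(a * b % P)

# Computation Phase: multiply secrets x * y; the only values opened
# are the masked differences x - a and y - b.
x, y = 1234, 5678
x0, x1 = share(x)
y0, y1 = share(y)
d = (x0 - a0 + x1 - a1) % P   # opened: x - a
e = (y0 - b0 + y1 - b1) % P   # opened: y - b
z0 = (c0 + d * b0 + e * a0 + d * e) % P  # one party adds the d*e term
z1 = (c1 + d * b1 + e * a1) % P
# z0 + z1 = c + (x-a)b + (y-b)a + (x-a)(y-b) = x * y  (mod P)
```

All communication-heavy material (the triple) exists before the inputs do, which is the same effect the paper's Preprocessing Phase achieves in its own setting.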
Nir Bitansky, Omri Shmueli
ePrint Report
We construct the first constant-round zero-knowledge classical argument for NP secure against quantum attacks. We assume the existence of quantum fully homomorphic encryption and other standard primitives, known based on the Learning with Errors Assumption for quantum algorithms. As a corollary, we also obtain the first constant-round zero-knowledge quantum argument for QMA.

At the heart of our protocol is a new no-cloning non-black-box simulation technique.
Hamad Al Shehhi, Emanuele Bellini, Filipe Borba, Florian Caullery, Marc Manzano, Victor Mateu
ePrint Report
The use of the rank metric instead of the Hamming metric has been proposed to address the main drawback of code-based cryptography: large key sizes. There exist several Key Encapsulation Mechanisms (KEM) and Public Key Encryption (PKE) schemes using the rank metric, including some submissions to the NIST call for standardization of Post-Quantum Cryptography. In this work, we present an IND-CCA PKE scheme based on the McEliece adaptation to the rank metric proposed by Loidreau at PQC 2017. This IND-CCA PKE scheme based on the rank metric does not use a hybrid KEM + symmetric encryption construction. Instead, we take advantage of the bigger message space obtained from the different parameters chosen in the rank metric, which allows exchanging multiple keys in one ciphertext. Our proposal is designed considering some specific properties of the random error generated during encryption. We prove our proposal IND-CCA secure in the QROM by using a security notion called disjoint simulatability, introduced by Saito et al. at Eurocrypt 2018. Moreover, we provide security bounds by using the semi-oracles introduced by Ambainis et al.
Maran van Heesch, Niels van Adrichem, Thomas Attema, Thijs Veugen
ePrint Report
Estimates suggest that within 10 years, quantum computers capable of breaking public-key cryptography currently considered safe could exist; this threat is already imminent for information that requires secrecy for more than 10 years. Adding the time required to standardize, implement, and update existing networks underscores the urgency of adopting quantum-safe cryptography.

In this work, we investigate the trade-off between network and CPU overhead and the security levels defined by NIST. To do so, we integrate adapted OpenSSL libraries into OpenVPN, and perform experiments on a large variety of quantum-safe algorithms for TLS versions 1.2 and 1.3, using OpenVPN and HTTPS independently. We describe the difficulties we encountered during the integration and report the experimental performance results, comparing the setup of a quantum-safe connection with the setup of a connection without additional post-quantum cryptography.
Panos Kampanakis, Dimitrios Sikeridis
ePrint Report
The recent advances and attention to quantum computing have raised serious security concerns among IT professionals. The ability of a quantum computer to efficiently solve the (elliptic curve) discrete logarithm and integer factorization problems poses a threat to current public-key exchange, encryption, and digital signature schemes.

Consequently, in 2016 NIST initiated an open call for quantum-resistant crypto algorithms. This process, currently in its second round, has yielded nine signature algorithms for possible standardization. In this work, we evaluate two post-quantum signature use-cases and analyze the signature schemes that seem most appropriate for them. We first consider Hash-Based Signatures for software signing and secure boot. We propose suitable parameters and show that their acceptable performance makes them good candidates for image signing. We then evaluate NIST candidate post-quantum signatures for TLS 1.3. We show that Dilithium and Falcon are the best available options, but come with an impact on TLS performance. Finally, we present challenges and potential solutions introduced by these algorithms.
Stanislaw Jarecki, Hugo Krawczyk, Jason Resch
ePrint Report
We introduce Oblivious Key Management Systems (KMS) as a more secure alternative to the traditional wrapping-based KMS that form the backbone of key management in large-scale data storage deployments. The new system, which builds on Oblivious Pseudorandom Functions (OPRF), hides keys and object identifiers from the KMS, offers unconditional security for key transport, provides key verifiability, reduces storage, and more. Further, we show how to provide all these features in a distributed threshold implementation that enhances protection against server compromise.

We extend this system with an updatable encryption capability that supports key updates (known as key rotation): upon the periodic change of OPRF keys by the KMS server, a very efficient update procedure allows a client of the KMS service to non-interactively update all its encrypted data to be decryptable only by the new key. This provides forward and post-compromise security, namely security against future and past compromises, respectively, of the client's OPRF keys held by the KMS. Additionally, and in contrast to traditional KMS, our solution supports public-key encryption and dispenses with any interaction with the KMS for data encryption (only decryption by the client requires such communication).
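To see how a non-interactive key update can work in an OPRF-style design, consider a group-based sketch in which a wrapped value has the form hid^k: publishing the ratio of the new and old keys lets stored values be rotated without ever decrypting them. This uses deliberately tiny, insecure parameters and is only an illustration of the exponent trick, not the paper's actual protocol.

```python
p, q = 1019, 509           # toy group: p = 2q + 1, both prime
h = pow(3, 2, p)           # generator of the order-q subgroup

def derive(k, hid):
    """OPRF-style derivation: raise the (hashed) identifier to the key."""
    return pow(hid, k, p)

k_old, k_new = 123, 321
hid = pow(h, 7, p)         # stands in for a hash-to-group of an object id

w_old = derive(k_old, hid)

# Key rotation: the update token is k_new / k_old mod the group order.
delta = k_new * pow(k_old, -1, q) % q
w_new = pow(w_old, delta, p)   # client updates stored value locally
```

Because exponents compose multiplicatively, raising the old wrapped value to delta yields exactly the value derived under the new key, with no interaction and no plaintext exposure.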

Our solutions build on recent work on updatable encryption but with significant enhancements applicable to the remote KMS setting. In addition to the critical security improvements, our designs are highly efficient and ready for use in practice. We report on experimental implementation and performance.
Ameirah al Abdouli, Emanuele Bellini, Florian Caullery, Marc Manzano, Victor Mateu
ePrint Report
Since its invention by McEliece in 1978, cryptography based on Error Correcting Codes (ECC) has suffered from the reputation of not being suitable for constrained devices. Indeed, McEliece's scheme and its variants have large public keys and relatively long ciphertexts. Recent works on these downsides explored the possible use of ECC based on the rank metric instead of the Hamming metric. These codes were introduced in the late 1980s to eliminate errors with repeating patterns, regardless of their Hamming weight. Numerous proposals for the NIST Post-Quantum Cryptography (PQC) competition rely on these codes. It has been proven that lattice-based cryptography and even hash-based signatures can run on lightweight devices, but the question remains open for code-based cryptography. In this work, we demonstrate that this is indeed possible for the rank metric: we have implemented the encryption operation of 5 schemes based on ECC in the rank metric and made them run on an Arm Cortex-M0 processor, the smallest Arm processor available. We describe the technical difficulties of porting rank-based cryptography to a resource-constrained device while maintaining decent performance and a suitable level of security against side-channel attacks, especially timing attacks.
Jens-Peter Kaps, William Diehl, Michael Tempelmeier, Farnoud Farahmand, Ekawat Homsirikamol, Kris Gaj
ePrint Report
In this paper, we propose a comprehensive framework for fair and efficient benchmarking of hardware implementations of lightweight cryptography (LWC). Our framework is centered around the hardware API (Application Programming Interface) for implementations of lightweight authenticated ciphers, hash functions, and cores combining both functionalities. The major parts of our API include the minimum compliance criteria, interface, and communication protocol supported by the LWC core. The proposed API is intended to meet the requirements of all candidates submitted to the NIST Lightweight Cryptography standardization process, as well as all CAESAR candidates and current authenticated cipher and hash function standards. In order to speed up the development of hardware implementations compliant with this API, we are making available the LWC Development Package and the corresponding Implementer's Guide. Equipped with these resources, hardware designers can focus on implementing only the core functionality of a given algorithm. The development package facilitates communication with external modules, full verification of the LWC core using simulation, and generation of optimized results. The proposed API for lightweight cryptography is a superset of the CAESAR Hardware API, endorsed by the organizers of the CAESAR competition, which was successfully used in the development of over 50 implementations of Round 2 and Round 3 CAESAR candidates. The primary extensions include support for optional hash functionality and the development of cores resistant to side-channel attacks. Similarly, the LWC Development Package is a superset of the part of the CAESAR Development Package responsible for supporting Use Case 1 (lightweight) CAESAR candidates.

The primary extensions include support for hash functionality, increased flexibility of the code shared among all candidates, and extended support for the detection of errors preventing the correct operation of cores during experimental testing. Overall, our framework supports (a) fair ranking of candidates in the NIST LWC standardization process from the point of view of their efficiency in hardware, before and after the implementation of countermeasures against side-channel attacks, (b) the ability to perform benchmarking within the limited time devoted to Round 2 and any subsequent rounds of the NIST LWC standardization process, (c) compatibility among implementations of the same algorithm by different designers, and (d) fast deployment of the best algorithms in real-life applications.
Upendra Kapshikar, Ayan Mahalanobis
ePrint Report
The McEliece and Niederreiter cryptosystems are robust and versatile: they work with any linear error-correcting code. They are popular these days because they can be quantum-secure. In this paper, we study the Niederreiter cryptosystem using quasi-cyclic codes. We prove that if these quasi-cyclic codes satisfy certain conditions, the corresponding Niederreiter cryptosystem is resistant to attacks on the hidden subgroup problem using quantum Fourier sampling. Our proof requires the classification of finite simple groups.
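Quasi-cyclic codes owe their compact keys to circulant blocks: each block of the parity-check matrix is fully determined by its first row, so only that row needs to be stored or transmitted. A minimal sketch of the structure:

```python
import numpy as np

def circulant(first_row):
    """Circulant block: row i is the first row rotated right by i.
    A quasi-cyclic parity-check matrix is assembled from such blocks,
    so the whole n x n block is described by n entries instead of n^2."""
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)])

C = circulant([1, 0, 1, 0, 0])
# Every row has the same weight as the first row; the cyclic symmetry
# is also what gives these codes the extra algebraic structure that
# quantum Fourier sampling attacks try to exploit.
```

The conditions proved in the paper identify when this extra symmetry does not help a quantum Fourier sampling adversary.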
Martin R. Albrecht, Alex Davidson, Amit Deo, Nigel P. Smart
ePrint Report
Verifiable Oblivious Pseudorandom Functions (VOPRFs) are protocols that allow a client to learn verifiable pseudorandom function (PRF) evaluations on inputs of their choice. The PRF evaluations are computed by a server using its own secret key. The security of the protocol prevents the server from learning anything about the client's input, and likewise prevents the client from learning anything about the server's key. VOPRFs have many applications, including password-based authentication, secret sharing, anonymous authentication, and efficient private set intersection. In this work, we construct the first round-optimal (online) VOPRF protocol whose security rests on well-known lattice hardness assumptions. Our protocol requires constructions of non-interactive zero-knowledge arguments of knowledge (NIZKAoK). Using lattice analogues of Stern-type proofs, we show that our VOPRF may be securely instantiated in the quantum random oracle model. We construct such arguments as extensions of prior work in the area of lattice-based zero-knowledge proof systems.
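The round-optimal blind/evaluate/unblind flow that lattice OPRFs emulate is easiest to see in the classic discrete-log setting. The sketch below (tiny, insecure parameters; no verifiability) shows that structure only, not the paper's lattice construction:

```python
import random

p, q = 1019, 509                 # toy group: order-q subgroup of Z_p^*
g = pow(3, 2, p)

def hash_to_group(x):
    """Toy stand-in for a proper hash-to-group function."""
    return pow(g, (hash(x) % (q - 1)) + 1, p)

k = 77                           # server's secret PRF key

# Client: blind the hashed input with a random exponent r.
hx = hash_to_group("password")
r = random.randrange(1, q)
blinded = pow(hx, r, p)          # the server sees only this

# Server: one message -- evaluate the key on the blinded element.
evaluated = pow(blinded, k, p)

# Client: unblind with r^{-1} mod q to obtain the PRF output hx^k.
prf_out = pow(evaluated, pow(r, -1, q), p)
```

One message each way suffices, which is what "round-optimal" refers to; the verifiable variant additionally proves the server used the committed key.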
Jiwon Lee, Jaekyoung Choi, Jihye Kim, Hyunok Oh
ePrint Report
In pairing-based zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), the proof system often needs to be combined with encryption. As a typical example, a blockchain-based voting system requires the vote to be confidential (using encryption) while verifying voting validity (using zk-SNARKs). In such combined applications, a general solution is to extend the zk-SNARK circuit to include the encryption code. However, the complex cryptographic operations in the encryption algorithm increase the circuit size, which leads to impractically large proving times and CRS sizes.

In this paper, we propose SAVER: Snark-friendly, Additively-homomorphic, and Verifiable Encryption and decryption with Rerandomization, a novel approach that detaches the encryption from the SNARK circuit. The encryption in SAVER has many useful properties. It is SNARK-friendly: the encryption is conjoined with an existing pairing-based SNARK in such a way that the encryptor can prove pre-defined properties while encrypting the message apart from the SNARK. It is additively homomorphic: the ciphertext inherits a homomorphic property from its ElGamal-based encryption. It is verifiable encryption: one can verify arbitrary properties of encrypted messages by connecting with the SNARK system. It provides verifiable decryption: anyone without the secret key can still verify that the decrypted message is indeed from the given ciphertext. It provides rerandomization: the proof and the ciphertext can be rerandomized as independent objects, so that even the encryptor (or prover) herself cannot identify the origin.
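The additive homomorphism mentioned above is, in spirit, the "exponential ElGamal" trick: encrypting g^m instead of m makes componentwise ciphertext multiplication add the plaintexts. A toy sketch over a small prime-order group (illustrative only; SAVER's actual scheme lives in a pairing group):

```python
import random

p, q = 1019, 509            # toy group: p = 2q + 1, both prime
g = pow(3, 2, p)            # order-q generator

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m):
    """Exponential ElGamal: (g^r, pk^r * g^m)."""
    r = random.randrange(1, q)
    return pow(g, r, p), pow(pk, r, p) * pow(g, m, p) % p

def add(c1, c2):
    """Componentwise product encrypts the SUM of the two plaintexts."""
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

def dec(c):
    """Recover g^m, then brute-force the (small) exponent m."""
    gm = c[1] * pow(pow(c[0], sk, p), -1, p) % p
    return next(m for m in range(q) if pow(g, m, p) == gm)
```

Decryption requires solving a small discrete log, which is why such schemes are used with bounded plaintexts (e.g., vote tallies); multiplying a ciphertext by a fresh encryption of 0 also gives the rerandomization property described above.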

For a representative application, we define and construct a voting system scenario and explain the necessity of each property in SAVER. We prove the IND-CPA security of the encryption, along with the soundness of the encryption and decryption proofs. The experimental results show that a voting system designed from our SAVER yields a 0.7 s proving/encryption (voting) time and a 16 MB CRS for the SNARK, regardless of the message size.