IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

27 June 2020

Daniel E. Lucani, Lars Nielsen, Claudio Orlandi, Elena Pagnin, Rasmus Vestergaard
ePrint Report
Cloud Storage Providers (CSPs) offer solutions to relieve users from locally storing vast amounts of data, including personal and sensitive data. While users may desire to retain some privacy on the data they outsource, CSPs are interested in reducing the total storage space by employing compression techniques such as deduplication. We propose a new cryptographic primitive that simultaneously realizes both requirements: Multi-Key Revealing Encryption (MKRE). The goal of MKRE is to disclose the result of a pre-defined function over multiple ciphertexts, even if the ciphertexts were generated using different keys, while revealing nothing else about the data. We present a formal model and a security definition for MKRE and provide a construction of MKRE for generalized deduplication that only uses symmetric-key primitives in a black-box way. Our construction allows (a) cloud providers to reduce the storage space by using generalized deduplication to compress encrypted data across users, and (b) each user to maintain a certain privacy level for the outsourced information. Our scheme can be proven secure in the random oracle model (and we argue that this is a necessary evil). We develop a proof-of-concept implementation of our solution. For a test data set, our MKRE construction achieves secure generalized deduplication with a compression ratio of 87% for 1KB file chunks and 82.2% for 8KB chunks. Finally, our experiments show that, compared to a generalized deduplication setup with unencrypted files, adding privacy via MKRE introduces a compression overhead of less than 3% and reduces the storage throughput by at most 6.9%.
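As a rough illustration of why deduplication over encrypted data is possible at all, here is a toy sketch in which each chunk is encrypted under a key derived from the chunk itself, so identical chunks from different users produce identical ciphertexts that a server can store once. This is only convergent-encryption-style pseudocode for intuition, not the paper's MKRE construction, and the helper names are made up.

```python
# Toy "convergent encryption" sketch: the key is derived from the chunk itself,
# so equal chunks produce equal ciphertexts and the server can deduplicate them
# without ever seeing plaintext. Standard library only; helper names are made up.
# This is NOT the paper's MKRE construction, just intuition for encrypted dedup.
import hashlib

def chunk_key(chunk: bytes) -> bytes:
    # Key derived from the chunk itself; equal chunks -> equal keys.
    return hashlib.sha256(b"chunk-key|" + chunk).digest()

def encrypt_chunk(chunk: bytes) -> bytes:
    # Toy stream cipher: XOR the chunk with a SHAKE-256 keystream.
    stream = hashlib.shake_256(chunk_key(chunk)).digest(len(chunk))
    return bytes(a ^ b for a, b in zip(chunk, stream))

class DedupStore:
    """Server-side store that keeps each distinct ciphertext once."""
    def __init__(self):
        self.blobs = {}
    def put(self, ct: bytes) -> str:
        h = hashlib.sha256(ct).hexdigest()
        self.blobs.setdefault(h, ct)       # deduplicated insert
        return h

store = DedupStore()
alice_ct = encrypt_chunk(b"a 1KB chunk both users happen to hold")
bob_ct   = encrypt_chunk(b"a 1KB chunk both users happen to hold")
assert store.put(alice_ct) == store.put(bob_ct)
print("stored blobs:", len(store.blobs))   # -> 1
```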
Ehsan Ebrahimi, Céline Chevalier, Marc Kaplan, Michele Minelli
ePrint Report
In this note, we study the security of oblivious transfer protocols in the presence of adversarial superposition queries. We define a security notion for the sender against a corrupted receiver that makes a superposition query. We present an oblivious transfer protocol that is secure against a quantum receiver restricted to classical queries, but insecure when the receiver makes a quantum query. In addition, we present an OT protocol that resists the attack presented in this paper; however, we leave a security proof for this protocol as a direction for future work.
Mojtaba Bisheh Niasar, Rami El Khatib, Reza Azarderakhsh, Mehran Mozaffari-Kermani
ePrint Report
This paper demonstrates fast and compact implementations of Elliptic Curve Cryptography (ECC) for efficient key agreement over Curve25519. Curve25519 has recently been adopted as a key-exchange method for several applications, from small connected devices to cloud services, and is included in the National Institute of Standards and Technology (NIST) recommendations for public-key cryptography. This paper presents designs at three performance levels: lightweight, area-time efficient, and high-performance architectures. Lightweight hardware implementations suit many Internet of Things (IoT) applications, where resources are at a premium. Our lightweight architecture utilizes 90% fewer resources than the best previous work while remaining better optimized in terms of $A\cdot T$ (area $\times$ time). For implementations that must be efficient in both time and resources, our area-time efficient architecture can establish almost 7,000 key sessions per second, which is 64% faster than previous work; it combines well-scheduled interleaved multiplication with a reduction algorithm. Additionally, we offer a fast architecture for high-performance applications based on the 4-level Karatsuba method and Carry-Compact Addition (CCA). Our high-performance architecture also outperforms previous work in terms of $A\cdot T$: the results show 9% and 29% improvements in $A\cdot T$ and $A_{d}\cdot T$ (DSP count $\times$ time), respectively. All architectures are variable-base-point designs implemented on the Xilinx Zynq-7020 FPGA family, for which performance and implementation metrics are reported and compared. Finally, various side-channel attack countermeasures are embedded in the proposed architectures.
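The high-performance design above relies on a 4-level Karatsuba multiplier. As a reminder of the underlying recurrence (three half-size products instead of four), here is a minimal software sketch of recursive Karatsuba multiplication on integers; it is not the paper's hardware architecture, and the cutoff is arbitrary.

```python
# Minimal recursive Karatsuba multiplication on non-negative integers.
# Illustrates the recurrence behind a multi-level Karatsuba multiplier
# (3 half-size products instead of 4); not the paper's hardware design.
def karatsuba(x: int, y: int, cutoff_bits: int = 64) -> int:
    if x.bit_length() <= cutoff_bits or y.bit_length() <= cutoff_bits:
        return x * y                      # schoolbook base case
    m = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> m, x & ((1 << m) - 1)
    y_hi, y_lo = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(x_hi, y_hi, cutoff_bits)
    z0 = karatsuba(x_lo, y_lo, cutoff_bits)
    z1 = karatsuba(x_hi + x_lo, y_hi + y_lo, cutoff_bits) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0

a = 0x1234567890ABCDEF1234567890ABCDEF
b = 0xFEDCBA0987654321FEDCBA0987654321
assert karatsuba(a, b) == a * b
```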
Ying Guo, Zhenfu Cao, Xiaolei Dong
ePrint Report
In Paillier's scheme, $c=y^{m}x^{n}\,\mathrm{mod}\,n^{2}$, with $m \in Z_{n}$, $x \in Z_{n^{2}}^{*}$, and $n=PQ$ a product of two large primes. Damgård and Jurik generalized Paillier's scheme to reduce the ciphertext expansion: $c=y^{m}x^{n^{s}}\,\mathrm{mod}\,n^{s+1}$, with $m \in Z_{n^{s}}$, $x \in Z_{n^{s+1}}^{*}$. In this paper, we propose a new generalization of Paillier's scheme and prove that our scheme is IND-CPA secure under the $k$-subgroup assumption for $\Pi_{k}$. Compared to Damgård and Jurik's generalization, our scheme has three advantages. (a) We use the modulus $P^{a}Q^{b}$ instead of $P^{a}Q^{a}$, so it is more general. (b) We use a general $y$ satisfying $P^{a-1} \mid \mathrm{ord}_{P^{a}}(y)$ and $Q^{b-1} \mid \mathrm{ord}_{Q^{b}}(y)$, instead of the $y=(1+PQ)^{j}x \,\mathrm{mod}\,N$ used in Damgård and Jurik's generalization. (c) Our decryption procedure is more efficient than that of Damgård and Jurik's generalization.
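For readers who want the base scheme in front of them, the following is a minimal sketch of textbook Paillier (the $s=1$ case with $y = n+1$) using deliberately tiny primes; it is not the proposed generalization, only a concrete rendering of the notation above.

```python
# Textbook Paillier (the s = 1 base case, with y = n + 1), using tiny primes
# for illustration only; not the paper's generalization. Requires Python 3.8+
# for pow(x, -1, n).
from math import gcd
import random

p, q = 293, 433                 # toy primes; real keys use ~1536-bit primes
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda = lcm(p-1, q-1)
g = n + 1

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # mu = L(g^lambda mod n^2)^(-1) mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2   # c = g^m * r^n mod n^2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

m1, m2 = 1234, 4321
c1, c2 = encrypt(m1), encrypt(m2)
assert decrypt(c1) == m1
assert decrypt((c1 * c2) % n2) == (m1 + m2) % n    # additive homomorphism
```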
Viet Ba Dang, Farnoud Farahmand, Michal Andrzejczak, Kamyar Mohajerani, Duc Tri Nguyen, Kris Gaj
ePrint Report
Performance in hardware has typically played a major role in differentiating among leading candidates in cryptographic standardization efforts. Winners of two past NIST cryptographic contests (Rijndael in the case of AES and Keccak in the case of SHA-3) were ranked consistently among the two fastest candidates when implemented using FPGAs and ASICs. Hardware implementations of cryptographic operations may quite easily outperform software implementations for at least a subset of major performance metrics, such as speed, power consumption, and energy usage, as well as in terms of security against physical attacks, including side-channel analysis. Using hardware also permits much higher flexibility in trading one subset of these properties for another. A large number of candidates at the early stages of the standardization process makes accurate and fair comparison very challenging. Nevertheless, in all major past cryptographic standardization efforts, future winners were identified quite early in the evaluation process and held their lead until the standard was selected. Additionally, identifying some candidates as either inherently slow or costly in hardware helped to eliminate a subset of candidates, saving countless hours of cryptanalysis. Finally, early implementations provided a baseline for future design space explorations, paving the way to more comprehensive and fairer benchmarking at the later stages of a given cryptographic competition. In this paper, we first summarize, compare, and analyze results reported by other groups until mid-May 2020, i.e., until the end of Round 2 of the NIST PQC process. We then outline our own methodology for implementing and benchmarking PQC candidates using both hardware and software/hardware co-design approaches. We apply our hardware approach to 6 lattice-based CCA-secure Key Encapsulation Mechanisms (KEMs), representing 4 NIST PQC submissions. We then apply a software/hardware co-design approach to 12 lattice-based CCA-secure KEMs, representing 8 Round 2 submissions. We hope that, combined with results reported by other groups, our study will provide NIST with helpful information regarding the relative performance of a significant subset of Round 2 PQC candidates, assuming that at least their major operations, and possibly the entire algorithms, are off-loaded to hardware.
Catherine Meadows
ePrint Report
In this paper we develop symbolic and computational representations for a class of cryptographic modes of operation, where the symbolic representations are modeled as elements of a term algebra, and we apply them to the analysis of the computational security of the modes. We derive two different conditions on the symbolic representations, a simple one that is sufficient for security, and a more complex one that is both necessary and sufficient, and prove that these properties hold. The problem of deciding computational security then is reduced to the problem of solving certain disunification problems. We also discuss how these results can be extended.

This paper subsumes and replaces the paper "Symbolic Security Criteria for Blockwise Adaptive Secure Modes of Encryption" (IACR ePrint 2017/1152).
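As a rough illustration of what a symbolic representation of a mode of operation can look like (not the paper's term algebra or security criteria), the sketch below builds the ciphertext blocks of CBC as terms over an uninterpreted block-cipher symbol f, XOR, an IV, and message variables.

```python
# Rough illustration of a symbolic view of a mode of operation (not the paper's
# formalism): CBC ciphertext blocks become nested terms over a free symbol f
# (the block cipher), xor, an IV, and message variables.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Term:
    op: str                      # 'f', 'xor', or 'var'
    args: Tuple = ()

    def __str__(self):
        if self.op == 'var':
            return self.args[0]
        return f"{self.op}({', '.join(map(str, self.args))})"

def var(name: str) -> Term:
    return Term('var', (name,))

def xor(a: Term, b: Term) -> Term:
    return Term('xor', (a, b))

def f(a: Term) -> Term:          # uninterpreted block-cipher call
    return Term('f', (a,))

def cbc(num_blocks: int):
    """Symbolic CBC encryption: c_i = f(m_i xor c_{i-1}), with c_0 = IV."""
    prev = var('IV')
    blocks = []
    for i in range(1, num_blocks + 1):
        c = f(xor(var(f"m{i}"), prev))
        blocks.append(c)
        prev = c
    return blocks

for i, c in enumerate(cbc(3), start=1):
    print(f"c{i} =", c)
# c1 = f(xor(m1, IV))
# c2 = f(xor(m2, f(xor(m1, IV)))), and so on.
```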
Mahabir Prasad Jhanwar, Sumanta Sarkar
ePrint Report
Ever since COVID-19 started spreading across the world's geographies one by one, countries have been struggling to tackle the emergency, stretching their healthcare infrastructure beyond its limits. The world is now also trying to find ways to “live with the virus” and cope with the “new normal”. In this effort, contact tracing is thought to be a vital tool that can quickly identify people who have been in the vicinity of an infected person. Some countries have adopted centralized contact tracing in the belief that it is the most effective and easiest solution. Centralized contact tracing has been at the centre of debate, as it is a potential tool for launching mass surveillance. In response, the decentralized model has been introduced, which gives control fully to the citizens. However, in the decentralized model, the onus is entirely on users to act appropriately if they get a risk notification for coming into close contact with a COVID-19-positive patient. The decentralized model will fail if a large mass of users do not act accordingly after receiving the risk notification. Therefore, a balance needs to be struck between the centralized and decentralized models, given the socio-economic impact of this pandemic.

In this article, we take a hybrid approach and propose PHyCT, which guarantees fail-safety, privacy, and security. The system acts like a decentralized one, in that the identities of users remain anonymous to the central authority. However, in case of infection, the infected user and the central authority can together (and only together) reveal the identities of the users who have come into close contact with them. This feature makes it possible to handle the situation where too many non-compliant users fail to report after receiving an exposure notification. Users who have not come into close contact with any infected person remain anonymous.
Jean-François Biasse, Sriram Chelleppan, Sherzod Kariev, Noyem Khan, Lynette Menezes, Efe Seyitoglu, Charurut Somboonwit, Attila Yavuz
ePrint Report
We present a privacy-preserving protocol to anonymously collect information about a social graph. The typical application of our protocol is Bluetooth-enabled "contact-tracing apps", which record information about proximity between users to infer the risk of propagation of COVID-19 among them. The main contribution of this work is to enable a central server to construct an anonymous graph of interactions between users. This graph gives the central authority insight into the propagation of the virus and allows it to run predictive models while protecting the privacy of users. The main technical tool we use is an accumulator scheme due to Camenisch and Lysyanskaya to keep track of the credentials of users and to prove possession of accumulated credentials in zero-knowledge.
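The Camenisch-Lysyanskaya accumulator works in the RSA setting. The toy sketch below shows the bare accumulate/witness/verify mechanics with insecure parameters; the zero-knowledge layer and everything specific to the contact-tracing protocol are omitted.

```python
# Toy RSA accumulator sketch: credentials are encoded as primes, accumulated by
# modular exponentiation, and membership is shown with a witness. The
# zero-knowledge layer used in the paper is omitted; parameters are insecure toys.
import hashlib

N = 61 * 53          # toy modulus; a real accumulator uses a large RSA modulus
g = 3                # public base

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def encode(credential: bytes) -> int:
    # Map a credential to a prime via hash-then-next-prime (illustrative only).
    x = int.from_bytes(hashlib.sha256(credential).digest()[:4], "big") | 1
    while not is_prime(x):
        x += 2
    return x

def accumulate(creds):
    primes = [encode(c) for c in creds]
    acc = g
    for p in primes:
        acc = pow(acc, p, N)
    return acc, primes

def witness(primes, idx):
    # Witness for primes[idx]: the accumulator computed without that element.
    w = g
    for j, p in enumerate(primes):
        if j != idx:
            w = pow(w, p, N)
    return w

acc, primes = accumulate([b"user-1", b"user-2", b"user-3"])
w1 = witness(primes, 1)
assert pow(w1, primes[1], N) == acc     # membership check for user-2
```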
Chaya Ganesh, Claudio Orlandi, Daniel Tschudi, Aviv Zohar
ePrint Report
In proof-of-work based cryptocurrencies, miners invest computing power to maintain a distributed ledger. The drawback of such a consensus protocol is its immense energy consumption. Bitcoin, for example, consumes as much energy as a small nation state. To prevent this waste of energy, various consensus mechanisms such as proof-of-space or proof-of-stake have been proposed. In proof-of-stake, block creators are selected based on the amount of money they stake instead of their expended computing power.

In this work we study Virtual ASICs, a generalization of proof-of-stake. Virtual ASICs are essentially a virtualized version of proof-of-work: miners can buy on-chain virtual mining machines, which can be powered by virtual electricity. Similar to their physical counterparts, each powered virtual ASIC has a certain chance of winning the right to create the next block. In the boundary case where virtual electricity is free, the protocol corresponds to proof-of-stake using an ASIC token that is separate from the currency itself (the amount of stake equals one's virtual computing power). In the other boundary case, where virtual computers are free, we get a proof-of-burn equivalent, that is, a consensus mechanism in which miners 'burn' money to obtain lottery tickets for the right to create the next block. We provide the cryptographic machinery required to base a consensus protocol on Virtual ASICs, as well as to sell them in sealed-bid auctions on-chain. We ensure that as long as a majority of the miners in the system mine honestly, bids remain both private and binding, and that miners cannot censor the bids of their competitors. To achieve this, we introduce a novel all-or-nothing broadcast functionality for blockchains that is of independent interest.
Lydia Garms, Siaw-Lynn Ng, Elizabeth A. Quaglia, Giulia Traverso
ePrint Report
When peers rate each other, they may choose to rate inaccurately in order to boost their own reputation or unfairly lower another’s. This could be successfully mitigated by having a reputation server incentivise accurate ratings with a reward. However, assigning rewards becomes a challenge when ratings are anonymous, since the reputation server cannot tell which peers to reward for rating accurately. To address this, we propose an anonymous peer rating system in which users can be rewarded for accurate ratings, and we formally define its model and security requirements. In our system ratings are rewarded in batches, so that users claiming their rewards only reveal they authored one in this batch of ratings. To ensure the anonymity set of rewarded users is not reduced, we also split the reputation server into two entities, the Rewarder, who knows which ratings are rewarded, and the Reputation Holder, who knows which users were rewarded. We give a provably secure construction satisfying all the security properties required. For our construction we use a modification of a Direct Anonymous Attestation scheme to ensure that peers can prove their own reputation when rating others, and that multiple feedback on the same subject can be detected. We then use Linkable Ring Signatures to enable peers to be rewarded for their accurate ratings, while still ensuring that ratings are anonymous. Our work results in a system which allows for accurate ratings to be rewarded, whilst still providing anonymity of ratings with respect to the central entities managing the system.
Dario Catalano, Georg Fuchsbauer, Azam Soleimanian
ePrint Report
A double-authentication preventing signature (DAPS) scheme is a digital signature scheme equipped with a self-enforcement mechanism. Messages consist of an address and a payload component, and a signer is penalized if she signs two messages with the same address but different payloads. The penalty is the disclosure of the signer's signing key. Most of the existing DAPS schemes are proved secure in the random oracle model (ROM), while the efficient ones in the standard model only support address spaces of polynomial size.

We present DAPS schemes that are efficient, secure in the standard model under standard assumptions and support large address spaces. Our main construction builds on vector commitments (VC) and double-trapdoor chameleon hash functions (DTC). We also provide a DAPS realization from Groth-Sahai (GS) proofs that builds on a generic construction by Derler et al., which they instantiate in the ROM. The GS-based construction, while less efficient than our main one, shows that a general yet efficient instantiation of DAPS in the standard model is possible.

An interesting feature of our main construction is that it can be easily modified to guarantee security even in the most challenging setting where no trusted setup is provided. It seems to be the first construction achieving this in the standard model.
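The self-enforcement property described above (double signing discloses the signing key) has a familiar analogue: extracting a Schnorr secret key from two transcripts that reuse the same nonce. The sketch below shows that extraction on toy parameters; it is an analogy only, not the DAPS mechanism of this paper.

```python
# Key extraction from two Schnorr transcripts that share the same nonce k
# (the same commitment R) but have distinct challenges e1 != e2. This is NOT
# the paper's DAPS mechanism; it only illustrates the "double signing leaks
# the key" flavour of self-enforcement, on toy group parameters.
p, q = 607, 101                      # q | p - 1, both prime (toy sizes)
g = pow(2, (p - 1) // q, p)          # generator of the order-q subgroup
assert g != 1 and pow(g, q, p) == 1

x = 57                               # secret key
y = pow(g, x, p)                     # public key

k = 23                               # nonce reused in both transcripts
R = pow(g, k, p)
e1, e2 = 11, 42                      # two distinct challenges
s1 = (k + e1 * x) % q
s2 = (k + e2 * x) % q

# Both transcripts verify: g^s == R * y^e (mod p)
assert pow(g, s1, p) == (R * pow(y, e1, p)) % p
assert pow(g, s2, p) == (R * pow(y, e2, p)) % p

# Anyone holding both can recover x:  s1 - s2 = (e1 - e2) * x  (mod q)
recovered = ((s1 - s2) * pow((e1 - e2) % q, -1, q)) % q
assert recovered == x
print("extracted signing key:", recovered)
```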
Michel Abdalla, Florian Bourse, Hugo Marival, David Pointcheval, Azam Soleimanian, Hendrik Waldner
ePrint Report
Multi-client functional encryption (MCFE) is an extension of functional encryption (FE) in which the decryption procedure involves ciphertexts from multiple parties. It is particularly useful in the context of data outsourcing and cloud computing, where the data may come from different sources and where some data centers or servers may need to perform different types of computation on this data. In order to protect the privacy of the encrypted data, the server in possession of a functional decryption key should only be able to compute the final result in the clear, and learn no other information about the encrypted data. In this paper, we consider MCFE schemes supporting encryption labels, which allow the encryptor to limit the amount of possible mix-and-match that can take place during decryption. This is achieved by only allowing the decryption of ciphertexts that were generated with respect to the same label. This flexible form of FE was already investigated by Abdalla et al. [Asiacrypt 2019] and Chotard et al. [Asiacrypt 2018]. The former provided a general construction based on different standard assumptions, but its ciphertext size grows quadratically with the number of clients. The latter gave an MCFE based on the Decisional Diffie-Hellman (DDH) assumption, which requires a small inner-product space. In this work, we overcome these deficiencies by presenting three constructions with linear-sized ciphertexts based on the Matrix-DDH (MDDH), Decisional Composite Residuosity (DCR) and Learning with Errors (LWE) assumptions in the random-oracle model. We also implement our constructions to evaluate their concrete efficiency.
Takashi Yamakawa, Mark Zhandry
ePrint Report
In this note, we observe that a proof of quantumness in the random oracle model recently proposed by Brakerski et al. can be seen as a proof of quantum access to a random oracle. Based on this observation, we give the first examples of natural cryptographic schemes that separate the classical and quantum random oracle models. Specifically, we construct digital signature and public-key encryption schemes that are secure in the classical random oracle model but insecure in the quantum random oracle model, assuming the quantum hardness of the learning with errors problem.
Sonia Belaïd, Jean-Sébastien Coron, Emmanuel Prouff, Matthieu Rivain, Abdul Rahman Taleb
ePrint Report
The masking countermeasure is among the most powerful countermeasures against side-channel attacks. Leakage models have been exhibited to reason theoretically about the security of such masked implementations. So far, the most widely used leakage model is the probing model defined by Ishai, Sahai, and Wagner at CRYPTO 2003. While it is conveniently suited to security proofs, it does not capture an adversary exploiting full leakage traces as, e.g., in horizontal attacks. Those attacks target the multiple manipulations of the same share to reduce noise and recover the corresponding value. To capture a wider class of attacks another model was introduced, referred to as the random probing model: given a leakage parameter p, each wire of the circuit leaks its value with probability p. While this model much better reflects the physical reality of side channels, it requires more complex security proofs and does not yet come with practical constructions. In this paper, we define the first framework dedicated to the random probing model. We provide an automatic tool, called VRAPS, to quantify the random probing security of a circuit from its leakage probability. We also formalize a composition property for secure random probing gadgets and exhibit its relation to the strong non-interference (SNI) notion used in the context of probing security. We then revisit the expansion idea proposed by Ananth, Ishai, and Sahai (CRYPTO 2018) and introduce a compiler that builds a random probing secure circuit from small base gadgets achieving a random probing expandability property. We instantiate this compiler with small gadgets for which we verify the expected properties directly with our automatic tool. Our construction can tolerate a leakage probability up to $2^{-8}$, against $2^{-25}$ for the previous construction, with a better asymptotic complexity.
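To make the random probing leakage model concrete, the following toy simulation shares a bit additively and lets every wire leak independently with probability p; with n shares, the secret is fully revealed only when all n wires leak, i.e. with probability p^n. This is only an illustration of the model, not VRAPS or the expanding compiler.

```python
# Toy simulation of the random probing leakage model: a secret bit is XOR-shared
# into n shares, and each wire leaks its value independently with probability p.
# For this trivial "circuit" (the wires are just the shares), the secret is
# revealed only when all n wires leak. Illustration of the model only.
import random

def share(bit: int, n: int):
    """XOR-share one bit into n shares."""
    shares = [random.randrange(2) for _ in range(n - 1)]
    last = bit
    for s in shares:
        last ^= s
    return shares + [last]

def leaked_wires(wires, p: float):
    """Random probing leakage: each wire leaks independently with probability p."""
    return [v for v in wires if random.random() < p]

def full_leak_rate(n: int, p: float, trials: int = 200_000) -> float:
    """Empirical probability that all n shares of a run leak."""
    bad = sum(1 for _ in range(trials) if len(leaked_wires(share(1, n), p)) == n)
    return bad / trials

print(full_leak_rate(2, 0.25))   # close to 0.25^2 = 0.0625
print(full_leak_rate(3, 0.25))   # close to 0.25^3 ~ 0.0156
```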
Ashrujit Ghoshal, Joseph Jaeger, Stefano Tessaro
ePrint Report
This paper initiates the study of the provable security of authenticated encryption (AE) in the memory-bounded setting. Recent works – Tessaro and Thiruvengadam (TCC '18), Jaeger and Tessaro (EUROCRYPT '19), and Dinur (EUROCRYPT '20) – focus on confidentiality, and look at schemes for which trade-offs between the attacker's memory and its data complexity are inherent. Here, we ask whether these results and techniques can be lifted to the full AE setting, which additionally asks for integrity.

We show both positive and negative results. On the positive side, we provide tight memory-sensitive bounds for the security of GCM and its generalization, CAU (Bellare and Tackmann, CRYPTO '16). Our bounds apply to a restricted case of AE security which abstracts the deployment within protocols like TLS, and rely on a new memory-tight reduction to corresponding restricted notions of confidentiality and integrity. In particular, our reduction uses an amount of memory which linearly depends on that of the given adversary, as opposed to only imposing a constant memory overhead as in earlier works (Auerbach et al., CRYPTO '17).

On the negative side, we show that a large class of black-box reductions cannot generically lift confidentiality and integrity security to a joint definition of AE security in a memory-tight way.
Carsten Baum, Bernardo David, Rafael Dowsley, Jesper Buus Nielsen, Sabine Oechsner
ePrint Report
Cryptographic protocols often need to encompass time, e.g., for timeouts. Modeling time formally is therefore crucial, as the security of protocols can then be proven under more realistic assumptions. This is particularly important when considering composition, as protocols are rarely used in a stand-alone setting. This work extends the recent TARDIS model of abstract composable time (ACT) to the case of multiparty functionalities encompassing communication, publicly verifiable time-based primitives, and secure computation. We model delayed multiparty communication through an ACT treatment of broadcast channels and public ledgers. Next, we introduce a publicly verifiable time-lock puzzle (TLP) functionality, which we realize by showing that the TLP construction from TARDIS is publicly verifiable. Finally, we show that these new primitives can be used as building blocks for obtaining highly efficient composable randomness beacons and MPC with output-independent abort and financial fairness.
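For intuition about the time-lock puzzle (TLP) functionality, here is a toy sketch of the classic Rivest-Shamir-Wagner puzzle: the generator, knowing phi(N), can jump T squarings ahead cheaply, while the solver must perform them sequentially. The parameters are insecure toys and this is not the paper's publicly verifiable construction.

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle: the generator, knowing phi(N),
# embeds a value "T sequential squarings into the future" cheaply, while the
# solver must actually perform the T squarings. Insecure toy parameters; this
# is not the paper's publicly verifiable TLP.
from math import gcd
import random

p, q = 65537, 257                # toy primes; real puzzles use a large RSA modulus
N = p * q
phi = (p - 1) * (q - 1)
T = 10_000                       # number of sequential squarings ("time")

def generate(message: int):
    x = random.randrange(2, N)
    while gcd(x, N) != 1:
        x = random.randrange(2, N)
    # Fast with the trapdoor: x^(2^T) mod N via exponent reduction mod phi(N).
    y = pow(x, pow(2, T, phi), N)
    return x, (message + y) % N  # puzzle = (x, masked message)

def solve(x: int, masked: int) -> int:
    # Without the trapdoor: T sequential modular squarings.
    y = x
    for _ in range(T):
        y = (y * y) % N
    return (masked - y) % N

x, masked = generate(424242)
assert solve(x, masked) == 424242
```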
Jung Hee Cheon, Wonhee Cho, Jeong Han Kim, Jiseung Kim
ePrint Report
A weak pseudorandom function (weak PRF) is one of the most important cryptographic primitives owing to its efficiency, although it offers weaker security than a standard PRF.

Recently, Boneh et al. (TCC'18) introduced two new weak PRF candidates, called the basic Mod-2/Mod-3 and the alternative Mod-2/Mod-3 weak PRF. Both use a mixture of linear computations over different small moduli to achieve conceptual simplicity, low complexity (depth-2 ${\sf ACC^0}$) and MPC friendliness. The new candidates are conjectured to be exponentially secure against any adversary that is allowed exponentially many samples, and the basic Mod-2/Mod-3 weak PRF is the only candidate that satisfies all of the above features. However, no previous direct attack on the basic or alternative Mod-2/Mod-3 weak PRF exploits their specific structure.

In this paper, we investigate these weak PRFs from three perspectives: attacks, fixes, and a new analysis to support the hardness conjecture. We first propose direct attacks on the alternative Mod-2/Mod-3 weak PRF, and on the basic Mod-2/Mod-3 weak PRF when a circulant matrix is used as the secret key.

For the alternative Mod-2/Mod-3 weak PRF, we prove that the adversary's advantage is at least $2^{-0.105n}$, where $n$ is the size of the input space of the weak PRF. Similarly, we show that the advantage of our heuristic attack against the weak PRF with a circulant matrix key is larger than $2^{-0.21n}$, contrary to the previous expectation that a structured secret key does not affect the security of a weak PRF. Thus, for the optimistic parameter choice $n = 2\lambda$, where $\lambda$ is the security parameter, parameters should be increased to preserve $\lambda$-bit security when an adversary obtains exponentially many samples.

Next, we provide a simple method for repairing two weak PRFs affected by our attack while preserving the depth-2 ${\sf ACC^0}$ circuit complexity and parameters.

Moreover, we provide an observation and a new analysis to support the exponential hardness conjecture of a basic Mod-2/Mod-3 weak PRF when a secret key is uniformly sampled from $\{0,1\}^{m \times n}$.
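To make the basic Mod-2/Mod-3 candidate concrete, the sketch below follows our reading of Boneh et al.: multiply the input by a binary key matrix over GF(2), then recombine the output bits modulo 3. It is not the authors' reference code, and the toy sizes are for illustration only.

```python
# Minimal sketch of the basic Mod-2/Mod-3 candidate as we read it from
# Boneh et al. (TCC'18): y = K*x over GF(2), output = (sum of the bits of y) mod 3.
# Not the authors' reference code; parameters are illustrative only.
import random

def keygen(m: int, n: int):
    """Random binary key matrix K in {0,1}^{m x n}."""
    return [[random.randrange(2) for _ in range(n)] for _ in range(m)]

def weak_prf(K, x):
    """x is a list of n bits; returns a value in {0, 1, 2}."""
    y = [sum(k_i * x_i for k_i, x_i in zip(row, x)) % 2 for row in K]  # K*x mod 2
    return sum(y) % 3                                                  # mod-3 recombination

n = m = 16                        # toy sizes; the abstract mentions n = 2*lambda as optimistic
K = keygen(m, n)
x = [random.randrange(2) for _ in range(n)]
print(weak_prf(K, x))
```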
George Teseleanu
ePrint Report
In this paper, we extend the concept of bias amplifiers and show how they can be used to detect badly broken noise sources both in the design and production phases of a true random number generator. We also develop a theoretical framework that supports the experimental results obtained in this paper.
Haibo Zhou, Rui Zong, Xiaoyang Dong, Keting Jia, Willi Meier
ePrint Report
We introduce an interpolation attack using the Moebius transform. This reduces the time complexity of obtaining a linear system of equations for specified intermediate state bits, and applies generally to the cryptanalysis of ciphers whose update function has a low algebraic degree. Along this line, we perform an interpolation attack against Elephant-Delirium, a Round 2 submission to the ongoing NIST lightweight cryptography project. This is the first third-party cryptanalysis of this cipher. Moreover, we apply the interpolation attack to the Farfalle pseudo-random constructions Kravatte and Xoofff. Our attacks turn out to be the most efficient method for these ciphers thus far.
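The Moebius transform referred to above is the standard map between a Boolean function's truth table and its algebraic normal form (ANF). The sketch below is the usual butterfly over 2^n entries (it is its own inverse) together with a degree computation; it is only the generic transform, not the attack itself.

```python
# Generic Moebius (ANF) transform of a Boolean function's truth table.
# Self-inverse butterfly over 2^n entries; not the paper's attack code.
def moebius_transform(table):
    """Moebius transform of a length-2^n 0/1 truth table; returns ANF coefficients."""
    t = list(table)
    n = (len(t) - 1).bit_length()
    for i in range(n):
        step = 1 << i
        for j in range(len(t)):
            if j & step:
                t[j] ^= t[j ^ step]   # fold in the sub-cube with bit i cleared
    return t

# Example: f(x0, x1, x2) = x0*x1 ^ x2, truth table indexed by x0 + 2*x1 + 4*x2.
f = [(x0 & x1) ^ x2 for x2 in range(2) for x1 in range(2) for x0 in range(2)]
anf = moebius_transform(f)
# Nonzero ANF coefficients sit at the monomials x0*x1 (index 3) and x2 (index 4).
assert [i for i, c in enumerate(anf) if c] == [3, 4]
# Algebraic degree = max Hamming weight of an index with a nonzero coefficient.
degree = max(bin(i).count("1") for i, c in enumerate(anf) if c)
assert degree == 2
```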
Daniel De Almeida Braga, Pierre-Alain Fouque, Mohamed Sabt
ePrint Report
GlobalPlatform (GP) card specifications define rigorous security requirements for smart cards. As increasingly powerful cards operate within an open ecosystem of multiple players, asymmetric-key protocols become necessary. In this paper, we analyze SCP10, the Secure Channel Protocol (SCP) that relies on RSA for key exchange and authentication. Our findings are twofold. First, we demonstrate several flaws in the design of SCP10. We discuss the scope of the identified flaws by presenting several attack scenarios in which a malicious attacker can recover all the messages protected by SCP10, and we provide a full implementation of these attacks. For instance, an attacker can obtain the freshly generated session keys in less than three hours. Second, we propose a secure implementation of SCP10 and discuss how it mitigates the discovered flaws. Finally, we measure the overhead incurred by the implemented countermeasures.