IACR News
Here you can see all recent updates to the IACR webpage.
07 July 2020
Florian Unterstein, Tolga Sel, Thomas Zeschg, Nisha Jacob, Michael Tempelmeier, Michael Pehl, Fabrizio De Santis
ePrint Report
Secure Elements (SEs) are hardware trust anchors which provide cryptographic services including secure storage of secret keys and certificates. In long-living devices, certain cryptographic functions might become insecure over time, e.g. when new implementation attacks or bugs are discovered, and might need to be updated. On FPGAs, partial reconfiguration (PR) offers the opportunity to overcome this issue by replacing buggy or outdated hardware on the fly. This work provides an architecture for an FPGA-based secure element that can be securely updated. The proposed mechanism uses a side-channel protected authenticated encryption with associated data (AEAD) engine for decryption and authentication of partial bitstreams, while the device-unique key is generated from a Physical Unclonable Function (PUF). A proof-of-concept of the design is implemented on a Xilinx Zynq-7020 FPGA.
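The following minimal software sketch illustrates the update flow described above: a partial bitstream is authenticated and decrypted under a device-unique, PUF-derived key, with update metadata bound as associated data. It is only an illustration of the general AEAD pattern, not the paper's hardware design; the AES-GCM choice, the key-derivation step, and all function names are assumptions made for this example.

```python
# Hedged illustration of the AEAD-protected update flow (not the paper's
# side-channel protected hardware engine). Assumes the `cryptography` package.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_device_key(puf_response: bytes) -> bytes:
    """Stand-in for PUF key generation; real designs add error correction."""
    return hashlib.sha256(puf_response).digest()  # 32-byte key

def install_partial_bitstream(puf_response: bytes, nonce: bytes,
                              encrypted_bitstream: bytes, metadata: bytes) -> bytes:
    """Authenticate and decrypt an update; raises InvalidTag on any tampering."""
    aead = AESGCM(derive_device_key(puf_response))
    # metadata (e.g. version, target partition) is bound as associated data
    return aead.decrypt(nonce, encrypted_bitstream, metadata)
```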
Susumu Kiyoshima
ePrint Report
We give a four-round black-box construction of a commit-and-prove protocol with succinct communication. Our construction is WI and has constant soundness error, and it can be upgraded into one that is ZK and has negligible soundness error by relying on a round-preserving transformation of Khurana et al. (TCC 2018). Our construction is obtained by combining the MPC-in-the-head technique of Ishai et al. (SICOMP 2009) with the two-round succinct argument of Kalai et al. (STOC 2014), and the main technical novelty lies in the analysis of the soundness---we show that, although the succinct argument of Kalai et al. does not necessarily provide soundness for NP statements, it can be used in the MPC-in-the-head technique for proving the consistency of committed MPC views. Our construction is based on sub-exponentially hard collision-resistant hash functions, two-round PIRs, and two-round OTs.
Michele Ciampi, Roberto Parisella, Daniele Venturi
ePrint Report
We study adaptive security of delayed-input Sigma protocols and non-interactive zero-knowledge (NIZK) proof systems in the common reference string (CRS) model. Our contributions are threefold:
- We exhibit a generic compiler taking any delayed-input Sigma protocol and returning a delayed-input Sigma protocol satisfying adaptive-input special honest-verifier zero-knowledge (SHVZK). In case the initial Sigma protocol also satisfies adaptive-input special soundness, our compiler preserves this property.
- We revisit the recent paradigm by Canetti et al. (STOC 2019) for obtaining NIZK proof systems in the CRS model via the Fiat-Shamir transform applied to so-called trapdoor Sigma protocols, in the context of adaptive security. In particular, assuming correlation-intractable hash functions for all sparse relations, we prove that Fiat-Shamir NIZKs satisfy either: (i) adaptive soundness (and non-adaptive zero-knowledge), so long as the challenge is obtained by hashing both the prover's first round and the instance being proven; (ii) adaptive zero-knowledge (and non-adaptive soundness), so long as the challenge is obtained by hashing only the prover's first round, and further assuming that the initial trapdoor Sigma protocol satisfies adaptive-input SHVZK.
- We exhibit a generic compiler taking any Sigma protocol and returning a trapdoor Sigma protocol. Unfortunately, this transform does not preserve the delayed-input property of the initial Sigma protocol (if any). To complement this result, we also give yet another compiler taking any delayed-input trapdoor Sigma protocol and returning a delayed-input trapdoor Sigma protocol with adaptive-input SHVZK.
An attractive feature of our first two compilers is that they allow obtaining efficient delayed-input Sigma protocols with adaptive security, and efficient Fiat-Shamir NIZKs with adaptive soundness (and non-adaptive zero-knowledge) in the CRS model. Prior to our work, the latter was only possible using generic NP reductions.
Arnold G. Reinhold
ePrint Report
Terakey is an encryption system whose confidentiality can be demonstrated from first principles, without making assumptions about the computational difficulty of certain mathematical problems. It employs a key that is much larger than the anticipated volume of message traffic. It is based on the one-time pad, but addresses the risk of key reuse stochastically. Conventional cryptographic techniques can be used to ameliorate infrequent byte collisions. The large size of the key reduces the risk of key exfiltration and facilitates physical security measures to maintain a secure chain of control for the key. Terakey also serves as a potential alternative for comparison with quantum key distribution technology, arguably providing equivalent security with fewer complications.
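As a rough illustration of the stochastic idea described above (a key far larger than the traffic, with rare byte collisions), the toy sketch below draws key bytes at random positions and simply counts reuses. It is not the published Terakey construction; the key size, selection rule, and collision handling shown here are assumptions made for illustration only.

```python
# Toy one-time-pad variant with stochastic key-byte selection (illustration
# only, not the Terakey specification). Collisions are merely counted here;
# the abstract notes they are ameliorated with conventional cryptographic means.
import secrets

KEY_BYTES = 10**7                      # hypothetical key, much larger than the traffic
key = secrets.token_bytes(KEY_BYTES)
used, collisions = set(), 0

def encrypt(message: bytes):
    global collisions
    positions, ct = [], bytearray()
    for m in message:
        p = secrets.randbelow(KEY_BYTES)   # random position in the large key
        collisions += p in used            # rare reuse of a key byte
        used.add(p)
        positions.append(p)                # positions accompany the ciphertext
        ct.append(m ^ key[p])
    return positions, bytes(ct)
```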
Aude Le Gluher, Pierre-Jean Spaenlehauer, Emmanuel Thomé
ePrint Report
The classical heuristic complexity of the Number Field Sieve (NFS) is the solution of an optimization problem that involves an unknown function, usually denoted $o(1)$ and called $\xi(N)$ throughout this paper, which tends to zero as the input $N$ grows. The aim of this paper is to find optimal asymptotic choices of the parameters of NFS as $N$ grows, in order to minimize its heuristic asymptotic computational cost. This amounts to minimizing a function of the parameters of NFS bound together by a non-linear constraint. We provide precise asymptotic estimates of the minimizers of this optimization problem, which yield refined formulas for the asymptotic complexity of NFS. One of the main outcomes of this analysis is that $\xi(N)$ has a very slow rate of convergence: we prove that it is equivalent to $4{\log}{\log}{\log}\,N/(3{\log}{\log}\,N)$. Moreover, $\xi(N)$ has an unpredictable behavior for practical estimates of the complexity. Indeed, we provide an asymptotic series expansion of $\xi$, and numerical experiments indicate that this series starts converging only for $N>\exp(\exp(25))$, far beyond the practical range of NFS. This raises doubts on the relevance of NFS running time estimates that are based on setting $\xi=0$ in the asymptotic formula.
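To make the slow convergence concrete, the short computation below evaluates the leading term $4\log\log\log N/(3\log\log N)$ quoted in the abstract for a few input sizes (the sample sizes are our choice, not the paper's); even around $N=\exp(\exp(25))$ the term remains well above zero.

```python
# Evaluate the leading term 4*logloglog(N) / (3*loglog(N)) of xi(N) for a few
# input sizes, working with log N directly so that N = exp(exp(25)) stays representable.
from math import log, exp

def xi_leading_term(log_N: float) -> float:
    return 4 * log(log(log_N)) / (3 * log(log_N))

for label, log_N in [("~1024-bit N", 1024 * log(2)),
                     ("~4096-bit N", 4096 * log(2)),
                     ("N = exp(exp(25))", exp(25))]:
    print(f"{label:>18}: leading term of xi(N) = {xi_leading_term(log_N):.3f}")
```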
Ashoka SB, Lakshmikanth D
ePrint Report
In recent years, security has come to play an important role in the fields of defense, business, medicine, and industry. Different types of cryptographic algorithms have been implemented in order to provide security with high performance. A hash function is a cryptographic algorithm without a key, such as MD5 or the SHA family; the Secure Hash Algorithm is standardized by NIST for secure hashing in FIPS. In this paper we mainly focus on code optimization and on increasing the performance of SHA-512. To optimize and increase the performance of SHA-512, we modify the algorithm and propose the Modified Secure Hash Algorithm (MSHA-512). It is a one-way hash function that processes a message to produce a condensed representation called a message digest. In the proposed algorithm, within a limited number of rounds (40 rounds instead of 80), we obtain exactly the same result as the traditional SHA-512 algorithm. By using the MSHA-512 algorithm we can reduce the time by up to 50% for the same process, which increases the performance of the algorithm and helps increase the flexibility of a server running streams. MSHA-512 generates the same hash code as SHA-512 for inputs of all sizes. MSHA-512 is designed to overcome the time complexity of SHA-512 and increase its performance by reducing the number of rounds.
Daniel Adkins, Archita Agarwal, Seny Kamara, Tarik Moataz
ePrint Report
Blockchain databases are storage systems that combine properties of blockchains and databases like decentralization, tamper-proofness, low query latency and support for complex queries. Blockchain databases are an emerging and important class of blockchain technology that is critical to the development of non-trivial smart contracts, distributed applications and decentralized marketplaces.
In this work, we consider the problem of designing end-to-end encrypted blockchain databases to support the development of decentralized applications that need to store and query sensitive data. In particular, we show how to design what we call blockchain encrypted multi-maps (EMM) which can be used to instantiate various kinds of NoSQL blockchain databases like key-value stores or document databases. We propose three blockchain EMM constructions, each of which achieves different tradeoffs between query, add and delete efficiency. All of our constructions are legacy-friendly in the sense that they can be implemented on top of any existing blockchain. This is particularly challenging since blockchains do not support data deletion.
We implemented our schemes on the Algorand blockchain and evaluated their concrete efficiency empirically. Our experiments show that they are practical.
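The hardest constraint mentioned above is that blockchains are append-only, so deletions must be encoded rather than performed. The toy (plaintext) multi-map below shows one standard way to do this with tombstone records replayed at query time; it is meant only to illustrate the constraint and is not the paper's encrypted constructions, whose record layout and efficiency tradeoffs differ.

```python
# Toy plaintext multi-map over an append-only log (stand-in for a blockchain):
# deletes append tombstones instead of erasing data, and queries replay the log.
# Illustration of the legacy-friendliness constraint only, not the authors' EMMs.
class AppendOnlyMultiMap:
    def __init__(self):
        self.log = []                                # append-only storage

    def add(self, label, value):
        self.log.append(("add", label, value))

    def delete(self, label, value):
        self.log.append(("del", label, value))       # tombstone; nothing is erased

    def query(self, label):
        values = []
        for op, lbl, val in self.log:                # replay the whole log
            if lbl != label:
                continue
            if op == "add":
                values.append(val)
            elif val in values:
                values.remove(val)
        return values

mm = AppendOnlyMultiMap()
mm.add("alice", "doc1"); mm.add("alice", "doc2"); mm.delete("alice", "doc1")
assert mm.query("alice") == ["doc2"]
```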
Xuan Thanh Do, Duong Hieu Phan, Moti Yung
ePrint Report
Broadcast Encryption is a fundamental primitive supporting sending a secure message to any chosen target set of $N$ users.
While many efficient constructions are known, understanding the efficiency possible for an ``Anonymous Broadcast Encryption'' (ANOBE), i.e., one which can hide the target set itself, is quite open. The best solutions, by Barth, Boneh, and Waters ('06) and Libert, Paterson, and Quaglia ('12), are built on public key encryption (PKE), and their ciphertext sizes are, in fact, $N$ times that of the underlying PKE (rate $=N$). Kiayias and Samari ('12), in turn, showed a lower bound establishing that such a rate is the best possible if $N$ is an independent unbounded parameter. However, when the user set size is bounded by a system parameter (e.g., the security parameter), the problem remains interesting. We consider the problem of comparing ANOBE with PKE under the same assumption. We call such schemes Anonymous Broadcast Encryption for Bounded Universe -- AnoBEB.
We first present an AnoBEB construction for up to $k$ users from the LWE assumption, where $k$ is bounded by the scheme's security parameter. The scheme does not grow with the parameter and beats the PKE method. In fact, our scheme is as efficient as the underlying LWE public-key encryption; namely, the rate is $1$ and thus optimal. The scheme is achieved easily by an observation about an earlier scheme with a different purpose.
More interestingly, we move on to employ the new AnoBEB in other multimedia broadcasting methods and, as a second contribution, we introduce a new approach to construct an efficient ``Trace and Revoke scheme'', which combines the functionalities of revocation and of tracing users (called traitors) who, in a broadcast scheme, share their keys with the adversary, which, in turn, generates a pirate receiver. Note that, as was put forth by Kiayias and Yung (EUROCRYPT '02), combinatorial traitor tracing schemes can be constructed by combining a system for a small universe, integrated via an outer traceability code (a collusion-secure code or an identifying parent property (IPP) code). There are many efficient traitor tracing schemes from traceability codes, but no known scheme supports revocation as well. Our new approach integrates our AnoBEB system with a robust IPP code, introduced by Barg and Kabatiansky (IEEE IT '13). This shows an interesting use for robust IPP codes in cryptography. Robust IPP codes were previously only shown to exist via an existence proof. In order to make our technique concrete, we propose two explicit instantiations of robust IPP codes. Our final construction gives the most efficient trace and revoke scheme in the bounded collusion model.
Jiayu Qiang, Yi Deng
ePrint Report
In most scenarios of Private Set Intersection (PSI) computed on a cloud server, the client has a smaller set and lower computational power than the cloud server, which is known as the unbalanced setting. We use Torus Fully Homomorphic Encryption (TFHE) for the first time, instead of leveled schemes, to construct a PSI protocol. More precisely, we mainly focus on an adaptive and dynamic setting, since the server may provide services to multiple clients at the same time and its data set is updated in real time.
We use TFHE to construct an adaptive PSI protocol for unbounded items with a communication complexity of $O(|Y|)$, lower than that of [19] (CCS 17), where $|Y|$ is the size of the client's set. TFHE supports homomorphic operations of arbitrary depth, which avoids the optimizations made in [19] to reduce the depth of the circuit, which incur additional computation and communication cost. We propose a basic protocol that can efficiently compute the intersection for small items, and then we apply a partition technique in our full protocol in order to support unbounded items. We also achieve a flexible dynamic protocol by adjusting our parameters to the adaptive setting, which further reduces the communication cost of our PSI protocol, especially in the cloud computing scenarios mentioned above.
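For intuition, the sketch below simulates, on plaintexts, the standard FHE-based unbalanced-PSI blueprint that protocols in this line follow: for each client item the server homomorphically evaluates the product of differences with its own items, which decrypts to zero exactly on intersection elements. This is a generic illustration under our own toy parameters, not the authors' TFHE protocol with its partitioning and adaptive optimizations.

```python
# Plaintext simulation of the FHE-PSI membership circuit: the server evaluates
# prod_{x in X} (y - x) on an encrypted y; a zero result marks an intersection
# element. The toy modulus and rerandomization are assumptions for illustration.
import random

P = 2**31 - 1                                   # toy prime plaintext modulus

def server_eval(y: int, server_set) -> int:
    acc = 1
    for x in server_set:                        # homomorphic multiplications in the real protocol
        acc = acc * ((y - x) % P) % P
    return acc * random.randrange(1, P) % P     # rerandomize non-zero results

X = {7, 42, 99}                                 # server's (larger) set
assert server_eval(42, X) == 0                  # member: decrypts to zero
assert server_eval(13, X) != 0                  # non-member: random non-zero
```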
Fynn Dallmeier, Jan P. Drees, Kai Gellert, Tobias Handirk, Tibor Jager, Jonas Klauke, Simon Nachtigall, Timo Renzelmann, Rudi Wolf
ePrint Report
Modern cryptographic protocols, such as TLS 1.3 and QUIC, can send cryptographically protected data in "zero round-trip times (0-RTT)", that is, without the need for a prior interactive handshake.
Such protocols meet the demand for communication with minimal latency, but those currently deployed in practice achieve only rather weak security properties, as they may not achieve forward security for the first transmitted payload message and require additional countermeasures against replay attacks.
Recently, 0-RTT protocols with full forward security and replay resilience have been proposed in the academic literature. These are based on puncturable encryption, which uses rather heavy building blocks, such as cryptographic pairings. Some constructions were claimed to have practical efficiency, but it is unclear how they compare concretely to protocols deployed in practice, and we currently do not have any benchmark results that new protocols can be compared with.
We provide the first concrete performance analysis of a modern 0-RTT protocol with full forward security, by integrating the Bloom Filter Encryption scheme of Derler et al. (EUROCRYPT 2018) in the Chromium QUIC implementation and comparing it to Google's original QUIC protocol.
We find that for reasonable deployment parameters, the server CPU load increases approximately by a factor of eight and the memory consumption on the server increases significantly, but stays below 400 MB even for medium-scale deployments that handle up to 50K connections per day. The difference in the size of handshake messages is small enough that transmission time on the network is identical, and therefore not significant.
We conclude that while current 0-RTT protocols with full forward security come with significant computational overhead, their use in practice is not infeasible, and may be used in applications where the increased CPU and memory load can be tolerated in exchange for full forward security and replay resilience on the cryptographic protocol level. Our results also serve as a first benchmark that can be used to assess the efficiency of 0-RTT protocols potentially developed in the future.
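For readers unfamiliar with the building block, the sketch below mirrors, in plain Python bookkeeping, how Bloom Filter Encryption provides forward security and replay resilience as we understand the primitive: each ciphertext is tied to a few Bloom-filter positions, and decryption punctures (deletes) the per-position key material so the same ciphertext can never be decrypted again. The filter parameters and data layout are our own toy choices; the actual scheme of Derler et al. uses pairing-based keys per position and is the one benchmarked above.

```python
# Toy bookkeeping of Bloom-Filter-Encryption style puncturing (illustrative
# assumptions only; not the benchmarked implementation).
import hashlib

M, K = 2**16, 3                                       # toy filter size, hash count

def positions(ciphertext_tag: bytes):
    return [int.from_bytes(hashlib.sha256(bytes([i]) + ciphertext_tag).digest(), "big") % M
            for i in range(K)]

secret_keys = {i: object() for i in range(M)}         # stand-ins for per-position keys

def decrypt_and_puncture(ciphertext_tag: bytes) -> bool:
    pos = positions(ciphertext_tag)
    if all(p not in secret_keys for p in pos):        # fully punctured: replay rejected
        return False
    for p in pos:                                     # forward security: destroy key material
        secret_keys.pop(p, None)
    return True

tag = b"0-RTT ciphertext #1"
assert decrypt_and_puncture(tag) is True
assert decrypt_and_puncture(tag) is False             # replaying the same ciphertext fails
```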
Jacqueline Brendel, Cas Cremers, Dennis Jackson, Mang Zhao
ePrint Report
A standard requirement for a signature scheme is that it is existentially unforgeable under chosen message attacks (EUF-CMA), alongside other properties of interest such as strong unforgeability (SUF-CMA), and resilience against key substitution attacks.
Remarkably, no detailed proofs have ever been given for these security properties for EdDSA, and in particular its Ed25519 instantiations. Ed25519 is one of the most efficient and widely used signature schemes, and different instantiations of Ed25519 are used in protocols such as TLS 1.3, SSH, Tor, ZCash, and WhatsApp/Signal. The differences between these instantiations are subtle, and only supported by informal arguments, with many works assuming results can be directly transferred from Schnorr signatures. Similarly, several proofs of protocol security simply assume that Ed25519 satisfies properties such as EUF-CMA or SUF-CMA.
In this work we provide the first detailed analysis and security proofs of Ed25519 signature schemes. While the design of the schemes follows the well-established Fiat-Shamir paradigm, which should guarantee existential unforgeability, there are many side cases and encoding details that complicate the proofs, and all other security properties needed to be proven independently.
Our work provides scientific rationale for choosing among several Ed25519 variants and understanding their properties, fills a much needed proof gap in modern protocol proofs that use these signatures, and supports further standardisation efforts.
Kwangsu Lee
ePrint Report
In multi-client functional encryption (MC-FE) for predicate queries, clients generate ciphertexts of attributes $x_1, \ldots, x_n$ bound to a time period $T$ and store them on a cloud server; the cloud server receives a token corresponding to a predicate $f$ from a trusted center and learns whether $f(x_1, \ldots, x_n) = 1$ or not by running the query algorithm on the multiple ciphertexts of the same time period. MC-FE for predicates can be used for network-event or medical-data monitoring systems based on time-series data gathered by multiple clients. In this paper, we propose efficient MC-FE schemes that support conjunctive equality or range queries on encrypted data in the multi-client setting. First, we propose an efficient multi-client hidden vector encryption (MC-HVE) scheme in bilinear groups and prove selective strong attribute-hiding security with static corruptions. Our MC-HVE scheme is very efficient since a token is composed of four group elements, a ciphertext consists of $O(n)$ group elements, and the query algorithm only requires four pairing operations. Second, we propose an efficient multi-client range query encryption (MC-RQE) scheme and prove weak attribute-hiding security with static corruptions. Our MC-RQE scheme uses a binary tree and is efficient: a ciphertext consists of $O(n \log D)$ group elements and a token consists of $O(n \log D)$ group elements, where $D$ is the maximum value of the range.
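To fix the semantics of the conjunctive equality queries mentioned above, the helper below evaluates the usual hidden-vector predicate in the clear: a token vector with optional wildcards matches an attribute vector iff every non-wildcard position agrees. The MC-HVE scheme evaluates this kind of predicate on encrypted attributes; the example attributes here are hypothetical.

```python
# Hidden-vector (conjunctive equality) predicate, evaluated on plaintexts only,
# to illustrate what a query algorithm of this kind decides on ciphertexts.
WILDCARD = None

def hve_match(token, attributes) -> bool:
    return all(t == a for t, a in zip(token, attributes) if t is not WILDCARD)

# e.g. "src_ip == 10.0.0.1 AND port == 443", with the protocol left unconstrained
assert hve_match(("10.0.0.1", 443, WILDCARD), ("10.0.0.1", 443, "tcp"))
assert not hve_match(("10.0.0.1", 443, WILDCARD), ("10.0.0.2", 443, "udp"))
```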
Michail Moraitis, Elena Dubrova
ePrint Report
Bitstream reverse engineering is traditionally associated with Intellectual Property (IP) theft. Another, less known, threat deriving from it is bitstream modification attacks. It has been shown that the secret key can be extracted from FPGA implementations of cryptographic algorithms by injecting faults directly into the bitstream. Such bitstream modification attacks rely on changing the content of Look-Up Tables (LUTs). Therefore, related countermeasures aim to make the task of identifying a LUT more difficult (e.g. by masking its content). However, recent advances in FPGA reverse engineering revealed information on how interconnects are encoded in the bitstream of Xilinx 7 series FPGAs. In this paper, we show that this knowledge can be used to break or weaken existing countermeasures, as well as improve existing attacks. Furthermore, a straightforward attack that re-routes the key to an output pin becomes possible. We demonstrate our claims on an FPGA implementation of the SNOW 3G stream cipher. The presented results show that there is an urgent need for stronger bitstream protection methods.
06 July 2020
Tim Beyne, Anne Canteaut, Gregor Leander, María Naya-Plasencia, Léo Perrin, Friedrich Wiemer
ePrint Report
In this report, we cryptanalyse the Rescue hash function. In particular, we look at linear and differential cryptanalysis of Rescue, how multiplicative subgroups are propagated by the round function and at rebound attacks. Overall, we do not find any direct threat to the security of Rescue.
Willy Quach
ePrint Report
We build a two-round, UC-secure oblivious transfer protocol (OT) in the common reference string (CRS) model under the Learning with Errors assumption (LWE) with sub-exponential modulus-to-noise ratio. We do so by instantiating the dual-mode encryption framework of Peikert, Vaikuntanathan and Waters (CRYPTO'08). The resulting OT can be instantiated in either one of two modes: one providing statistical sender security, and the other statistical receiver security. Furthermore, our scheme allows the sender and the receiver to reuse the CRS across arbitrarily many executions of the protocol.
To the best of our knowledge, this gives the first construction of a UC-secure OT from LWE that achieves both statistical receiver security and unbounded reusability of the CRS. For comparison, there was, until recently, no such construction from LWE satisfying either one of these two properties.
In particular, the construction of UC-secure OT from LWE of Peikert, Vaikuntanathan and Waters only provides computational receiver security and bounded reusability of the CRS.
Our main technical contribution is a public-key encryption scheme from LWE where messy public keys (under which encryptions hide the underlying message statistically) can be recognized in time essentially independent of the LWE modulus $q$.
Christian Badertscher, Alexandru Cojocaru, Léo Colisson, Elham Kashefi, Dominik Leichtle, Atul Mantri, Petros Wallden
ePrint Report
Secure delegated quantum computing is a two-party cryptographic primitive, where a computationally weak client wishes to delegate an arbitrary quantum computation to an untrusted quantum server in a privacy-preserving manner. Communication via quantum channels is typically assumed such that the client can establish the necessary correlations with the server to securely perform the given task. This has the downside that all these protocols cannot be put to work for the average user unless a reliable quantum network is deployed.
Therefore the question becomes relevant whether it is possible to rely solely on classical channels between client and server and yet benefit from its quantum capabilities while retaining privacy. Classical-client remote state preparation ($\sf{RSP}_{CC}$) is one of the promising candidates to achieve this because it enables a client, using only classical communication resources, to remotely prepare a quantum state. However, the privacy loss incurred by employing $\sf{RSP}_{CC}$ as sub-module to avoid quantum channels is unclear.
In this work, we investigate this question using the Constructive Cryptography framework by Maurer and Renner (ICS'11). We first identify the goal of $\sf{RSP}_{CC}$ as the construction of ideal $\sf{RSP}$ resources from classical channels and then reveal the security limitations of using $\sf{RSP}_{CC}$ in general and in specific contexts:
1. We uncover a fundamental relationship between constructing ideal $\sf{RSP}$ resources (from classical channels) and the task of cloning quantum states with auxiliary information. Any classically constructed ideal $\sf{RSP}$ resource must leak to the server the full classical description (possibly in an encoded form) of the generated quantum state, even if we target computational security only. As a consequence, we find that the realization of common $\sf{RSP}$ resources, without weakening their guarantees drastically, is impossible due to the no-cloning theorem.
2. The above result does not rule out that a specific $\sf{RSP}_{CC}$ protocol can replace the quantum channel at least in some contexts, such as the Universal Blind Quantum Computing ($\sf{UBQC}$) protocol of Broadbent et al. (FOCS 09). However, we show that the resulting $\sf{UBQC}$ protocol cannot maintain its proven composable security as soon as $\sf{RSP}_{CC}$ is used as a subroutine.
3. We show that replacing the quantum channel of the above $\sf{UBQC}$ protocol by the $\sf{RSP}_{CC}$ protocol QFactory of Cojocaru et al. (Asiacrypt 19), preserves the weaker, game-based, security of $\sf{UBQC}$.
Osman Biçer, Alptekin Küpçü
ePrint Report
E-cash and cryptocurrency schemes have been a focus of applied cryptography for a long time. However, we acknowledge the continuing need for a cryptographic protocol that provides global scale, decentralized, secure, and fair delivery of donations. Such a protocol would replace central trusted entities (e.g., charity organizations) and guarantee the privacy of the involved parties (i.e., donors and recipients of the donations). In this work, we target this online donation problem and propose a practical solution for it. First, we propose a novel decentralized e-donation framework, along with its operational components and security definitions. Our framework relies on a public ledger that can be realized via a distributed blockchain. Second, we instantiate our e-donation framework with a practical scheme employing privacy-preserving cryptocurrencies and attribute-based signatures. Third, we provide implementation results showing that our operations have feasible computation and communication costs. Finally, we prove the security of our e-donation scheme via formal reductions to the security of the underlying primitives.
Luka Music, Céline Chevalier, Elham Kashefi
ePrint Report
It is a folkloric belief that the security of classical cryptographic protocols is automatically broken if the Adversary is allowed to perform superposition queries and the honest players are forced to perform actions coherently on quantum states. Another widely held intuition is that enforcing measurements on the exchanged messages is enough to protect protocols from these attacks.
However, the reality is much more complex. Security models dealing with superposition attacks only consider unconditional security. Conversely, security models considering computational security assume that all supposedly classical messages are measured, which forbids by construction the analysis of superposition attacks. Boneh and Zhandry have started to study the quantum computational security for classical primitives in their seminal work at Crypto'13, but only in the single-party setting. To the best of our knowledge, an equivalent model in the multiparty setting is still missing.
In this work, we propose the first computational security model considering superposition attacks for multiparty protocols. We show that our new security model is satisfiable by proving the security of the well-known One-Time-Pad protocol and give an attack on a variant of the equally reputable Yao Protocol for Secure Two-Party Computations. The post-mortem of this attack reveals the precise points of failure, yielding highly counter-intuitive results: Adding extra classical communication, which is harmless for classical security, can make the protocol become subject to superposition attacks. We use this newly imparted knowledge to construct the first concrete protocol for Secure Two-Party Computation that is resistant to superposition attacks. Our results show that there is no straightforward answer to provide for either the vulnerabilities of classical protocols to superposition attacks or the adapted countermeasures.
Marc Abboud, Thomas Prest
ePrint Report
In recent years, some security proofs in cryptography have seen significant improvements obtained by replacing the statistical distance with alternative divergences. We continue this line of research, both at a theoretical and a practical level. On the theory side, we propose a new cryptographic divergence with quirky properties. On the practical side, we propose new applications of alternative divergences: circuit-private FHE and prime number generators. More precisely, we provide the first formal security proof of the prime number generator PRIMEINC (Brandt and Damgård, CRYPTO 1992), and improve by an order of magnitude the efficiency of a prime number generator by Fouque and Tibouchi (ICALP 2014) and of the washing machine technique by Ducas and Stehlé (EUROCRYPT 2016) for circuit-private FHE.
Tal Moran, Daniel Wichs
ePrint Report
An incompressible encoding can probabilistically encode some data $m$ into a codeword $c$, which is not much larger. Anyone can decode the codeword $c$ to recover the original data $m$. However, the codeword $c$ cannot be efficiently compressed, even if the original data $m$ is given to the decompression procedure on the side. In other words, $c$ is an efficiently decodable representation of $m$, yet is computationally incompressible even given $m$. An incompressible encoding is composable if many encodings cannot be simultaneously compressed.
The recent work of Damg\aa{}rd, Ganesh and Orlandi (CRYPTO '19) defined a variant of incompressible encodings as a building block for ``proofs of replicated storage''. They constructed incompressible encodings in an ideal permutation model, but it was left open if they can be constructed under standard assumptions, or even in the more basic random-oracle model. In this work, we undertake the comprehensive study of incompressible encodings as a primitive of independent interest and give new constructions, negative results and applications:
* We construct incompressible encodings in the common random string (CRS) model under either Decisional Composite Residuosity (DCR) or Learning with Errors (LWE). However, the construction has several drawbacks: (1) it is not composable, (2) it only achieves selective security, and (3) the CRS is as long as the data $m$.
* We leverage the above construction to also get a scheme in the random-oracle model, under the same assumptions, that avoids all of the above drawbacks. Furthermore, it is significantly more efficient than the prior ideal-model construction.
* We give black-box separations, showing that incompressible encodings in the plain model cannot be proven secure under any standard hardness assumption, and incompressible encodings in the CRS model must inherently suffer from all of the drawbacks above.
* We give a new application to ``big-key cryptography in the bounded-retrieval model'', where secret keys are made intentionally huge to make them hard to exfiltrate. Using incompressible encodings, we can get all the security benefits of a big key without wasting storage space, by having the key encode useful data.