IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
26 February 2019
Jung Hee Cheon, Wonhee Cho, Minki Hhan, Minsik Kang, Jiseung Kim, Changmin Lee
The approximate greatest common divisor problem (ACD) and its variants have been used to construct many cryptographic primitives. In particular, variants of the ACD problem based on the Chinese remainder theorem (CRT) are exploited in constructions of batch fully homomorphic encryption that encrypt multiple messages in one ciphertext. Despite the utility of the CRT-variant scheme, algorithms for attacking its security foundation have not been studied as well as those for the original ACD-based scheme. In this paper, we propose two algorithms for solving the CCK-ACD problem, which is used to construct a batch fully homomorphic encryption over the integers. To achieve this, we revisit the orthogonal lattice attack and the simultaneous Diophantine approximation algorithm. Both algorithms take the same time complexity $2^{\tilde{O}(\frac{\gamma}{(\eta-\rho)^2})}$ up to a polynomial factor to solve the CCK-ACD problem for bit sizes $\gamma$ of the samples, $\eta$ of the secret primes, and $\rho$ of the error bound. Compared to Chen and Nguyen's algorithm from Eurocrypt '12, which takes $\tilde{O}(2^{\rho/2})$ complexity, our algorithms give the first parameter condition relating the sizes of $\eta$ and $\gamma$. We also report experimental results for our attack on several parameter sets. The results show that our algorithms perform well both in theory and in practice.
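For readers unfamiliar with the underlying problem, here is a minimal sketch of how plain ACD samples are generated, using the parameters $\gamma$, $\eta$, $\rho$ named in the abstract (the CRT batching of CCK-ACD is omitted, and the "prime" is just a random odd number for brevity):

```python
import secrets

def acd_samples(gamma, eta, rho, n):
    """Generate n approximate-GCD samples x_i = p*q_i + r_i with
    an eta-bit secret p, (gamma - eta)-bit multipliers q_i, and
    errors |r_i| < 2**rho. (p should be prime; the test is omitted.)"""
    p = secrets.randbits(eta) | (1 << (eta - 1)) | 1  # odd, exactly eta bits
    samples = []
    for _ in range(n):
        q = secrets.randbits(gamma - eta)
        r = secrets.randbits(rho) - (1 << (rho - 1))  # roughly centered error
        samples.append(p * q + r)
    return p, samples

p, xs = acd_samples(gamma=64, eta=24, rho=8, n=5)  # toy parameters only
```

An attacker is given the samples `xs` (but not `p`) and must recover the common near-divisor; the paper's two algorithms do this for the CRT variant via lattice techniques.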
Thomas Vidick, Tina Zhang
We show that every language in BQP admits a classical-verifier, quantum-prover zero-knowledge argument system which is sound against quantum polynomial-time provers and zero-knowledge for classical (and quantum) polynomial-time verifiers. The protocol builds upon two recent results: a computational zero-knowledge proof system for languages in QMA, with a quantum verifier, introduced by Broadbent et al. (FOCS 2016), and an argument system for languages in BQP, with a classical verifier, introduced by Mahadev (FOCS 2018).
25 February 2019
Chun Guo, Olivier Pereira, Thomas Peters, François-Xavier Standaert
Authenticated Encryption (AE) has become the de facto standard for encryption in modern protocols, and the ubiquitous deployment of small connected devices naturally calls for the availability of lightweight AE schemes, as reflected by a new standardisation process initiated by NIST. Small devices not only have limited computational power: they are also targets of choice for side-channel attacks, and the ease of protecting against these attacks is therefore an important aspect of the selection criteria in this standardisation process.
In this paper, we address this challenge by presenting a new AE mode, TETSponge, which carefully combines a tweakable block cipher that has strong protections against side-channel attacks and needs to be called only twice per encryption and decryption, with a sponge-style permutation that needs only weak side-channel protections and is used to frugally process the message and associated data.
TETSponge offers many desirable features: (i) it supports single-pass encryption and decryption, and is compatible with limited memory requirements, (ii) it offers black-box security as an AE mode, with good bounds, including in the multi-user setting, (iii) it offers strong resistance against side-channel attacks during both encryption and decryption, and guarantees nonce misuse-resilience. As such, we conclude that TETSponge offers an appealing option for the implementation of lightweight AE in settings where side-channel attacks are an actual concern.
Along the way, we propose the first rigorous methodology that can be used to analyze the leakage-resilience of sponge-based constructions.
Francisco Corella, Karen Lewison
A cryptographic checksum is a small data item that is computed from a data structure and can be used to prevent undetected intentional modification by making it difficult for an adversary to construct a different data structure that has the same checksum. We broaden the concept of a checksum to include the concept of a data item intended to prevent certain modifications while tolerating others. An omission-tolerant checksum is computed from a data structure that represents a set and does not change when elements are omitted from the set, while making it difficult for an adversary to modify the set in any other way without invalidating the checksum.
We use the root label of a typed hash tree to implement an omission-tolerant checksum. A typed hash tree is a variation on a Merkle tree, where each node has a type and a label. The root label of a typed hash tree does not change when a subtree is pruned from the tree. The same is true for a Merkle tree, but in a Merkle tree this is a ``bug'' to be mitigated, while in a typed hash tree it is a ``feature'' that makes it possible to use the label of the root node as an omission-tolerant checksum for a set of key-value pairs. To do so, we encode each key-value pair as the type and label of a leaf node of an incomplete typed hash tree without internal-node labels, then serialize the tree to obtain a bit-string encoding of the set. To compute the checksum on the encoding we deserialize the bit-string, compute the missing internal nodes, and output the label of the root node as the checksum.
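To make the pruning invariance concrete, here is a minimal sketch (a simplified Merkle-style labeling, not the authors' encoding): an internal node's label depends only on its children's (type, label) pairs, so replacing a subtree by a leaf that carries the subtree's own type and label leaves every ancestor's label, and hence the root label, unchanged.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(len(part).to_bytes(4, "big") + part)  # length-prefixed, unambiguous
    return h.digest()

class Node:
    def __init__(self, ntype: bytes, label: bytes = None, children=()):
        self.ntype, self._label, self.children = ntype, label, list(children)

    def label(self) -> bytes:
        if self._label is not None:           # leaf, or a pruned subtree
            return self._label
        # internal node: label depends only on children's (type, label) pairs
        return H(self.ntype, *[c.ntype + c.label() for c in self.children])

def prune(node: Node) -> Node:
    """Replace a subtree by a leaf carrying the same type and label."""
    return Node(node.ntype, label=node.label())

pair = lambda k, v: Node(b"pair", label=H(k, v))      # leaf encoding a key-value pair
left = Node(b"set", children=[pair(b"a", b"1"), pair(b"b", b"2")])
right = Node(b"set", children=[pair(b"c", b"3")])
root = Node(b"set", children=[left, right])
pruned = Node(b"set", children=[prune(left), right])  # omit the elements under `left`
assert root.label() == pruned.label()                 # checksum unchanged by omission
```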
We use Boneh and Shoup's system parameterization and attack game methodology to prove that, given a set of key-value pairs, an efficient adversary has a negligible probability of producing a set of key-value pairs, other than a subset of the given one, that has the same checksum.
Benedikt Bünz, Shashank Agrawal, Mahdi Zamani, Dan Boneh
Blockchain-based smart contract platforms like Ethereum have become quite popular as a way to remove trust and add transparency to distributed applications. While different types of important applications can be easily built on such platforms, there does not seem to be an easy way to add a meaningful level of privacy to them.
In this paper, we propose Zether, a fully-decentralized, confidential payment mechanism that is compatible with Ethereum and other smart contract platforms. We take an account-based approach similar to Ethereum for efficiency and usability. We design a new smart contract that keeps the account balances encrypted and exposes methods to deposit, transfer and withdraw funds to/from accounts through cryptographic proofs.
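As a toy illustration of encrypted balances (not Zether's actual construction, which works over an elliptic-curve group and attaches zero-knowledge proofs to every update), additively homomorphic "exponent ElGamal" lets a contract adjust a balance ciphertext without ever decrypting it:

```python
import secrets

# Exponent ElGamal over the order-q subgroup of Z_p^*, with p = 2q + 1.
p, q, g = 2039, 1019, 4  # tiny toy parameters, insecure by design

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)                     # (secret key, public key y = g^x)

def enc(y, m):
    """Encrypt m 'in the exponent': (g^r, g^m * y^r)."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), pow(g, m, p) * pow(y, r, p) % p

def add(c1, c2):
    """Homomorphic addition: componentwise product adds the exponents."""
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

def dec(x, c, max_m=2000):
    gm = c[1] * pow(c[0], q - x, p) % p        # g^m = c2 / c1^x
    for m in range(max_m + 1):                 # brute-force the small exponent
        if pow(g, m, p) == gm:
            return m
    raise ValueError("plaintext out of range")

x, y = keygen()
bal = enc(y, 100)                  # encrypted account balance of 100
bal = add(bal, enc(y, (-25) % q))  # transfer 25 out: add an encryption of -25
assert dec(x, bal) == 75
```

In Zether itself, zero-knowledge proofs (the $\Sigma$-Bullets mentioned below) are what ensure such updates are well-formed, e.g., that a sender cannot transfer more than their encrypted balance.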
We describe techniques to protect Zether against replay attacks and front-running situations. We also develop a mechanism to enable interoperability with arbitrary smart contracts. This helps to make several popular applications like auctions, payment channels, voting, etc. confidential.
As a part of our protocol, we propose $\Sigma$-Bullets, an improvement of the existing zero-knowledge proof system, Bulletproofs. $\Sigma$-Bullets make Bulletproofs more inter-operable with Sigma protocols, which is of general interest.
We implement Zether as an Ethereum smart contract and show the practicality of our design by measuring the amount of gas used by the Zether contract. A Zether confidential transaction costs about 0.014 ETH, or approximately $1.51 (as of early February 2019). We discuss how small changes to Ethereum, which are already being discussed independently of Zether, would drastically reduce this cost.
Jelle Don, Serge Fehr, Christian Majenz, Christian Schaffner
The famous Fiat-Shamir transformation turns any public-coin three-round interactive proof, i.e., any so-called sigma-protocol, into a non-interactive proof in the random-oracle model.
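As a reminder of the transformation in its simplest instantiation, here is a minimal sketch using Schnorr's sigma-protocol over a toy group (the hash stands in for the random oracle): the verifier's random challenge is replaced by a hash of the statement and the prover's first message.

```python
import hashlib, secrets

# Schnorr's sigma-protocol made non-interactive via Fiat-Shamir.
p, q, g = 2039, 1019, 4          # toy parameters: order-q subgroup of Z_p^*

def fs_challenge(*ints):
    data = b"".join(i.to_bytes(8, "big") for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prove knowledge of x with y = g^x, without interaction."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    a = pow(g, r, p)              # first message (commitment)
    e = fs_challenge(y, a)        # challenge = hash(statement, commitment)
    z = (r + e * x) % q           # response
    return y, (a, e, z)

def verify(y, proof):
    a, e, z = proof
    return e == fs_challenge(y, a) and pow(g, z, p) == a * pow(y, e, p) % p

x = secrets.randbelow(q)
y, proof = prove(x)
assert verify(y, proof)
```

The question studied here is what happens to the soundness of this pattern when the adversary can evaluate the hash, i.e., the random oracle, in quantum superposition.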
We study this transformation in the setting of a quantum adversary that in particular may query the random oracle in quantum superposition.
Our main result is a generic reduction that transforms any quantum dishonest prover attacking the Fiat-Shamir transformation in the quantum random-oracle model into a similarly successful quantum dishonest prover attacking the underlying sigma-protocol (in the standard model). Applied to the standard soundness and proof-of-knowledge definitions, our reduction implies that both these security properties, in both the computational and the statistical variant, are preserved under the Fiat-Shamir transformation even when allowing quantum attacks. Our result improves and completes the partial results that have been known so far, but it also proves wrong certain claims made in the literature.
In the context of post-quantum secure signature schemes, our results imply that for any sigma-protocol that is a proof-of-knowledge against quantum dishonest provers (and that satisfies some additional natural properties), the corresponding Fiat-Shamir signature scheme is secure in the quantum random-oracle model. For example, we can conclude that the non-optimized version of Fish, which is the bare Fiat-Shamir variant of the NIST candidate Picnic, is secure in the quantum random-oracle model.
Yaoling Ding, An Wang, Siu Ming YIU
Correlation power analysis (CPA) is widely used in side-channel attacks on cryptographic devices. Its efficiency mostly depends on the noise produced by the devices. For parallel implementations, the power consumption during the S-box operation contains information about the whole intermediate state. When one S-box is analyzed by CPA, the others are regarded as noise. Clearly, the information from the remaining S-boxes is not only wasted, but also increases the complexity of the analysis. If two or more S-boxes are considered simultaneously, the complexity of exhaustive search over the corresponding key words grows exponentially. An optimal solution is to process all the S-boxes simultaneously while avoiding a traversal of the whole candidate key space. A simple genetic algorithm was used by Zhang et al. to achieve this purpose. However, premature convergence causes failure in recovering the whole key, especially when the target primitive employs many large S-boxes, as AES does.
In this paper, we study the cause of premature convergence and propose a multiple-sieve method that overcomes it and reduces the number of traces required in correlation power attacks. Operators and the corresponding parameters are chosen experimentally with respect to a parallel implementation of AES-128. Simulation results show that our method reduces the number of traces by $63.7\%$ and $30.77\%$ compared to classic CPA and simple-genetic-algorithm-based CPA (SGA-CPA), respectively, when the success rate is fixed at $90\%$. Real experiments performed on SAKURA-G confirm that the number of traces required to recover the correct key with our method is almost equal to the minimum that makes the correlation coefficients of the correct key stand out from the wrong ones, and is much smaller than for classic CPA and SGA-CPA.
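For context, here is a minimal sketch of classic single-S-box CPA on simulated traces (Hamming-weight leakage plus Gaussian noise; a random permutation stands in for the AES S-box):

```python
import numpy as np

rng = np.random.default_rng(1)
SBOX = rng.permutation(256)          # placeholder S-box, stand-in for AES's

def hw(v):
    """Hamming weight of each byte in v."""
    return np.unpackbits(np.asarray(v, dtype=np.uint8)[..., None], axis=-1).sum(-1)

# Simulate traces: leakage = HW(Sbox(pt ^ k)) + Gaussian noise.
true_key, n_traces = 0x3C, 2000
pts = rng.integers(0, 256, n_traces)
traces = hw(SBOX[pts ^ true_key]) + rng.normal(0, 2.0, n_traces)

# CPA: correlate measured traces against predictions for every key guess.
corr = np.empty(256)
for k in range(256):
    corr[k] = np.corrcoef(hw(SBOX[pts ^ k]), traces)[0, 1]
print(hex(int(corr.argmax())))       # recovers 0x3c with high probability
```

In a parallel implementation, the other S-boxes' contributions land in the noise term above, which is exactly the inefficiency the paper's multiple-sieve method targets.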
Dan Boneh, Elette Boyle, Henry Corrigan-Gibbs, Niv Gilboa, Yuval Ishai
We introduce and study the notion of fully linear probabilistically checkable proof systems. In such a proof system, the verifier can make a small number of linear queries that apply jointly to the input and a proof vector.
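In other words, the verifier never sees the input $x$ or the proof $\pi$ in the clear; it only learns inner products $\langle q, (x \| \pi) \rangle$ for query vectors $q$ of its choice. A minimal sketch of this access structure (the interface only; none of the paper's constructions):

```python
# Fully linear access: the verifier sees only <q, (x || pi)> over F_P.
P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def linear_oracle(x, pi):
    w = x + pi                                  # concatenated input and proof
    def query(q):
        assert len(q) == len(w)
        return sum(a * b for a, b in zip(q, w)) % P
    return query

# Toy language {x : sum(x) = 0 mod P}: one all-ones query, empty proof.
x = [5, 7, P - 12]
ask = linear_oracle(x, pi=[])
assert ask([1] * len(x)) == 0                   # verifier accepts
```

Because inner products commute with additive secret sharing and with additively homomorphic encryption, each such query can be answered on distributed or encrypted data, which is what the applications below exploit.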
Our new type of proof system is motivated by applications in which the input statement is not fully available to any single verifier, but can still be efficiently accessed via linear queries. This situation arises in scenarios where the input is partitioned or secret-shared between two or more parties, or alternatively is encoded using an additively homomorphic encryption or commitment scheme. This setting appears in the context of secure messaging platforms, verifiable outsourced computation, PIR writing, private computation of aggregate statistics, and secure multiparty computation (MPC). In all these applications, there is a need for fully linear proof systems with short proofs.
While several efficient constructions of fully linear proof systems are implicit in the interactive proofs literature, many questions about their complexity are open. We present several new constructions of fully linear zero-knowledge proof systems with sublinear proof size for "simple" or "structured" languages. For example, in the non-interactive setting of fully linear PCPs, we show how to prove that an input vector $x\in\mathbb{F}^n$ satisfies a single degree-2 equation with a proof of size $O(\sqrt n)$ and $O(\sqrt n)$ linear queries, which we show to be optimal. More generally, for languages that can be recognized by systems of constant-degree equations, we can reduce the proof size to $O(\log n)$ at the cost of $O(\log n)$ rounds of interaction.
We use our new proof systems to construct new short zero-knowledge proofs on distributed and secret-shared data. These proofs can be used to improve the performance of many of the example systems mentioned above.
Finally, we observe that zero-knowledge proofs on distributed data provide a general-purpose tool for protecting protocols for secure multiparty computation (MPC) against malicious parties. Applying our short fully linear PCPs to "natural" MPC protocols in the honest-majority setting, we can achieve unconditional protection against malicious parties with sublinear additive communication cost. We use this to improve the communication complexity of recent honest-majority MPC protocols. For instance, using any pseudorandom generator, we obtain a 3-party protocol for Boolean circuits in which the amortized communication cost is only one bit per AND gate per party (compared to 10 bits in the best previous protocol), matching the best known protocols for semi-honest adversaries.
Antoine Joux
In this paper, we recast state-of-the-art constructions for fully homomorphic encryption in the simple language of arithmetic modulo large Fermat numbers. The techniques used to construct our scheme are quite standard in the realm of (R)LWE-based cryptosystems. However, the use of arithmetic in such a simple ring greatly simplifies the exposition of the scheme and makes its implementation much easier.
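To see why this ring is so convenient: modulo a Fermat number $F = 2^n + 1$ we have $2^n \equiv -1$, so modular reduction is just a fold of the high bits into the low bits. A minimal sketch of that reduction (only this folding identity is taken from the paper's setting):

```python
def fermat_reduce(x, n):
    """Reduce x modulo F = 2**n + 1 using 2**n ≡ -1 (mod F)."""
    F = (1 << n) + 1
    while x >= F or x < 0:
        lo, hi = x & ((1 << n) - 1), x >> n
        x = lo - hi                    # fold: hi * 2**n ≡ -hi (mod F)
    return x % F

n = 16                                 # F = 65537, the Fermat prime F_4
F = (1 << n) + 1
a, b = 123456, 654321
assert fermat_reduce(a * b, n) == (a * b) % F
```

No division is ever needed: a multiplication followed by a couple of shift-and-subtract folds is a complete modular multiplication.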
In terms of performance, our test implementation of the proposed scheme is slower than the current speed records but remains within a comparable range. We hope that the detailed study of our simplified scheme by the community can make it competitive and provide new insights into FHE constructions at large.
Jiangshan Yu, Man Ho Allen Au, Paulo Esteves-Verissimo
We develop new foundations on transaction untraceability for CryptoNote-style blockchain systems. In particular, we observe new attacks; develop theoretical foundations to model transaction untraceability; provide the least upper bound of transaction untraceability guarantee; provide ways to efficiently and automatically verify whether a given ledger achieves optimal transaction untraceability; and provide a general solution that achieves provably optimal transaction untraceability.
Unlike previous cascade effect attacks (ESORICS 17 and PETS 18) on CryptoNote-style transaction untraceability, we consider not only a passive attacker but also an active adaptive attacker. Our observed attacks allow both types of attacker to trace blockchain transactions that cannot be traced by using the existing attacks. We develop a series of new games, which we call The Sun-Tzu Survival Problem, to model CryptoNote-style blockchain transaction untraceability and our identified attacks. In addition, we obtain seven novel results, three of which are negative and the rest positive. In particular, thanks to our abstract game, we are able to build bipartite graphs to model transaction untraceability, and provide reductions to formally relate the hardness of calculating untraceability to the hardness of counting the number of perfect matchings in all possible bipartite graphs. We prove that calculating transaction untraceability is a #P-complete problem, which is believed to be even more difficult to solve than NP problems. In addition, we provide the first result on the least upper bound of transaction untraceability. Moreover, through our theoretical results, we are able to provide ways to efficiently and automatically verify whether a given ledger achieves optimal transaction untraceability. Furthermore, we propose a simple strategy for CryptoNote-style blockchain systems to achieve optimal untraceability. We take Monero as a concrete example to demonstrate how to apply this strategy to optimise the untraceability guarantee provided by Monero.
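To unpack the #P-completeness claim: counting the perfect matchings of a bipartite graph is exactly computing the permanent of its biadjacency matrix, for which even the best known exact algorithms take exponential time. A minimal sketch of Ryser's formula, the classic such algorithm:

```python
from itertools import combinations

def count_perfect_matchings(A):
    """Permanent of a 0/1 biadjacency matrix A via Ryser's formula,
    in O(2^n * n^2) time: sum over column subsets with inclusion-exclusion."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

# Complete bipartite graph K_{3,3}: 3! = 6 perfect matchings.
assert count_perfect_matchings([[1] * 3 for _ in range(3)]) == 6
```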
Ralph Ankele, Christoph Dobraunig, Jian Guo, Eran Lambooij, Gregor Leander, Yosuke Todo
The design and analysis of dedicated tweakable block ciphers is a quite recent and very active research field that provides an ongoing stream of new insights. For instance, results of Kranz, Leander, and Wiemer from FSE 2017 show that the addition of a tweak using a linear tweak schedule does not introduce new linear characteristics. In this paper, we consider --- to the best of our knowledge --- for the first time the effect of the tweak on zero-correlation linear cryptanalysis for ciphers that have a linear tweak schedule. It turns out that the tweak can often be used to get zero-correlation linear hulls covering more rounds compared to just searching zero-correlation linear hulls on the data-path of a cipher. Moreover, this also implies the existence of integral distinguishers on the same number of rounds. We have applied our technique on round reduced versions of QARMA, MANTIS, and Skinny. As a result, we can present --- to the best of our knowledge --- the best attack (with respect to number of rounds) on a round-reduced variant of QARMA.
William Diehl, Farnoud Farahmand, Abubakr Abdulgadir, Jens-Peter Kaps, Kris Gaj
Authenticated ciphers potentially provide resource savings and security improvements over the joint use of secret-key ciphers and message authentication codes. The CAESAR competition has aimed to choose the most suitable authenticated ciphers for several categories of applications, including a lightweight use case, for which the primary criteria are performance in resource-constrained devices and ease of protection against side-channel attacks (SCA). In March 2018, two of the candidates from this category, ACORN and Ascon, were selected as CAESAR contest finalists. In this research, we compare two SCA-resistant FPGA implementations of ACORN and Ascon, where one set of implementations has area consumption nearly equivalent to the de facto standard AES-GCM, and the other set has throughput (TP) close to that of AES-GCM. The results show that protected implementations of ACORN and Ascon, with area consumption less than but close to AES-GCM, have 23.3 and 2.5 times the TP of AES-GCM, respectively. Likewise, implementations of ACORN and Ascon with TP greater than but close to AES-GCM consume 18 percent and 74 percent of the area of AES-GCM, respectively.
Katherine E. Stange
We provide several reductions of Ring-LWE problems to smaller Ring-LWE problems in the presence of samples of a restricted form (i.e., samples $(a,b)$ such that $a$ is restricted to a subring, or to a multiplicative coset of a subfield, of one CRT factor). To create and exploit such restricted samples, we propose Ring-BKW, a version of the Blum-Kalai-Wasserman algorithm which respects the ring structure. It has several key advantages based on the ring structure, including smaller tables, reduced or eliminated back-substitution, and a new opportunity for parallelization. We focus on two-power cyclotomic Ring-LWE with parameters proposed for practical use, with the exception that many splitting types are considered. The orthogonality of the lattice for two-power cyclotomics is exploited. In general, higher residue degree is an advantage to attacks.
Prasanna Ravi, Sourav Sen Gupta, Anupam Chattopadhyay, Mustafa Khairallah, Zakaria Najm, Shivam Bhasin
In the state-of-the-art design paradigm, time, space and power efficiency are considered the primary design constraints. Quite often, this approach adversely impacts the security of the overall system, especially when security is adopted as a countermeasure after some vulnerability is identified. In this position paper, we motivate the idea that security should also be considered as an architectural design constraint, in addition to time, space and power. We show that security and efficiency objectives along the three design axes of time, space and power are in fact tightly coupled, while identifying that security stands in direct contrast with them across all layers of architectural design. We attempt to prove our case using a proof-by-evidence approach, referring to various works across the literature that explicitly exhibit the eternal conflict between security and efficiency. Thus, security has to be treated as a design constraint from the very beginning. Additionally, we advocate a security-aware design flow starting from the choice of cryptographic primitives, protocols and system design.
Jesper Buus Nielsen, Mark Simkin
Threshold secret sharing allows a dealer to split a secret into $n$ shares such that any authorized subset of cardinality at least $t$ of those shares efficiently reveals the secret, while at the same time any unauthorized subset of cardinality less than $t$ contains no information about the secret.
Leakage-resilience requires that the secret remains hidden even if the unauthorized subset is given a bounded amount of additional leakage from every share.
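For reference, a minimal sketch of the canonical threshold scheme discussed below, Shamir's secret sharing over a prime field (leakage is not modeled here): the secret is the constant term of a random degree-$(t-1)$ polynomial, and any $t$ shares recover it by Lagrange interpolation at zero.

```python
import secrets

P = 2**61 - 1  # prime field modulus

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from t shares (xi, yi)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```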
In this work, we study leakage-resilient secret sharing schemes with information-theoretic security and prove a lower bound on the share size and the required amount of randomness of any such scheme. We prove that for any leakage-resilient secret sharing scheme either the amount of randomness across all shares or the share size has to be linear in $n$. More concretely, for a secret sharing scheme with $p$-bit shares and $\ell$-bit leakage per share, where $\widehat{t}$ shares uniquely define the remaining $n - \widehat{t}$ shares, it has to hold that $$ p \ge \frac{\ell (n - t)}{\widehat{t}}\ . $$
We use our lower bound to gain further insights into a question that was recently posed by Benhamouda et al. (CRYPTO'18), who ask to what extent existing regular secret sharing schemes already provide protection against leakage. The authors proved that Shamir's secret sharing is leakage-resilient for very large thresholds $t = n - \mathcal{O}(\log n)$ and conjectured that it is also $1$-bit leakage-resilient for any threshold that is a constant fraction of the total number of shares. We do not disprove their conjecture, but show that two mild and natural strengthenings thereof are false, thus concluding that their conjecture is essentially the best one could hope for.
The results described above only apply to secret sharing schemes with information-theoretic security, since the lower bound proof relies on an adversary that can enumerate all possible secret sharings and thus runs in time exponential in at least the share size $p$ and the full reconstruction threshold $\widehat{t}$. In addition, we present a conceptually simple and highly efficient attack for the specific case of $2$-out-of-$n$ Shamir secret sharing that only requires $1$ bit of leakage, has a running time of $\mathcal{O}(n)$ field operations, and could easily be run in practice by a computationally bounded adversary.
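To put the bound in perspective, instantiating it with $1$-bit leakage and a $2$-out-of-$n$ scheme (so $\ell = 1$ and $t = \widehat{t} = 2$, since two shares determine all of them) already forces $$ p \ge \frac{1 \cdot (n - 2)}{2}, $$ i.e., the share size must grow linearly with the number of parties.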
David Wong
At Real World Crypto 2017, Joan Daemen won the Levchin Prize and announced that he believed permutation-based crypto was the future of symmetric cryptography. At the same conference, Mike Hamburg introduced Strobe, a symmetric protocol framework capable of protecting sessions as well as building symmetric cryptographic primitives for the single cost of Joan Daemen's permutation Keccak. The next year, at Real World Crypto 2018, Trevor Perrin presented the Noise protocol framework, a modern TLS-like protocol with similar traits but with a focus on flexibility, offering many handshake patterns to choose from in order to authenticate the peers of a connection in different ways. Disco is the natural merge of the two projects, creating a new protocol based solely on two unique primitives: Curve25519 and the Keccak permutation (or, more correctly, its wrapper Strobe). Experimental results show that a library based on Disco can be implemented on top of these two cryptographic primitives with only a thousand lines of code, while offering both a flexible way to encrypt sessions and a complete cryptographic library for all of an application's needs.
Yue Guo, Rafael Pass, Elaine Shi
Murphy, Murky, Mopey, Moody, and Morose decide to write a paper together over the Internet and submit it to the prestigious CRYPTO'19 conference, which has the most amazing PC. They encounter a few problems. First, not everyone is online every day: some are lazy and go skiing on Mondays; others cannot use git correctly and are completely unaware that they are losing messages. Second, a small subset of the co-authors may be secretly plotting to disrupt the project (e.g., because they are writing a competing paper in stealth).
Suppose that each day sufficiently many honest co-authors are online (and use git correctly); moreover, suppose that messages checked into git on Monday can be correctly received by honest and online co-authors on Tuesday or any future day. Can the honest co-authors finish the paper in a small number of days, so that they make the CRYPTO deadline; and, perhaps more importantly, can all the honest co-authors, including even those who are lazy and those who sometimes use git incorrectly, agree on the final theorem?
Rohit Sinha, Sivanarayana Gaddam, Ranjit Kumaresan
In light of widespread misuse of personal data, we enable users to control the sharing and use of their data, even when offline, by binding that data to policies. A policy specifies the allowed function, conditions guarding the execution (based on the history of all prior computations on that data), and identities of the input providers and output recipients. For this level of control, we aim for a compute system that ensures policy compliance to the input providers, and fairness (i.e., either all or no party gets the output) to the output recipients, without requiring these parties to trust each other or the compute host.
Recently, trusted execution environments (TEEs), such as Intel SGX and Sanctum enclaves, are finding applications in outsourced computing on sensitive data. However, since TEEs are at the mercy of an untrusted host for storage and network communication, they are incapable of enforcing history-dependent policies or fairness. For instance, against a user's wish that only an aggregate function over her entire data is revealed, an adversarial host can repeatedly evaluate that aggregate function on different subsets of her dataset, and learn the individual records. The adversary may also collude and deliver the output only to a subset of the output recipients, thus violating fairness.
This paper presents LucidiTEE, the first system to enable multiple parties to jointly compute on large-scale private data, while guaranteeing that the aforementioned policies are enforced even when the input providers are offline, and guaranteeing fairness to all output recipients. To that end, LucidiTEE develops a set of novel protocols between a network of TEEs and a distributed, shared, append-only ledger. LucidiTEE uses the ledger only to enforce policies; it does not store inputs, outputs, or state on the ledger, nor does it duplicate execution amongst the participants, which allows it to scale to large data and large number of parties.
We demonstrate several policy-based applications including personal finance, federated machine learning, fair $n$-party information exchange, and private set intersection for medical records.
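As a toy illustration of a history-dependent policy (a hypothetical API, far simpler than LucidiTEE's actual protocols between TEEs and the ledger): the ledger records every prior computation, and the policy consults that history to allow an aggregate to be computed at most once per data owner, blocking the subset-replay attack described above.

```python
import hashlib

class Ledger:
    """Append-only log standing in for the shared ledger (toy, in-memory)."""
    def __init__(self):
        self._entries = []
    def append(self, entry: bytes):
        self._entries.append(entry)
    def contains(self, entry: bytes) -> bool:
        return entry in self._entries

def record_id(function_name: str, owner: str) -> bytes:
    return hashlib.sha256(f"{function_name}|{owner}".encode()).digest()

def compute_average(ledger: Ledger, owner: str, values):
    """History-dependent policy: one aggregate per data owner, ever.
    This stops a malicious host from re-running the aggregate on many
    subsets of the owner's records to learn individual entries."""
    rid = record_id("average", owner)
    if ledger.contains(rid):
        raise PermissionError("policy violation: aggregate already computed")
    ledger.append(rid)                       # commit to the run before output
    return sum(values) / len(values)

ledger = Ledger()
print(compute_average(ledger, "alice", [3, 5, 7]))   # 5.0
# A second run on any subset of alice's data now raises PermissionError.
```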
E.V. Flynn, Yan Bo Ti
We study $(\ell,\ell)$-isogeny graphs of principally polarised supersingular abelian surfaces (PPSSAS). The $(\ell,\ell)$-isogeny graph has cycles of small length that can be used to break the collision resistance assumption of the genus two isogeny hash function suggested by Takashima. Algorithms for computing $(2,2)$-isogenies on the level of Jacobians and $(3,3)$-isogenies on the level of Kummers are used to develop a genus two version of the supersingular isogeny Diffie--Hellman protocol of Jao and de Feo. The genus two isogeny Diffie--Hellman protocol achieves the same level of security as SIDH but uses a prime with a third of the bit length.
Nicholas Genise, Craig Gentry, Shai Halevi, Baiyu Li, Daniele Micciancio
We describe a somewhat homomorphic GSW-like encryption scheme, natively encrypting matrices rather than just single elements. This scheme offers much better performance than existing homomorphic encryption schemes for evaluating encrypted (nondeterministic) finite automata (NFAs). Differently from GSW, we do not know how to reduce the security of this scheme to LWE; instead, we reduce it to a stronger assumption that can be thought of as an inhomogeneous variant of the NTRU assumption. This assumption (which we term iNTRU) may be useful and interesting in its own right, and we examine a few of its properties.
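To see why a matrix-native scheme fits this application: running an NFA is just a chain of Boolean matrix-vector products, one transition matrix per input symbol, so it suffices to encrypt those matrices. A plaintext sketch of the evaluation (the encryption layer is omitted):

```python
import numpy as np

# Toy NFA over {a, b} accepting strings that contain the substring "ab".
# M[s][i][j] = 1 iff reading symbol s can move the NFA from state i to state j.
M = {
    "a": np.array([[1, 1, 0], [0, 0, 0], [0, 0, 1]]),
    "b": np.array([[1, 0, 0], [0, 0, 1], [0, 0, 1]]),
}
start = np.array([1, 0, 0])   # start in state 0
accept = np.array([0, 0, 1])  # state 2 accepts

def nfa_accepts(word: str) -> bool:
    state = start
    for sym in word:          # one matrix-vector product per input symbol
        state = (state @ M[sym] > 0).astype(int)
    return bool(state @ accept)

assert nfa_accepts("ab") and nfa_accepts("bab") and not nfa_accepts("ba")
```

In the encrypted setting, each transition matrix would be encrypted once under the matrix-GSW-like scheme, and evaluation multiplies the corresponding ciphertext matrices in input order.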
We also examine methods to encode regular expressions as NFAs, and in particular explore a new optimization problem, motivated by our application to encrypted NFA evaluation. In this problem, we seek to minimize the number of states in an NFA for a given expression, subject to a constraint on the ambiguity of the NFA.