International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 May 2018

Martin R. Albrecht, Christian Hanser, Andrea Hoeller, Thomas Pöppelmann, Fernando Virdia, Andreas Wallner
ePrint Report
We repurpose existing RSA/ECC co-processors for (ideal) lattice-based cryptography by exploiting the availability of fast long-integer multiplication. Such co-processors are deployed in smart cards in passports and identity cards, secured microcontrollers and hardware security modules (HSMs). In particular, we demonstrate an implementation of a variant of the Module-LWE-based Kyber Key Encapsulation Mechanism (KEM) that is tailored for optimal performance on a commercially available smart card chip (SLE 78). To benefit from the RSA/ECC co-processor we use Kronecker substitution in combination with schoolbook and Karatsuba polynomial multiplication. Moreover, we speed up symmetric operations in our Kyber variant using the AES co-processor to implement a PRNG and a SHA-256 co-processor to realise hash functions. This allows us to execute CCA-secure Kyber768 key generation in 79.6 ms, encapsulation in 102.4 ms and decapsulation in 132.7 ms.
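The Kronecker-substitution trick the authors exploit can be sketched in a few lines: pack the coefficients of each polynomial into one long integer, let the big-integer multiplier do the work, then unpack. The parameters below are purely illustrative (small non-negative coefficients, generous 32-bit slots), not those of the SLE 78 Kyber implementation, which works in $\mathbb{Z}_q[x]/(x^{256}+1)$:

```python
# Kronecker substitution: pack polynomial coefficients into one big
# integer, multiply with native long-integer arithmetic, then unpack.
# Works only for non-negative coefficients that never overflow a slot.

def poly_mul_kronecker(a, b, coeff_bits):
    # Evaluate both polynomials at x = 2^coeff_bits. coeff_bits must
    # exceed bits(max_coeff^2 * min(len(a), len(b))) to avoid carries.
    A = sum(c << (i * coeff_bits) for i, c in enumerate(a))
    B = sum(c << (i * coeff_bits) for i, c in enumerate(b))
    C = A * B  # one big-integer multiplication (the RSA co-processor's job)
    n = len(a) + len(b) - 1
    mask = (1 << coeff_bits) - 1
    return [(C >> (i * coeff_bits)) & mask for i in range(n)]

def poly_mul_schoolbook(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

a, b = [3, 1, 4, 1], [2, 7, 1]
assert poly_mul_kronecker(a, b, 32) == poly_mul_schoolbook(a, b)
```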
Lachlan J. Gunn, Ricardo Vieitez Parra, N. Asokan
ePrint Report
Deniable messaging protocols allow two parties to have `off-the-record' conversations without leaving any record that can convince external verifiers about what either of them said during the conversation. Recent events like WikiLeaks email dumps underscore the importance of deniable messaging to whistleblowers, politicians, dissidents and many others. Consequently, messaging protocols like Signal and OTR are expressly designed to provide deniability.

Many commodity devices today support hardware-assisted remote attestation which can be used to convince a remote verifier of some property locally observed on the device.

We show how an adversary can use remote attestation to undetectably break deniability in any deniable protocol (including messaging protocols) that provides an authenticated channel. We prove that our attack allows an adversary to convince skeptical verifiers, and we describe a concrete implementation of the attack against the Signal messaging protocol. We then show how attestation itself can be used to restore deniability by thwarting a realistic class of adversaries from mounting such attacks.

Hardware-based attestation changes the adversary model for deniable protocols, and its availability has now made it entirely practical for well-resourced attackers to break deniability, completely unbeknownst to the victim.
Kasper Green Larsen, Jesper Buus Nielsen
ePrint Report
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky [JACM'96], is a (possibly randomized) RAM for which the memory access pattern reveals no information about the operations performed. The main performance metric of an ORAM is the bandwidth overhead, i.e., the multiplicative factor of extra memory blocks that must be accessed to hide the operation sequence. In their seminal paper introducing the ORAM, Goldreich and Ostrovsky proved an amortized $\Omega(\lg n)$ bandwidth overhead lower bound for ORAMs with memory size $n$. Their lower bound is very strong in the sense that it applies to the ``offline'' setting in which the ORAM knows the entire sequence of operations ahead of time.

However, as pointed out by Boyle and Naor [ITCS'16] in the paper ``Is there an oblivious RAM lower bound?'', there are two caveats with the lower bound of Goldreich and Ostrovsky: (1) it only applies to ``balls in bins'' algorithms, i.e., algorithms where the ORAM may only shuffle blocks around and not apply any sophisticated encoding of the data, and (2) it only applies to statistically secure constructions. Boyle and Naor showed that removing the ``balls in bins'' assumption would result in super-linear lower bounds for sorting circuits, a long-standing open problem in circuit complexity. As a way of circumventing this barrier, they also proposed the notion of an ``online'' ORAM, which is an ORAM that remains secure even if the operations arrive in an online manner. They argued that most known ORAM constructions work in the online setting as well.

Our contribution is an $\Omega(\lg n)$ lower bound on the bandwidth overhead of any online ORAM, even if we require only computational security and allow arbitrary representations of data, thus greatly strengthening the lower bound of Goldreich and Ostrovsky in the online setting. Our lower bound applies to ORAMs with memory size $n$ and any word size $r \geq 1$. The bound therefore asymptotically matches the known upper bounds when $r = \Omega(\lg^2 n)$.
Suyash Kandele, Souradyuti Paul
ePrint Report
Message-locked encryption (MLE) (formalized by Bellare, Keelveedhi and Ristenpart, 2013) is an important cryptographic primitive that supports deduplication in the cloud. Updatable block-level message-locked encryption (UMLE) (formalized by Zhao and Chow, 2017) adds the update functionality to MLE. In this paper, we formalize and extensively study a new cryptographic primitive, file-updatable message-locked encryption (FMLE). FMLE can be viewed as a generalization of UMLE, in the sense that unlike the latter, the former does not require the existence of BL-MLE (block-level message-locked encryption). FMLE allows more flexible and efficient methods for updating the ciphertext and tag. Our second contribution is the design of two efficient FMLE constructions, namely RevD-1 and RevD-2, whose design principles are inspired by the unique reverse-decryption functionality of the FP hash function (designed by Paul, Homsirikamol and Gaj, 2012) and the APE authenticated encryption (designed by Andreeva et al., 2014). Compared to UMLE, which so far provides the most efficient update function, RevD-1 and RevD-2 reduce the total update time by at least 50%, on average. Additionally, our constructions are storage efficient. We also give an extensive comparison between our constructions and the existing ones.
Ilaria Chillotti, Nicolas Gama, Mariya Georgieva, Malika Izabachène
ePrint Report
This work describes a fast fully homomorphic encryption scheme over the torus (TFHE) that revisits, generalizes and improves the fully homomorphic encryption (FHE) based on GSW and its ring variants. The simplest FHE schemes consist of bootstrapped binary gates. In this gate bootstrapping mode, we show that the scheme FHEW of [24] can be expressed only in terms of an external product between a GSW and an LWE ciphertext. As a consequence of this result and of other optimizations, we decrease the running time of their bootstrapping from 690 ms to 13 ms on a single core, using a 16 MB bootstrapping key instead of 1 GB, while preserving the security parameter. In leveled homomorphic mode, we propose two methods to manipulate packed data, in order to decrease the ciphertext expansion and to optimize the evaluation of look-up tables and arbitrary functions in RingGSW-based homomorphic schemes. We also extend the automata logic, introduced in [26], to the efficient leveled evaluation of weighted automata, and present a new homomorphic counter called TBSR that supports all the elementary operations that occur in a multiplication. These improvements speed up the evaluation of most arithmetic functions in a packed leveled mode, with a noise overhead that remains additive. We then present a new circuit bootstrapping that converts LWE ciphertexts into low-noise RingGSW ciphertexts in just 137 ms, which makes the leveled mode of TFHE composable, and which is fast enough to speed up arithmetic functions compared to the gate bootstrapping approach. Finally, we provide an alternative practical analysis of LWE-based schemes, which directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key, and we propose concrete parameter sets and timing comparisons for all our constructions.
Shuichi Katsumata, Takahiro Matsuda, Atsushi Takayasu
ePrint Report
Revocable identity-based encryption (RIBE) is an extension of IBE that supports a key revocation mechanism, an indispensable feature for practical cryptographic schemes. Due to this extra feature, RIBE is often required to satisfy a strong security notion unique to the revocation setting called decryption key exposure resistance (DKER). Additionally, hierarchical IBE (HIBE) is another orthogonal extension of IBE that supports key delegation functionalities, allowing for scalable deployments of cryptographic schemes. Thus far, R(H)IBE constructions with DKER are only known from bilinear maps, where all constructions rely heavily on the so-called key re-randomization property to achieve DKER and/or the hierarchical feature. Since lattice-based schemes seem inherently ill-suited to the key re-randomization property, we currently do not know of any lattice-based R(H)IBE schemes with DKER.

In this paper, we propose the first lattice-based RHIBE scheme with DKER without relying on the key re-randomization property, departing from all previously known methods. We start our work by providing a generic construction of RIBE schemes with DKER, which uses as building blocks any two-level standard HIBE scheme and any (weak) RIBE scheme without DKER. Based on previous lattice-based RIBE constructions, our result implies the first lattice-based RIBE scheme with DKER. Then, building on top of our generic construction, we construct the first lattice-based RHIBE scheme with DKER by further exploiting the algebraic structure of lattices. To this end, we introduce a new tool called level conversion keys, which allows us to achieve the hierarchical feature without relying on the key re-randomization property.
Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Michele Orrù
ePrint Report
We continue the study of Homomorphic Secret Sharing (HSS), recently introduced by Boyle et al. (Crypto 2016, Eurocrypt 2017). A (2-party) HSS scheme splits an input x into shares (x0, x1) such that (1) each share computationally hides x, and (2) there exists an efficient homomorphic evaluation algorithm Eval such that for any function (or “program”) P from a given class it holds that Eval(x0,P)+Eval(x1,P) = P(x). Boyle et al. show how to construct an HSS scheme for branching programs, with an inverse polynomial error, using discrete-log type assumptions such as DDH.
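HSS for branching programs needs DDH-style machinery, but the Eval syntax above can be illustrated with plain additive secret sharing, which is homomorphic for linear programs only. This toy sketch (with an arbitrary prime modulus) is meant to make the equation Eval(x0,P) + Eval(x1,P) = P(x) concrete, not to reproduce the actual construction:

```python
import secrets

P_MOD = 2**61 - 1  # a prime modulus; illustrative parameter

def share(x):
    # Split x into two additive shares; each share alone is uniform.
    x0 = secrets.randbelow(P_MOD)
    x1 = (x - x0) % P_MOD
    return x0, x1

def eval_linear(share_vec, coeffs):
    # Local, non-interactive evaluation of P(x) = sum(c_i * x_i).
    return sum(c * s for c, s in zip(coeffs, share_vec)) % P_MOD

xs = [5, 17, 42]
coeffs = [3, 1, 2]
shares = [share(x) for x in xs]
y0 = eval_linear([s0 for s0, _ in shares], coeffs)
y1 = eval_linear([s1 for _, s1 in shares], coeffs)
# Outputs recombine to P(x): 3*5 + 1*17 + 2*42 = 116
assert (y0 + y1) % P_MOD == 116
```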

We make two types of contributions.

Optimizations. We introduce new optimizations that speed up the previous optimized implementation of Boyle et al. by more than a factor of 30, significantly reduce the share size, and reduce the rate of leakage induced by selective failure.

Applications. Our optimizations are motivated by the observation that there are natural application scenarios in which HSS is useful even when applied to simple computations on short inputs. We demonstrate the practical feasibility of our HSS implementation in the context of such applications.
Vladimir Kiriansky, Ilia Lebedev, Saman Amarasinghe, Srinivas Devadas, Joel Emer
ePrint Report
Software side channel attacks have become a serious concern with the recent rash of attacks on speculative processor architectures. Most attacks that have been demonstrated exploit the cache tag state as their exfiltration channel. While many existing defense mechanisms that can be implemented solely in software have been proposed, these mechanisms appear to patch specific attacks, and can be circumvented. In this paper, we propose minimal modifications to hardware to defend against a broad class of attacks, including those based on speculation, with the goal of eliminating the entire attack surface associated with the cache state covert channel.

We propose DAWG, Dynamically Allocated Way Guard, a generic mechanism for secure way partitioning of set-associative structures, including memory caches. DAWG endows a set-associative structure with a notion of protection domains to provide strong isolation. When applied to a cache, unlike existing quality-of-service mechanisms such as Intel's Cache Allocation Technology (CAT), DAWG isolates hits and metadata updates across protection domains. We describe how DAWG can be implemented on a processor with minimal modifications to modern operating systems. We argue that DAWG provides a non-interference property that is orthogonal to speculative execution, and therefore that existing attacks such as Spectre Variants 1 and 2 will not work on a system equipped with DAWG. Finally, we evaluate the performance impact of DAWG on the cache subsystem.
Manu Drijvers, Kasra Edalatnejad, Bryan Ford, Gregory Neven
ePrint Report
A multisignature scheme allows a group of signers to collaboratively sign a message, creating a single signature that convinces a verifier that every individual signer approved the message. The increased interest in technologies to decentralize trust has triggered the proposal of two highly efficient Schnorr-based multisignature schemes designed to scale up to thousands of signers, namely CoSi by Syta et al. (S&P 2016) and MuSig by Maxwell et al. (ePrint 2018). The MuSig scheme was presented with a proof under the one-more discrete-logarithm assumption, while the provable security of CoSi has so far remained an open question. In this work, we prove that CoSi and MuSig cannot be proved secure without radically departing from currently known techniques (and point out a flaw in the proof of MuSig). We then present DG-CoSi, a double-generator variant of CoSi based on the Okamoto (multi)signature scheme, and prove it secure under the discrete-logarithm assumption in the random-oracle model. Our experiments show that the second generator in DG-CoSi barely affects scalability compared to CoSi, allowing 8192 signers to collaboratively sign a message in under 1.5 seconds, making it a highly practical and provably secure alternative for large-scale deployments.
Nadim Kobeissi, Natalia Kulatova
ePrint Report
Cryptocurrencies have popularized public ledgers, known colloquially as "blockchains". While the Bitcoin blockchain is relatively simple to reason about as, effectively, a hash chain, more complex public ledgers are largely designed without any formalization of desired cryptographic properties such as authentication or integrity. These designs are then implemented without safeguards against real-world bugs, leaving little assurance of practical, real-world security.

Ledger Design Language (LDL) is a modeling language for describing public ledgers. The LDL compiler produces two outputs. The first output is an applied pi-calculus symbolic model representing the public ledger as a protocol. Using ProVerif, the protocol can be played against an active attacker, whereupon we can query for block integrity, authenticity and other properties. The second output is a formally verified read/write API for interacting with the public ledger in the real world, written in the F* programming language. F* features such as dependent types allow us to validate a block on the public ledger, for example by type-checking that its signing public key is a point on the expected curve. Using LDL's outputs, public ledger designers obtain automated assurances on the theoretical coherence and the real-world security of their designs within a single framework based on a single modeling language.
Alexei Zamyatin, Nicholas Stifter, Philipp Schindler, Edgar Weippl, William J. Knottenbelt
ePrint Report
The term near or weak blocks describes Bitcoin blocks whose PoW does not meet the required target difficulty to be considered valid under the regular consensus rules of the protocol. Near blocks are generally associated with protocol improvement proposals striving towards shorter transaction confirmation times. Existing proposals assume miners will act rationally based solely on intrinsic incentives arising from the adoption of these changes, such as earlier detection of blockchain forks.

In this paper we present Flux, a protocol extension for proof-of-work blockchains that leverages near blocks, a new block reward distribution mechanism, and an improved branch selection policy to incentivize honest participation of miners. Our protocol reduces mining variance, improves the responsiveness of the underlying blockchain in terms of transaction processing, and can be deployed as a velvet fork, without conflicting modifications to the underlying base protocol. We perform an initial analysis of selfish mining, which suggests that Flux not only provides security guarantees similar to pure Nakamoto consensus, but potentially renders selfish mining strategies less profitable.
Yunlei Zhao
ePrint Report
Aggregate signatures allow non-interactively condensing multiple individual signatures into a compact one. Besides faster verification, aggregation reduces storage and bandwidth, and is especially attractive for blockchain and cryptocurrency applications. For example, aggregate signatures can mitigate some bottlenecks that have emerged in the Bitcoin system (and indeed in almost all blockchain-based systems): low throughput, capacity and scalability, high transaction fees, etc. Unfortunately, achieving aggregate signatures from general elliptic curve groups (without bilinear maps) is a long-standing open question. Recently, there has also been renewed interest in deploying Schnorr signatures in Bitcoin, for their efficiency and flexibility. In this work, we investigate the applicability of the Gamma-signature scheme proposed by Yao and Zhao. Akin to Schnorr's scheme, a Gamma-signature is generated from a linear combination of an ephemeral secret key and the static secret key, and enjoys almost all the advantages of Schnorr's signature. In addition, Gamma-signatures have salient features in online/offline performance, stronger provable security, and deployment flexibility with interactive protocols like IKE. In this work, we identify one more key advantage of Gamma-signatures: signature aggregation, which is particularly crucial for applications to blockchain and cryptocurrency. Specifically, we first observe that Schnorr signatures cannot be aggregated in the Bitcoin system, which we demonstrate with concrete attacks. Then, we show that an aggregate signature scheme can be derived from the Gamma-signature scheme. To the best of our knowledge, this is the first aggregate signature scheme from general groups without bilinear maps. The security of aggregate Gamma-signatures is proved based on a new assumption proposed and justified in this work, referred to as non-malleable discrete logarithm (NMDL), which might be of independent interest and could find more cryptographic applications in the future.
When applying the resulting aggregate Gamma-signature to Bitcoin, the storage volume of signatures is reduced by about 50%, and signature verification time can be reduced by about 80%. Finally, we specify in detail the implementation of aggregate Gamma-signatures in Bitcoin, with minimal modifications that are in turn more friendly to segregated witness (SegWit) and provide better protection against transaction malleability attacks.
Kevin Lewi, Callen Rain, Stephen Weis, Yueting Lee, Haozhi Xiong, Benjamin Yang
ePrint Report
Secure authentication and authorization within Facebook’s infrastructure play important roles in protecting people using Facebook’s services. Enforcing security while maintaining a flexible and performant infrastructure can be challenging at Facebook’s scale, especially in the presence of varying layers of trust among our servers. Providing authentication and encryption on a per-connection basis is certainly necessary, but also insufficient for securing more complex flows involving multiple services or intermediaries at lower levels of trust.

To handle these more complicated scenarios, we have developed two token-based mechanisms for authentication. The first type is based on certificates and allows for flexible verification due to its public-key nature. The second type, known as “crypto auth tokens”, is symmetric-key based, and hence more restrictive, but also much more scalable to a high volume of requests. Crypto auth tokens rely on pseudorandom functions to generate independently-distributed keys for distinct identities.
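The abstract does not spell out the PRF construction, so as a hypothetical sketch only (not Facebook's actual scheme; all names and fields below are invented for illustration), HMAC-SHA256 keyed with a master secret is one standard way to derive an independent-looking key per identity and authenticate a payload under it:

```python
import hmac, hashlib

def derive_identity_key(master_key: bytes, identity: str) -> bytes:
    # HMAC as a PRF: each identity gets a key that looks independent
    # of every other identity's key to anyone without the master key.
    return hmac.new(master_key, identity.encode(), hashlib.sha256).digest()

def mint_token(master_key: bytes, identity: str, payload: bytes) -> bytes:
    # A toy symmetric auth token: payload authenticated under the
    # identity-specific key. (Real tokens would also encode expiry etc.)
    key = derive_identity_key(master_key, identity)
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_token(master_key: bytes, identity: str,
                 payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(mint_token(master_key, identity, payload), tag)

mk = b"master-secret"
t = mint_token(mk, "service-a", b"read:photos")
assert verify_token(mk, "service-a", b"read:photos", t)
assert not verify_token(mk, "service-b", b"read:photos", t)  # wrong identity
```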

Finally, we provide (mock) examples which illustrate how both of our token primitives can be used to authenticate real-world flows within our infrastructure, and how a token-based approach to authentication can be used to handle security more broadly in other infrastructures which have strict performance requirements and where relying on TLS alone is not enough.
Karl Wüst, Kari Kostiainen, Vedran Capkun, Srdjan Capkun
ePrint Report
Decentralized cryptocurrencies based on blockchains provide attractive features, including user privacy and system transparency, but lack active control of money supply and capabilities for regulatory oversight, both existing features of modern monetary systems. These limitations are critical, especially if the cryptocurrency is to replace, or complement, existing fiat currencies. Centralized cryptocurrencies, on the other hand, provide controlled supply of money, but lack transparency and transferability. Finally, they provide only limited privacy guarantees, as they do not offer recipient anonymity or payment value secrecy.

We propose a novel digital currency, called PRCash, in which the control of money supply is centralized, money is represented as value-hiding transactions for transferability and improved privacy, and transactions are verified in a distributed manner and published to a public ledger for verifiability and transparency. Strong privacy and regulation are seemingly conflicting features, but we overcome this technical problem with a new regulation mechanism based on zero-knowledge proofs. Our implementation and evaluation show that payments are fast and large-scale deployments are practical. PRCash is the first digital currency to provide control of money supply, transparency, regulation, and privacy at the same time, thus making its adoption as a fiat currency feasible.
Angela Jäschke, Frederik Armknecht
ePrint Report
In the context of Fully Homomorphic Encryption (FHE), which allows computations on encrypted data, Machine Learning has been one of the most popular applications in the recent past. These works, however, have all focused on supervised learning, where a labeled training set is used to configure the model. In this work, we take a first step into the realm of unsupervised learning, an important area of Machine Learning with many real-world applications, by addressing the clustering problem. To this end, we show how to implement the K-Means algorithm. This algorithm poses several challenges in the FHE context, including a division, which we tackle by using a natural encoding that allows division and may be of independent interest. While this theoretically solves the problem, performance in practice is not optimal, so we then propose some changes to the clustering algorithm to make it executable under more conventional encodings. We show that our new algorithm achieves a clustering accuracy comparable to the original K-Means algorithm, but has less than $5\%$ of its runtime.
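For reference, plaintext K-Means makes the FHE difficulty visible: the centroid-update step divides by a cluster size, which has no cheap homomorphic analogue. A minimal one-dimensional sketch of the standard algorithm (not the paper's encrypted variant):

```python
# Plaintext K-Means. The centroid update divides by a cluster size --
# the step that is awkward under FHE and that the paper's encoding
# (and later its modified algorithm) is designed to handle.
def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [(p - c) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: mean of each cluster -- here lives the division.
        centroids = [sum(cl) / len(cl) if cl else c
                     for cl, c in zip(clusters, centroids)]
    return centroids

pts = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(sorted(kmeans(pts, [0.0, 10.0])))  # two centroids, near 1.0 and 9.0
```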
Zhengjun Cao, Lihua Liu
ePrint Report
The Clauser-Horne-Shimony-Holt (CHSH) inequality, an extension of Bell's inequality, is of great importance to modern quantum computation and quantum cryptography. So far, all experimental demonstrations of entanglement have been designed to check Bell's inequality or the CHSH inequality. In this note, we specify the mathematical assumptions needed in the argument for the CHSH inequality. We then show that the mathematical argument for this inequality is entirely independent of any physical interpretation, including the hidden-variable interpretation of the EPR thought experiment and the Copenhagen interpretation of quantum mechanics.
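The quantum violation at stake can be checked numerically from the textbook singlet correlation $E(x,y) = -\cos(x-y)$: local hidden-variable models bound the CHSH combination by 2, while the standard angle choice below reaches $2\sqrt{2}$ (a well-known fact, shown here only to make the inequality concrete):

```python
import math

# CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# Local hidden-variable models obey |S| <= 2; the quantum singlet
# state, with correlation E(x, y) = -cos(x - y), reaches 2*sqrt(2).
E = lambda x, y: -math.cos(x - y)

a, a2 = 0.0, math.pi / 2            # Alice's two measurement angles
b, b2 = math.pi / 4, -math.pi / 4   # Bob's two measurement angles

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S), 2 * math.sqrt(2))  # |S| = 2*sqrt(2) ≈ 2.828 > 2
```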
Willy Quach, Hoeteck Wee, Daniel Wichs
ePrint Report
We introduce a new cryptographic primitive called laconic function evaluation (LFE). Using LFE, Alice can compress a large circuit $f$ into a small digest. Bob can encrypt some data $x$ under this digest in a way that enables Alice to recover $f(x)$ without learning anything else about Bob's data. For the scheme to be laconic, we require that the size of the digest, the run-time of the encryption algorithm and the size of the ciphertext should all be small, much smaller than the circuit-size of $f$. We construct an LFE scheme for general circuits under the learning with errors (LWE) assumption, where the above parameters only grow polynomially with the depth but not the size of the circuit. We then use LFE to construct secure 2-party and multi-party computation (2PC, MPC) protocols with novel properties:

* We construct a 2-round 2PC protocol between Alice and Bob with respective inputs $x_A,x_B$ in which Alice learns the output $f(x_A,x_B)$ in the second round. This is the first such protocol which is "Bob-optimized", meaning that Alice does all the work while Bob's computation and the total communication of the protocol are smaller than the size of the circuit $f$ or even Alice's input $x_A$. In contrast, prior solutions based on fully homomorphic encryption are "Alice-optimized".

* We construct an MPC protocol, which allows $N$ parties to securely evaluate a function $f(x_1,...,x_N)$ over their respective inputs, where the total amount of computation performed by the parties during the protocol execution is smaller than that of evaluating the function itself! Each party has to individually pre-process the circuit $f$ before the protocol starts and post-process the protocol transcript to recover the output after the protocol ends, and the cost of these steps is larger than the circuit size. However, this gives the first MPC where the computation performed by each party during the actual protocol execution, from the time the first protocol message is sent until the last protocol message is received, is smaller than the circuit size.
Jung Hee Cheon, Minki Hhan, Jiseung Kim, Changmin Lee
ePrint Report
In this paper, we propose cryptanalyses of all existing indistinguishability obfuscation (iO) candidates based on branching programs (BPs) over the GGH13 multilinear map. To achieve this, we introduce two novel techniques, program converting and matrix zeroizing, which can be applied to a wide range of obfuscation structures and BPs. We then prove that the existing general-purpose BP obfuscations over the GGH13 multilinear map with the current parameters cannot achieve indistinguishability. More precisely, the recent BP obfuscation suggested by Garg et al., which had remained secure against all previously known attacks, and the first candidate indistinguishability obfuscation with input-unpartitionable branching programs are not secure against our attack. Previously, no probabilistic polynomial-time attack was known for these two cases.
Cencen Wan, Yuncong Zhang, Chen Pan, Zhiqiang Liu, Yu Long, Zhen Liu, Yu Yu, Shuyang Tang
ePrint Report
Proof of Work (PoW), a fundamental blockchain protocol, has been widely applied and thoroughly tested in various decentralized cryptocurrencies, due to its intriguing merits, including trustworthy sustainability, robustness against Sybil attacks, delicate incentive-compatibility, and openness to any participant. Meanwhile, PoW-powered blockchains still suffer from poor efficiency, potential selfish mining, suboptimal fairness and the extreme inconvenience of protocol upgrading. It is therefore of great interest to design new PoW-based blockchain protocols that address or relieve the above issues, making PoW more applicable and feasible.

To do so, we first take advantage of the basic framework (i.e., the two-layer chain structure) adopted in Bitcoin-NG, introduced by Eyal et al. to significantly extend the throughput of Bitcoin-derived blockchains via blocks of a two-layer structure. By presenting a two-level mining mechanism and incorporating it into the two-layer chain structure, we inherit the high-throughput merit while eliminating Bitcoin-NG's vulnerability to microblock-swamping attacks and attaining a better fairness property. Furthermore, to tackle the selfish-mining issue, strengthen robustness against the "51%" attack by PoW miners, and offer flexibility for future protocol updates, we borrow the ticket-voting mechanism from DASH and Decred and carefully combine it with our improved structure to build a novel efficient, robust and flexible blockchain protocol, named Goshawk. Last but not least, this scheme has been implemented and deployed in the testnet of the public blockchain project Hcash for months, and has demonstrated its stability and high efficiency in this real-world test.
Gideon Samid
ePrint Report
Cryptographic security is built on two ingredients: a sufficiently large key space, and a sufficiently complex processing algorithm. Driven by historic inertia, we use fixed-size small keys and dial up the complexity of our algorithms. It is time to examine this trend. Effective cryptographic complexity is difficult to achieve, more difficult to verify, and it keeps the responsibility for security in the hands of a few cipher implementers and fewer cipher designers. By contrast, adding more key bits over simple-to-analyze mathematics may guarantee a security advantage per increase in key size. More revolutionary still, the decision of how much randomness to deploy may be delegated to the owner of the protected data (the cipher user), which is where it should reside. Such a shift of security responsibility would deny governments the ability to violate their citizens' privacy on a wholesale basis. In order to catch on, we need a new class of ciphers. We point to several published options, and invite a community debate on this strategic proposition.