International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

02 March 2022

Charles Momin, Gaëtan Cassiers, François-Xavier Standaert
ePrint Report
Masking is an important countermeasure against side-channel attacks, but its secure implementation is known to be error-prone. The automated verification and generation of masked designs is therefore an important theoretical and practical challenge. In a recent work, Knichel et al. proposed a tool for the automated generation of masked hardware implementations satisfying strong security properties (e.g., glitch-freeness and composability). In this paper, we study the possibility of improving their results based on manual performance optimizations for the AES algorithm. Our main conclusion is that as the target architecture becomes more serial, such a handcrafted approach gains interest. For example, we reach latency reductions by a factor of six for 8-bit architectures. We conclude the paper by discussing the extent to which such optimizations could be integrated into the tool of Knichel et al. As a bonus, we adapt a composition-based verification tool to check that our implementations are robust against glitches and transitions, and confirm the security order of exemplary implementations with a preliminary leakage assessment.
Joël Alwen, Benedikt Auerbach, Miguel Cueto Noval, Karen Klein, Guillermo Pascual-Perez, Krzysztof Pietrzak, Michael Walter
ePrint Report
Messaging platforms like Signal are widely deployed and provide strong security in an asynchronous setting. It is a challenging problem to construct a protocol with similar security guarantees that can efficiently scale to large groups. A major bottleneck is the frequent key rotations users need to perform to achieve post-compromise forward security.

In current proposals - most notably in TreeKEM (which is part of the IETF's Messaging Layer Security (MLS) protocol draft) - for users in a group of size $n$ to rotate their keys, they must each craft a message of size $\log(n)$ to be broadcast to the group using an (untrusted) delivery server.
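The $\log(n)$ cost comes from the binary ratchet tree underlying TreeKEM: a user sits at a leaf, and a key rotation must refresh every key on the path from that leaf to the root. The following toy sketch (illustrative only; real MLS/TreeKEM carries far more structure) counts the nodes such an update touches:

```python
# Toy illustration of why a TreeKEM-style key rotation costs O(log n):
# users sit at the leaves of a complete binary tree, and an update must
# refresh every key on the path from the updater's leaf to the root.
def update_path(leaf: int, n_leaves: int) -> list[int]:
    """Node indices refreshed by `leaf`, in a heap layout where node i
    has parent (i - 1) // 2 and the leaves fill the last level."""
    node = n_leaves - 1 + leaf           # position of the leaf in the heap
    path = [node]
    while node > 0:                      # climb to the root
        node = (node - 1) // 2
        path.append(node)
    return path

# One fresh key per path node, one ciphertext per node's sibling:
print(len(update_path(leaf=5, n_leaves=1024)))   # 11 == log2(1024) + 1
```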

In larger groups, having users sequentially rotate their keys requires too much bandwidth (or takes too long), so variants allowing any $T \leq n$ users to simultaneously rotate their keys in just $2$ communication rounds have been suggested (e.g. "Propose and Commit" by MLS). Unfortunately, $2$-round concurrent updates are either damaging or expensive (or both); i.e. they either result in future operations being more costly (e.g. via "blanking" or "tainting") or are costly themselves, requiring $\Omega(T)$ communication for each user [Bienstock et al., TCC'20].

In this paper we propose CoCoA, a new scheme that allows for $T$ concurrent updates that are neither damaging nor costly. That is, they add no cost to future operations, yet they only require $O(\log^2(n))$ communication per user. To circumvent the [Bienstock et al.] lower bound, CoCoA increases the number of rounds needed to complete all updates from $2$ up to (at most) $\log(n)$; though typically fewer rounds are needed.

The key insight of our protocol is the following: in the (non-concurrent version of) TreeKEM, a delivery server which gets $T$ concurrent update requests will approve one and reject the remaining $T-1$. In contrast, our server attempts to apply all of them. If more than one user requests to rotate the same key during a round, the server arbitrarily picks a winner. Surprisingly, we prove that regardless of how the server chooses the winners, all previously compromised users will recover after at most $\log(n)$ such update rounds.

To keep the communication complexity low, CoCoA is a server-aided CGKA. That is, the delivery server no longer blindly forwards packets, but instead actively computes individualized packets tailored to each user. As the server is untrusted, this change requires us to develop new mechanisms ensuring robustness of the protocol.
Vipul Goyal, Yuval Ishai, Yifan Song
ePrint Report
A $t$-private circuit for a function $f$ is a randomized Boolean circuit $C$ that maps a randomized encoding of an input $x$ to an encoding of the output $f(x)$, such that probing $t$ wires anywhere in $C$ reveals nothing about $x$. Private circuits can be used to protect embedded devices against side-channel attacks. Motivated by the high cost of generating fresh randomness in such devices, several works have studied the question of minimizing the randomness complexity of private circuits.
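To make the object of study concrete, here is a minimal first-order ($t = 1$) instance of the classical ISW-style approach in Python; it is a generic textbook gadget, not this paper's construction, and illustrates where the randomness complexity comes from: each AND gate consumes a fresh random bit.

```python
import secrets

def encode(x: int) -> tuple[int, int]:
    """Split bit x into two shares whose XOR equals x (a 1-private encoding).
    Probing any single share reveals nothing about x."""
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(x: tuple[int, int], y: tuple[int, int]) -> tuple[int, int]:
    """ISW-style AND on shared bits, consuming one fresh random bit."""
    r = secrets.randbits(1)
    z0 = (x[0] & y[0]) ^ r
    z1 = (x[1] & y[1]) ^ r ^ (x[0] & y[1]) ^ (x[1] & y[0])
    return z0, z1

for a in (0, 1):
    for b in (0, 1):
        z = masked_and(encode(a), encode(b))
        assert z[0] ^ z[1] == a & b      # decoding recovers a AND b
```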

The best known upper bound, due to Coron et al. (Eurocrypt 2020), is $O(t^2\cdot\log(ts))$ random bits, where $s$ is the circuit size of $f$. We improve this to $O(t\cdot \log(ts))$, including the randomness used by the input encoder, and extend this bound to the stateful variant of private circuits. Our constructions are semi-explicit in the sense that there is an efficient randomized algorithm that generates the private circuit $C$ from a circuit for $f$ with negligible failure probability.
Aldo Gunsing, Bart Mennink
ePrint Report
A well-established PRP-to-PRF conversion design is truncation: one evaluates an $n$-bit pseudorandom permutation on a certain input, and truncates the result to $a$ bits. The construction is known to achieve tight $2^{n-a/2}$ security. Truncation has gained popularity due to its appearance in the GCM-SIV key derivation function (ACM CCS 2015). This key derivation function makes four evaluations of AES, truncates the outputs to $n/2$ bits, and concatenates these to get a $2n$-bit subkey.

In this work, we demonstrate that truncation is wasteful. In more detail, we present the Summation-Truncation Hybrid (STH). At a high level, the construction consists of two parallel evaluations of truncation, where the truncated $(n-a)$-bit chunks are not discarded but rather summed together and appended to the output. We prove that STH achieves a similar security level as truncation, and thus that the $n-a$ bits of extra output are rendered for free. In the application of GCM-SIV, the current key derivation can be used to output $3n$ bits of random material, or it can be reduced to three primitive evaluations. Both changes come with no security loss.
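As a minimal sketch of the construction just described (with a toy stand-in permutation; the paper of course assumes an $n$-bit PRP such as AES, and all parameters here are illustrative):

```python
N, A = 16, 10                        # n-bit permutation, a-bit truncation

def truncate(perm, x: int) -> int:
    """Plain truncation: evaluate the permutation, keep the top a bits."""
    return perm(x) >> (N - A)

def sth(perm, x1: int, x2: int) -> tuple[int, int, int]:
    """Summation-Truncation Hybrid: two truncations, plus the XOR (sum
    over GF(2)) of the two (n-a)-bit chunks truncation would discard."""
    y1, y2 = perm(x1), perm(x2)
    mask = (1 << (N - A)) - 1
    return y1 >> (N - A), y2 >> (N - A), (y1 & mask) ^ (y2 & mask)

toy_perm = lambda x: (40503 * x + 12345) % (1 << N)   # toy permutation, NOT a PRP
print(truncate(toy_perm, 1), sth(toy_perm, 1, 2))
```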
Aldo Gunsing, Bart Mennink
ePrint Report
One oft-endeavored security property for cryptographic hash functions is collision resistance: it should be computationally infeasible to find distinct inputs $x,x'$ such that $H(x) = H(x')$, where $H$ is the hash function. Unruh (EUROCRYPT 2016) proposed collapseability as its quantum equivalent. The Merkle-Damgård and sponge hashing modes have recently been proven to be collapseable under the assumption that the underlying primitive is collapseable. These modes are inherently sequential. In this work, we investigate collapseability of tree hashing. We first consider fixed length tree hashing modes, and derive conditions under which their collapseability can be reduced to the collapseability of the underlying compression function. Then, we extend the result to two methods for achieving variable length hashing: tree hashing with domain separation between message and chaining value, and tree hashing with length encoding at the end of the tree. The proofs are performed using the collapseability composability framework of Fehr (TCC 2018), which allows us to discard deeply technical quantum details and to focus on proper composition of the tree hashes from their compression function.
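For orientation, a minimal binary tree hash with the kind of message/chaining-value domain separation mentioned above looks as follows (a generic Merkle-style sketch with illustrative prefixes, not the paper's exact modes):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_hash(leaves: list[bytes]) -> bytes:
    """Binary tree hash over 2^k leaves; the 0x00/0x01 prefixes
    domain-separate message blocks from chaining values."""
    level = [h(b"\x00" + leaf) for leaf in leaves]        # message blocks
    while len(level) > 1:
        level = [h(b"\x01" + level[i] + level[i + 1])     # chaining values
                 for i in range(0, len(level), 2)]
    return level[0]

print(tree_hash([b"m0", b"m1", b"m2", b"m3"]).hex())
```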
Aldo Gunsing, Joan Daemen, Bart Mennink
ePrint Report
We present two tweakable wide block cipher modes from doubly-extendable cryptographic keyed (deck) functions and a keyed hash function: double-decker and docked-double-decker. Double-decker is a direct generalization of Farfalle-WBC of Bertoni et al. (ToSC 2017(4)), and is a four-round Feistel network on two arbitrarily large branches, where the middle two rounds call deck functions and the first and last rounds call the keyed hash function. Docked-double-decker is a variant of double-decker where the bulk of the input to the deck functions is moved to the keyed hash functions. We prove that the distinguishing advantage of the resulting wide block ciphers is simply two times the sum of the pseudorandom function distinguishing advantage of the deck function and the blinded keyed hashing distinguishing advantage of the keyed hash functions. We demonstrate that blinded keyed hashing is more general than the conventional notion of XOR-universality, and that it allows us to instantiate our constructions with keyed hash functions that have a very strong claim on bkh security but not necessarily on XOR-universality, such as Xoofffie (ePrint 2018/767). The bounds of double-decker and docked-double-decker are moreover reduced tweak-dependent, informally meaning that collisions on the keyed hash function for different tweaks only have a limited impact. We describe two use cases that can exploit this property opportunistically to get stronger security than what would be achieved with prior solutions: SSD encryption, where each sector can only be written to a limited number of times, and incremental tweaks, where one includes the state of the system in the variable-length tweak and appends new data incrementally.
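For the structural idea only, here is a sketch of a four-round Feistel network with the round-function placement described above: a keyed hash in the outer rounds and a deck function in the middle rounds. HMAC stands in for both primitives, the keying and data flow are illustrative rather than the paper's specification, and the demo fixes 32-byte branches so the stand-in hash output matches the branch length:

```python
import hashlib, hmac, os

def kh(key: bytes, data: bytes) -> bytes:
    """Stand-in keyed hash (the paper assumes bkh security, not HMAC)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def deck(key: bytes, data: bytes, outlen: int) -> bytes:
    """Stand-in deck function: keyed, arbitrary input and output length."""
    out, ctr = b"", 0
    while len(out) < outlen:
        out += hmac.new(key, ctr.to_bytes(4, "big") + data,
                        hashlib.sha256).digest()
        ctr += 1
    return out[:outlen]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel4(kh_key, deck_key, tweak, l, r, reverse=False):
    """Keyed hash in rounds 1 and 4, deck functions in rounds 2 and 3.
    Each round XORs a function of one branch into the other, so running
    the rounds in reverse order inverts the cipher."""
    rounds = [
        lambda l, r: (xor(l, kh(kh_key, tweak + r)), r),         # round 1
        lambda l, r: (l, xor(r, deck(deck_key, l, len(r)))),     # round 2
        lambda l, r: (xor(l, deck(deck_key, r, len(l))), r),     # round 3
        lambda l, r: (l, xor(r, kh(kh_key, tweak + l))),         # round 4
    ]
    for f in (reversed(rounds) if reverse else rounds):
        l, r = f(l, r)
    return l, r

k1, k2, tw = os.urandom(32), os.urandom(32), b"sector-42"
pl, pr = os.urandom(32), os.urandom(32)
cl, cr = feistel4(k1, k2, tw, pl, pr)
assert feistel4(k1, k2, tw, cl, cr, reverse=True) == (pl, pr)
```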
Hannah Davis, Denis Diemert, Felix Günther, Tibor Jager
ePrint Report
The pre-shared key (PSK) handshake modes of TLS 1.3 allow for the performant, low-latency resumption of previous connections and are widely used on the Web and by resource-constrained devices, e.g., in the Internet of Things. Taking advantage of these performance benefits with optimal and theoretically-sound parameters requires tight security proofs. We give the first tight security proofs for the TLS 1.3 PSK handshake modes.

Our main technical contribution is to address a gap in prior tight security proofs of TLS 1.3 which modeled either the entire key schedule or components thereof as independent random oracles to enable tight proof techniques. These approaches ignore existing interdependencies in TLS 1.3's key schedule, arising from the fact that the same cryptographic hash function is used in several components of the key schedule and the handshake more generally. We overcome this gap by proposing a new abstraction for the key schedule and carefully arguing its soundness via the indifferentiability framework. Interestingly, we observe that for one specific configuration, PSK-only mode with hash function SHA-384, it seems difficult to argue indifferentiability due to a lack of domain separation between the various hash function usages. We view this as an interesting insight for the design of protocols, such as future TLS versions.
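For reference, every stage of the TLS 1.3 key schedule is built from HKDF, whose extract and expand stages both reduce to HMAC over the negotiated hash (SHA-256 or SHA-384). The sketch below implements generic HKDF (RFC 5869) to show why all key-schedule components share a single hash function, which is exactly the source of the domain-separation question; TLS-specific labels and transcript handling are omitted:

```python
import hmac

def hkdf_extract(salt: bytes, ikm: bytes, hash_name: str = "sha384") -> bytes:
    """HKDF-Extract (RFC 5869): a single HMAC call."""
    return hmac.new(salt, ikm, hash_name).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int,
                hash_name: str = "sha384") -> bytes:
    """HKDF-Expand (RFC 5869): an HMAC feedback chain."""
    out, block, ctr = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([ctr]), hash_name).digest()
        out += block
        ctr += 1
    return out[:length]

# The same hash backs extraction, expansion, and the transcript hash in
# TLS 1.3, so proofs cannot simply model these usages as independent
# random oracles; they must argue domain separation, which the authors
# note seems difficult in PSK-only mode with SHA-384.
early_secret = hkdf_extract(salt=b"\x00" * 48, ikm=b"\x00" * 48)
print(early_secret.hex())
```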

For all other configurations however, our proofs significantly tighten the security of the TLS 1.3 PSK modes, confirming standardized parameters (for which prior bounds provided subpar or even void guarantees) and enabling a theoretically-sound deployment.
Katharina Boudgoust, Corentin Jeudy, Adeline Roux-Langlois, Weiqiang Wen
ePrint Report
The Module Learning With Errors problem (M-LWE) has gained popularity in recent years for its security-efficiency balance, and its hardness has been established for a number of variants. In this paper, we focus on proving the hardness of (search) M-LWE for general secret distributions, provided they carry sufficient min-entropy. This is called entropic hardness of M-LWE. First, we adapt the line of proof of Brakerski and Döttling on R-LWE (TCC’20) to prove that the existence of certain distributions implies the entropic hardness of M-LWE. Then, we provide one such distribution whose required properties rely on the hardness of the decisional Module-NTRU problem.
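For readers meeting the problem here first, search M-LWE (in one standard formulation; notation generic, not necessarily the paper's) asks to recover the secret from noisy module inner products:

```latex
% Search M-LWE over R_q = Z_q[X]/(X^n + 1) with module rank d:
% given independent samples
\[
  \bigl(a_i,\; b_i = \langle a_i, s \rangle + e_i\bigr) \in R_q^d \times R_q,
  \qquad a_i \leftarrow U(R_q^d),\quad e_i \leftarrow \chi,
\]
% recover s \in R_q^d, where \chi is a small-error distribution.
% "Entropic hardness" asks that this remain hard whenever the distribution
% of s merely has sufficient min-entropy, rather than being uniform.
```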
Suvradip Chakraborty, Bernardo Magri, Jesper Buus Nielsen, Daniele Venturi
ePrint Report
Subversion attacks undermine the security of cryptographic protocols by replacing a legitimate honest party's implementation with one that leaks information in an undetectable manner. An important limitation of all currently known techniques for designing cryptographic protocols with security against subversion attacks is that they do not automatically guarantee security in the realistic setting where a protocol session may run concurrently with other protocols.

We remedy this situation by providing a foundation of *reverse firewalls* (Mironov and Stephens-Davidowitz, EUROCRYPT'15) in the *universal composability* (UC) framework (Canetti, FOCS'01 and J. ACM'20). In more detail, our contributions are threefold:

- We generalize the UC framework to the setting where each party consists of a core (which has secret inputs and is in charge of generating protocol messages) and a firewall (which has no secrets and sanitizes the outgoing/incoming communication from/to the core). Both the core and the firewall can be subject to different flavors of corruption, modeling different kinds of subversion attacks.

For instance, we capture the setting where a subverted core looks like the honest core to any efficient test, yet it may leak secret information via covert channels (which we call *specious subversion*).

- We show how to sanitize UC commitments and UC coin tossing against specious subversion, under the DDH assumption.

- We show how to sanitize the classical GMW compiler (Goldreich, Micali and Wigderson, STOC 1987) for turning MPC with security in the presence of semi-honest adversaries into MPC with security in the presence of malicious adversaries. This yields a completeness theorem for maliciously secure MPC in the presence of specious subversion.

Additionally, all our sanitized protocols are *transparent*, in the sense that communicating with a sanitized core looks indistinguishable from communicating with an honest core. Thanks to the composition theorem, our methodology allows, for the first time, the design of subversion-resilient protocols by sanitizing different sub-components in a modular way.
Ling Sun, Bart Preneel, Wei Wang, Meiqin Wang
ePrint Report
GIFT-64 is a 64-bit block cipher with a 128-bit key that is more lightweight than PRESENT. This paper provides a detailed analysis of GIFT-64 against differential and linear attacks. Our work complements automatic search methods for the best differential and linear characteristics with a careful manual analysis. This hybrid approach leads to new insights. In the differential setting, we theoretically explain the existence of differential characteristics with two active S-boxes per round and derive some novel properties of these characteristics. Furthermore, we prove that all optimal differential characteristics of GIFT-64 covering more than seven rounds must activate two S-boxes per round, and we can construct all optimal characteristics by hand. In parallel to the work in the differential setting, we conduct a similar analysis in the linear setting. However, unlike the clear view in the differential setting, the optimal linear characteristics of GIFT-64 must have at least one round activating only one S-box. Moreover, with the assistance of automatic search methods, we identify 24 GIFT-64 variants achieving better resistance against differential attacks while maintaining a similar security level against linear attacks. Since the new variants strengthen GIFT-64 against statistical cryptanalysis, we claim that the number of rounds could be reduced from 28 to 26 for the variants. This observation enables us to create a cipher with lower energy consumption than GIFT-64. As with GIFT-64, we do not claim any related-key security for the round-reduced variant, as this is not relevant for most applications.
Ignacio Cascudo, Bernardo David, Lydia Garms, Anders Konring
ePrint Report
Achieving adaptive (or proactive) security in cryptographic protocols is notoriously difficult due to the adversary's power to dynamically corrupt parties as the execution progresses. Inspired by the work of Benhamouda et al. in TCC 2020, Gentry et al. in CRYPTO 2021 introduced the YOSO (You Only Speak Once) model for constructing adaptively (or proactively) secure protocols in massively distributed settings (e.g. blockchains). In this model, instead of having all parties execute an entire protocol, smaller anonymous committees are randomly chosen to execute each individual round of the protocol. After playing their role, parties encrypt protocol messages towards the next anonymous committee and erase their internal state before publishing their ciphertexts. However, a big challenge remains in realizing YOSO protocols: efficiently encrypting messages towards anonymous parties selected at random without learning their identities, while proving the encrypted messages are valid w.r.t. the protocol. In particular, the protocols of Benhamouda et al. and of Gentry et al. require showing ciphertexts contain valid shares of secret states. We propose concretely efficient methods for encrypting a protocol's secret state towards a random anonymous committee. We start by proposing a very simple and efficient scheme for encrypting messages towards randomly and anonymously selected parties. We then show constructions of publicly verifiable secret (re-)sharing (PVSS) schemes with concretely efficient proofs of (re-)share validity that can be generically instantiated from encryption schemes with certain linear homomorphic properties. Finally, we show that our PVSS schemes can be efficiently realized from our encryption scheme.

25 February 2022

John Kelsey, Stefan Lucks
ePrint Report
We show how to construct a threshold version of stateful hash-based signature schemes like XMSS (defined in RFC 8391) and LMS (defined in RFC 8554). Our techniques assume a trusted dealer and secure point-to-point communications; are efficient in terms of communication and computation; and require at least one party to have a large (but practical) amount of storage. We propose the addition of an untrusted Helper to manage the large storage required without being given access to any secret information. We prove the security of our schemes in a straightforward way, reducing their strength to that of the underlying hash-based signature scheme. Our schemes are quite practical, and substantially decrease the risk of accidental key reuse in hash-based signature schemes.
Hamza Abusalah, Georg Fuchsbauer, Peter Gaži, Karen Klein
ePrint Report
The success of blockchains has led to ever-growing ledgers that are stored by all participating full nodes. In contrast, light clients only store small amounts of blockchain-related data and rely on the mediation of full nodes when interacting with the ledger. A broader adoption of blockchains calls for protocols that make this interaction trustless.

We revisit the design of light-client blockchain protocols from the perspective of classical proof-system theory, and explain the role that proofs of sequential work (PoSWs) can play in it. To this end, we define a new primitive called succinct non-interactive argument of chain knowledge (SNACK), a non-interactive proof system that provides clear security guarantees to a verifier (a light client) even when interacting only with a single dishonest prover (a full node). We show how augmenting any blockchain with any graph-labeling PoSW (GL-PoSW) enables SNACK proofs for this blockchain. We also provide a unified and extended definition of GL-PoSWs covering all existing constructions, and describe two new variants. We then show how SNACKs can be used to construct light-client protocols, and highlight some deficiencies of existing designs, along with mitigations. Finally, we introduce incremental SNACKs which could provide a new approach to light mining.
Ziyu Zhao, Jintai Ding
ePrint Report
Lattice problems such as the NTRU and LWE problems are widely used as the security foundation of post-quantum cryptosystems, and lattice reduction via the BKZ algorithm is currently the most efficient way to solve them. In this paper, we give several further improvements to the BKZ algorithm, which can be used with different SVP subroutines based on both enumeration and sieving. In combination, these improvements provide a speed-up of $2^{3\sim 4}$ in total, which is significant in concrete attacks. Using these new techniques, we solved the 656-dimensional ideal lattice challenge in only 380 thread hours (with an enumeration-based SVP subroutine), much less than the previous record (which cost 4600 thread hours in total). With these improvements enabled, the new BKZ algorithm can still be simulated easily. One can also use this simulator to find the blocksize strategy (and the corresponding cost) that makes the $\mathrm{Pot}$ of the basis (defined in Section 5.2) decrease as fast as possible, which means the length of the first basis vector decreases the fastest if we accept the GSA assumption. This is useful for analyzing concrete attacks on lattice-based cryptography.
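For orientation, a baseline (unimproved) BKZ run with a progressive blocksize strategy looks as follows in the fpylll library; this is a generic usage sketch with illustrative parameters, not the authors' improved implementation:

```python
# Baseline progressive BKZ with fpylll (https://github.com/fplll/fpylll).
from fpylll import IntegerMatrix, LLL, BKZ

A = IntegerMatrix.random(80, "qary", k=40, bits=20)   # random q-ary lattice
LLL.reduction(A)                      # cheap preprocessing
for beta in range(10, 41, 5):         # progressively increase the blocksize
    A = BKZ.reduction(A, BKZ.Param(block_size=beta))
print(A[0].norm())                    # length of the first basis vector
```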
Lijing Zhou, Ziyu Wang, Xiao Zhang, Yu Yu
ePrint Report
Fully homomorphic encryption (FHE) provides a natural solution for privacy-preserving outsourced computation, but a straightforward FHE-based protocol may suffer from high computational overhead and a large ciphertext expansion rate, especially for computation-intensive tasks over large data, which are the main obstacles towards practical outsourced computation. In this paper, we present HEAD, a generic outsourced computation protocol that can be based on most mainstream FHE schemes (typically a BGV- or GSW-style scheme) with more compact storage and lower computational costs than the straightforward FHE-based counterpart. In particular, our protocol enjoys a ciphertext/plaintext expansion rate of 1 (i.e., no expansion) at the server side. This is achieved by means of ``pseudorandomly masked'' ciphertexts and efficient transformations of them into FHE ciphertexts to facilitate privacy-preserving computation. Depending on the underlying FHE in use, our HEAD protocol can be instantiated with three masking techniques, namely modulo-subtraction masking, modulo-division masking, and XOR masking, to support decimal integer, real, or binary messages. Thanks to these masking techniques, various homomorphic computation tasks are made more efficient and accumulate less noise. We evaluate the performance of our protocol on the BFV, BGV, CKKS, and FHEW schemes based on the PALISADE and SEAL libraries, which confirms the theoretical analysis of the reduced computation costs and noise. For example, the computation time in our BFV tests on an x86 server for the sum or product of eight ciphertexts is reduced from 336.3 ms to 6.3 ms, or from 1219.4 ms to 9.5 ms, respectively. Furthermore, our multi-input masking and unmasking operations are more flexible than FHE SIMD-batching, supporting an on-demand configuration of FHE during each outsourced computation request.
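As a toy illustration of the masking idea behind the expansion-rate-1 claim (all names and parameters here are illustrative, not HEAD's specification): the server-side value is a pseudorandomly masked plaintext of the same size as the message, and unmasking is a modular addition that can later be carried out homomorphically.

```python
import hashlib

T = 65537                      # toy plaintext modulus

def mask_stream(key: bytes, index: int) -> int:
    """Pseudorandom mask derived from a key and a position (toy PRF)."""
    d = hashlib.sha256(key + index.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % T

key, m, i = b"client-key", 12345, 0
u = (m - mask_stream(key, i)) % T   # modulo-subtraction-masked value stored
                                    # at the server: same size as m itself
# Server side: embed u as an FHE plaintext and homomorphically add an
# encryption of the mask to obtain Enc(m); here we only check the arithmetic.
assert (u + mask_stream(key, i)) % T == m
```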
Jesper Buus Nielsen, João Ribeiro, Maciej Obremski (randomized author ordering)
ePrint Report
We distill a simple information-theoretic model for randomness extraction, motivated by the task of generating publicly verifiable randomness in blockchain settings, and closely related to You-Only-Speak-Once (YOSO) protocols (CRYPTO 2021). With the goal of avoiding denial-of-service attacks, parties speak only once and in sequence by broadcasting a public value and forwarding secret values to future parties. Additionally, an unbounded adversary can corrupt any chosen subset of at most $t$ parties. In contrast, existing YOSO protocols only handle random corruptions. As a notable example, considering worst-case corruptions allows us to reduce trust in the role assignment mechanism, which is assumed to be perfectly random in YOSO.

We study the maximum corruption threshold $t$ which allows for unconditional randomness extraction in our model:

- With respect to feasibility, we give protocols for $t$ corruptions and $n=6t+1$ or $n=5t$ parties depending on whether the adversary learns secret values forwarded to corrupted parties immediately once they are sent or only once the corrupted party is executed, respectively. Both settings are motivated by practical implementations of secret value forwarding. To design such protocols, we go beyond the committee-based approach that is sufficient for random corruptions in YOSO but turns out to be sub-optimal for chosen corruptions.

- To complement our protocols, we show that low-error randomness extraction is impossible with corruption threshold $t$ and $n \leq 4t$ parties in both settings above. This also provides a separation between chosen and random corruptions, since the latter allows for randomness extraction with close to $n/2$ random corruptions.
Tristan Nemoz, Zoé Amblard, Aurélien Dupin
ePrint Report
We extend the work performed by Anand, Targhi, Tabia and Unruh (PQCrypto 2016) of studying the post-quantum security of the CBC, CFB, OFB and CTR modes of operation by considering all possible notions of qIND-qCPA security defined by Carstens, Ebrahimi, Tabia and Unruh (TCC 2021).

We show that the results obtained by Anand et al. for the qIND-qCPA-P6 security of these modes carry over to the other IND-qCPA notions, namely the qIND-qCPA-P10 and qIND-qCPA-P11 ones. We also show that CFB, CTR and OFB are insecure according to all of the other notions, regardless of the block cipher they are used with. We provide several results concerning the (in)security of CBC. First of all, we show that it is insecure according to the qIND-qCPA-P9 notion. By distinguishing based on the nature of the underlying block cipher, we prove its qIND-qCPA-P5 security when based upon a qPRP, and we prove that it can be qIND-qCPA-P13 insecure when based upon a PRP, thus fully characterizing it. We illustrate the latter result by using as a counterexample the same block cipher used by Anand et al.
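For concreteness, CBC (the mode whose security above depends on the underlying block cipher) chains blocks as $c_i = E_k(m_i \oplus c_{i-1})$ with $c_0$ the IV; a standard-mode sketch built on AES from the pyca/cryptography package, unrelated to the paper's proofs:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_encrypt(key: bytes, iv: bytes, blocks: list[bytes]) -> list[bytes]:
    """CBC from a raw block cipher: c_i = E_k(m_i XOR c_{i-1}), c_0 = IV."""
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    prev, out = iv, []
    for m in blocks:                         # each m is exactly 16 bytes
        prev = ecb.update(bytes(a ^ b for a, b in zip(m, prev)))
        out.append(prev)
    return out

key, iv = os.urandom(16), os.urandom(16)
print([c.hex() for c in cbc_encrypt(key, iv, [b"A" * 16, b"B" * 16])])
```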
Giuseppe Persiano, Kevin Yeo
ePrint Report
We present a lower bound for the static cryptographic data structure problem of single-server private information retrieval (PIR). PIR considers the setting where a server holds a database of $n$ entries and a client wishes to privately retrieve the $i$-th entry without revealing the index $i$ to the server. In our work, we focus on PIR with preprocessing where an $r$-bit hint may be computed in a preprocessing stage and stored by the server to be used to perform private queries in expected time $t$. We consider the public preprocessing setting of Beimel et al. [JoC, 2004] where the hint is publicly available to everyone including the adversary.

We prove that for any single-server computationally secure PIR with preprocessing it must be that $tr = \Omega(n \log n)$ when $r = \Omega(\log n)$. If $r = O(\log n)$, we show that $t = \Omega(n)$. Our lower bound holds even when the scheme errs with probability $1/n^2$ and the adversary’s distinguishing advantage is $1/n$. Our work improves upon the $tr = \Omega(n)$ lower bound of Beimel et al. [JoC, 2004].

We prove our lower bound in a variant of the cell probe model where only accesses to the memory are charged cost and computation and accesses to the hint are free. Our main technical contribution is a novel use of the cell sampling technique (also known as the incompressibility technique) used to obtain lower bounds on data structures. In previous works, this technique only leveraged the correctness guarantees to prove lower bounds even when used for cryptographic primitives. Our work combines the cell sampling technique with the privacy guarantees of PIR to construct a powerful, polynomial-time adversary that is critical to proving our higher lower bounds.
Luca De Feo, Antonin Leroux, Benjamin Wesolowski
ePrint Report
The Deuring correspondence defines a bijection between isogenies of supersingular elliptic curves and ideals of maximal orders in a quaternion algebra. We present a new algorithm to translate ideals of prime-power norm to their corresponding isogenies --- a central task of the effective Deuring correspondence. The new method improves upon the algorithm introduced in 2021 by De Feo, Kohel, Leroux, Petit and Wesolowski as a building block of the SQISign signature scheme. SQISign is the most compact post-quantum signature scheme currently known, but is several orders of magnitude slower than its competitors, the main bottleneck of the computation being the ideal-to-isogeny translation. We implement the new algorithm and apply it to SQISign, achieving a more than twofold speed-up in key generation and signing. Verification time is not directly impacted by the change; however, we also achieve a twofold speed-up through various other improvements.

In a second part of the article, we advance cryptanalysis by showing a very simple distinguisher against one of the assumptions used in SQISign. We present a way to impede the distinguisher through a few changes to the generic KLPT algorithm. We formulate a new assumption capturing these changes, and provide an analysis together with experimental evidence for its validity.
Martin R. Albrecht, Miloš Prokop, Yixin Shen, Petros Wallden
ePrint Report
A fundamental computational problem is to find a shortest non-zero vector in a Euclidean lattice, a problem known as the Shortest Vector Problem (SVP). This problem is believed to be hard even on quantum computers and thus plays a pivotal role in post-quantum cryptography. In this work we explore how (efficiently) Noisy Intermediate Scale Quantum (NISQ) devices may be used to solve SVP. Specifically, we map the problem to that of finding the ground state of a suitable Hamiltonian. In particular, (i) we establish new bounds for lattice enumeration; this allows us to obtain new bounds (resp. estimates) on the number of qubits required per dimension for any lattice (resp. random q-ary lattices) to solve SVP; (ii) we exclude the zero vector from the optimization space by proposing (a) a different classical optimisation loop or, alternatively, (b) a new mapping to the Hamiltonian. These improvements allow us to solve SVP in dimensions up to 28 in a quantum emulation, significantly more than what was previously achieved, even for special cases. Finally, we extrapolate the size of NISQ devices required to solve instances of lattices that are hard even for the best classical algorithms, and find that with $\approx 10^3$ noisy qubits such instances can be tackled.
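To make item (ii) concrete: the classical analogue of the ground-state search is to minimize $\|Bx\|^2$ over bounded integer coefficient vectors while excluding $x = 0$; a brute-force toy sketch with illustrative parameters (the paper instead performs this exclusion in the optimisation loop or in the Hamiltonian itself):

```python
from itertools import product

B = [[7, 1, 3], [2, 9, 4], [1, 2, 11]]        # toy lattice basis (rows)

def norm_sq(x):
    """Squared Euclidean norm of the lattice vector sum_i x_i * B[i]."""
    v = [sum(xi * row[j] for xi, row in zip(x, B)) for j in range(len(B))]
    return sum(c * c for c in v)

# Minimize over coefficients in {-2,...,2}^3, excluding the zero vector.
best = min((x for x in product(range(-2, 3), repeat=3) if any(x)), key=norm_sq)
print(best, norm_sq(best))
```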