International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

10 February 2020

Abida Haque, Alessandra Scafuro
ePrint Report
A $t$-out-of-$N$ threshold ring signature allows $t$ parties to jointly and anonymously compute a signature on behalf of $N$ public keys, selected in an arbitrary manner among the set of all public keys registered in the system.

Existing definitions for $t$-out-of-$N$ threshold ring signatures guarantee security only when the public keys are honestly generated, and many even restrict the ability of the adversary to actively participate in the computation of the signatures. Such definitions do not capture the open settings envisioned for threshold ring signatures, where parties can independently add themselves to the system, and join other parties for the computation of the signature.

Furthermore, known constructions of threshold ring signatures are not provably secure in the post-quantum setting, either because they are based on problems that are not post-quantum secure (e.g., Discrete Log, RSA), or because they rely on transformations, such as Fiat-Shamir, that are not always secure in the quantum random oracle model (QROM).

In this paper, we provide the first definition of $t$-out-of-$N$ threshold ring signatures against {\em active} adversaries who can participate in the system and arbitrarily deviate from the prescribed procedures. Second, we present a post-quantum secure realization based on {\em any} (post-quantum secure) trapdoor commitment, which we prove secure in the QROM. Our construction is black-box and it can be instantiated with any trapdoor commitment, thus allowing the use of a variety of hardness assumptions.
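The construction treats the trapdoor commitment as a black box. As a purely illustrative stand-in (classical and discrete-log-based, so not itself post-quantum), here is a toy Pedersen-style trapdoor commitment in Python, where knowledge of the trapdoor allows equivocating a commitment to any message; all parameters are tiny demo values:

```python
# Toy Pedersen trapdoor commitment over a small prime-order subgroup.
# The paper only requires *some* (post-quantum) trapdoor commitment;
# this discrete-log-based sketch is purely illustrative.

p, q = 2039, 1019          # p = 2q + 1, both prime (tiny demo parameters)
g = 4                      # generator of the order-q subgroup of Z_p^*
t = 777                    # trapdoor: the discrete log of h base g
h = pow(g, t, p)

def commit(m, r):
    """Commit to message m with randomness r: c = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def open_ok(c, m, r):
    return c == commit(m, r)

def equivocate(m, r, m_new):
    """With the trapdoor t, reopen an existing commitment to m_new."""
    return (r + (m - m_new) * pow(t, -1, q)) % q

c = commit(42, 123)
assert open_ok(c, 42, 123)
r2 = equivocate(42, 123, 99)
assert open_ok(c, 99, r2)   # same commitment opens to a different message
```

The equivocation works because $c = g^{m + t r}$, so any pair $(m', r')$ with $m' + t r' \equiv m + t r \pmod{q}$ opens the same commitment.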
Vipul Goyal, Yifan Song
ePrint Report
We study the communication complexity of unconditionally secure MPC over point-to-point channels for corruption threshold t < n/2. We ask the question: "Is it possible to achieve security-with-abort with the same concrete cost as the best-known semi-honest MPC protocol?" While a number of works have focused on improving the concrete efficiency in this setting, the answer to the above question has remained elusive until now.

We resolve the above question in the affirmative by providing a secure-with-abort MPC protocol with the same cost per gate as the best-known semi-honest protocol. Concretely, our protocol only needs 5.5 field elements per multiplication gate per party, which matches (and even improves upon) the corresponding cost of the best-known protocol in the semi-honest setting by Damgård and Nielsen. The previously best-known maliciously secure (with abort) protocols require 12 field elements. An additional feature of our protocol is its conceptual simplicity. Our result is achieved by relying on a novel technique for verifying a batch of multiplication tuples.
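The protocol's verification operates on secret-shared values; as a rough intuition for why a whole batch of multiplication tuples can be checked with a single random challenge, here is an in-the-clear Python sketch of a Schwartz-Zippel-style batched test (an illustration of the general idea only, not the protocol itself):

```python
# In-the-clear sketch of batch-checking multiplication tuples (a_i, b_i, c_i),
# i.e. that c_i = a_i * b_i over a prime field. The MPC protocol performs an
# analogous check on secret-shared values; here everything is public, so this
# only illustrates the soundness of the batched test.
import random

P = 2**61 - 1  # Mersenne prime field (illustrative choice)

def batch_check(tuples):
    r = random.randrange(1, P)
    # One random linear combination of the per-tuple errors a_i*b_i - c_i.
    acc, coeff = 0, 1
    for a, b, c in tuples:
        coeff = coeff * r % P
        acc = (acc + coeff * (a * b - c)) % P
    return acc == 0  # passes wrongly with prob <= n/(P-1) if a tuple is bad

good = [(a, b, a * b % P) for a, b in
        [(random.randrange(P), random.randrange(P)) for _ in range(100)]]
assert batch_check(good)
bad = good[:50] + [(3, 4, 13)] + good[50:]     # one corrupted tuple
assert not batch_check(bad)
```

Soundness follows because the accumulated value is a nonzero polynomial of degree at most $n$ in the challenge $r$, so it vanishes for at most $n$ of the $P-1$ possible challenges.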
Souradyuti Paul, Ananya Shrivastava
ePrint Report
In ACM CCS'17, Choudhuri et al. designed two fair public-ledger-based multi-party protocols (in the malicious model with dishonest majority) for computing an arbitrary function $f$. One of their protocols is based on a trusted hardware enclave $G$ (which can be implemented using Intel SGX hardware) and a public ledger (which can be implemented using a blockchain platform, such as Ethereum). Subsequently, in NDSS'19, a stateless version of the protocol was published. This is the first time (a certain definition of) fairness -- which guarantees that either all parties learn the final output or nobody does -- is achieved without any monetary or computational penalties. However, these protocols are fair only if the underlying core MPC component guarantees both privacy and correctness. While privacy is easy to achieve (using a secret sharing scheme), correctness requires expensive operations (such as ZK proofs and commitment schemes). We improve on this work in three different directions: attack, design and performance.

Our first major contribution is building practical attacks demonstrating that if correctness is not satisfied, then the fairness property of the aforementioned protocols collapses. Next, we design two new protocols -- stateful and stateless -- based on a public ledger and trusted hardware that are resistant to the aforementioned attacks and several orders of magnitude more efficient (in both time and memory) than the existing ones, which we achieve by eliminating ZK proofs and commitment schemes from the design.

Last but not least, we implemented the core MPC part of our protocols using the SPDZ-2 framework to demonstrate the feasibility of a practical implementation.
Dario Fiore, Anca Nitulescu, David Pointcheval
ePrint Report
We consider the setting in which an untrusted server stores a collection of data and is asked to compute a function over it. In this scenario, we aim for solutions where the untrusted server does not learn information about the data and is prevented from cheating. This problem is addressed by verifiable and private delegation of computation, proposed by Gennaro, Gentry and Parno (CRYPTO’10), a notion that is close to both the active areas of homomorphic encryption and verifiable computation (VC). However, in spite of the efficiency advances in the respective areas, VC protocols that guarantee privacy of the inputs are still expensive. The only exception is a protocol by Fiore, Gennaro and Pastro (CCS’14) that supports arithmetic circuits of degree at most 2.

In this paper we propose new efficient protocols for VC on encrypted data that improve over the state-of-the-art solution of Fiore et al. in multiple aspects. First, we can support computations of degree higher than 2. Second, we achieve public delegatability and public verifiability whereas Fiore et al. need the same secret key to encode inputs and verify outputs. Third, we achieve a new property that guarantees that verifiers can be convinced about the correctness of the outputs without learning information on the inputs.

The key tool to obtain our new protocols is a new SNARK that can efficiently handle computations over a quotient polynomial ring, such as the one used by Ring-LWE somewhat homomorphic encryption schemes. This SNARK in turn relies on a new commit-and-prove SNARK for proving evaluations on the same point of several committed polynomials. We propose a construction of this scheme under an extractability assumption over bilinear groups in the random oracle model.
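To make the setting concrete, the following minimal Python sketch shows arithmetic in a quotient polynomial ring of the kind the new SNARK handles: the negacyclic ring $\mathbb{Z}_q[x]/(x^n+1)$ used by Ring-LWE-style homomorphic schemes. The parameters are toy values.

```python
# Minimal arithmetic in the quotient ring Z_q[x]/(x^N + 1) used by
# Ring-LWE style homomorphic schemes; the paper's SNARK proves statements
# about computations over exactly this kind of ring. Toy parameters only.
N, Q = 8, 97  # ring degree and coefficient modulus (illustrative)

def ring_mul(a, b):
    """Multiply two degree-<N polynomials modulo x^N + 1 and modulo Q."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:              # x^N = -1, so wrap around with a sign flip
                res[k - N] = (res[k - N] - ai * bj) % Q
    return res

# (1 + x) * x^7 = x^7 + x^8 = x^7 - 1  (mod x^8 + 1)
a = [1, 1, 0, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 0, 0, 1]
assert ring_mul(a, b) == [96, 0, 0, 0, 0, 0, 0, 1]  # -1 ≡ 96 mod 97
```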
Hamidreza Amini Khorasgani, Hemanta K. Maji, Mingyuan Wang
ePrint Report
There is significant interest in securely computing functionalities with guaranteed output delivery, a.k.a. fair computation. For example, consider a 2-party $n$-round coin-tossing protocol in the information-theoretic setting. Even if one party aborts during the protocol execution, the other party has to receive her outcome. Towards this objective, in every round, the sender of that round's message preemptively prepares a defense coin, which is her output if the other party aborts prematurely. Cleve and Impagliazzo (1993), Beimel, Haitner, Makriyannis, and Omri (2018), and Khorasgani, Maji, and Mukherjee (2019) show that a fail-stop adversary can alter the distribution of the outcome by $\Omega(1/\sqrt{n})$.

However, preparing the defense coin is computationally expensive. So, the parties would prefer to update their defense coins only sparingly, or when indispensable. Furthermore, if parties delegate their coin-tossing task to an external server, it is infeasible for them to stay abreast of the progress of the protocol and keep their defense coins in sync with the protocol's evolution. Therefore, this paper considers lazy coin-tossing protocols, where parties update their defense coins only a total of $d$ times during the protocol execution. Is it possible for a fair coin-tossing protocol using only $d \ll n$ defense coin updates to be robust to an $\mathcal{O}(1/\sqrt{n})$ change in its output distribution?

This paper proves that robustness to an $\mathcal{O}(1/\sqrt{n})$ change in the output distribution necessarily requires defense complexity $d = \Omega(n)$, thus ruling out the possibility mentioned above. More generally, our work proves that a fail-stop adversary can bias the outcome distribution of a coin-tossing protocol by $\Omega(1/\sqrt{d})$, a qualitatively better attack than the previous state of the art when $d = o(n)$. That is, the defense complexity of a coin-tossing protocol, not its round complexity, determines its security. We emphasize that the rounds in which parties calculate their defense coins need not be fixed a priori; they can depend on the protocol's evolution itself. Finally, we translate this fail-stop adversarial attack into black-box separation results for lazy coin-tossing protocols.

The proof relies on an inductive argument using a carefully crafted potential function to precisely account for the quality of the best attack on coin-tossing protocols. Previous approaches fail when the protocol evolution reveals information about the defense coins of both parties, which is inevitable in lazy coin-tossing protocols. Our analysis decouples the defense complexity of a coin-tossing protocol from its round complexity to guarantee fail-stop attacks whose performance depends only on the defense complexity of the protocol, irrespective of its round complexity.
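As a toy illustration of the power of fail-stop aborts (this is not the paper's attack, which applies to arbitrary protocols and charges the adversary per defense update), the following Python Monte Carlo shows a single well-timed abort biasing a naive majority-based coin toss whose defense coin is simply the majority of the coins seen so far:

```python
# Toy Monte Carlo: one well-timed fail-stop abort biases a naive n-coin
# majority protocol. The honest party's defense at the midpoint is the
# majority of the coins seen so far; the paper studies general protocols
# and measures attacks against the number of defense *updates* d.
import random

def run(n, attack):
    coins = [random.randint(0, 1) for _ in range(n)]
    half = n // 2
    s = sum(coins[:half])
    if attack and s > half - s:
        return 1                      # abort: the honest party outputs its
                                      # stale defense, the majority so far
    total = sum(coins)
    if 2 * total == n:
        return random.randint(0, 1)   # tie broken by a fresh coin
    return 1 if 2 * total > n else 0

def estimate(n, attack, trials=20000):
    return sum(run(n, attack) for _ in range(trials)) / trials

honest, attacked = estimate(64, False), estimate(64, True)
assert abs(honest - 0.5) < 0.02       # honest execution is (nearly) fair
assert attacked - honest > 0.05       # the abort noticeably biases toward 1
```

The attack succeeds precisely because the defense coin is stale relative to the rest of the execution, which is the tension the paper's defense-complexity lower bound formalizes.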
Elette Boyle, Ran Cohen, Aarushi Goel
ePrint Report
Byzantine agreement (BA), the task of $n$ parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of virtually every multi-party cryptographic protocol. Understanding the required communication complexity of BA as a function of $n$ is the subject of a rich line of research.

Interestingly, in protocols with the best overall communication complexity, the communication demands on the parties are highly unbalanced: the amortized cost is $\tilde O(1)$ bits per party, but some parties must send $\Omega(n)$ bits (e.g., Braud-Santoni et al., PODC'13). In the best-known balanced protocols, the overall communication is sub-optimal, with each party communicating $\tilde O(\sqrt{n})$ bits (e.g., King et al., ICDCN'11).

In this work, we ask whether asymmetry is inherent for optimizing total communication. In particular, is BA possible where each party sends and receives only $\tilde O(1)$ bits? Our contributions in this line are as follows:

1) We identify a cryptographic primitive---succinctly reconstructed distributed signatures (SRDS)---that suffices for constructing $\tilde O(1)$ balanced BA. We provide two constructions of SRDS: from one-way functions in a trustfully generated Public-Key Infrastructure (PKI) model, and from a strong form of succinct non-interactive arguments of knowledge in a weaker PKI model.

2) The SRDS-based BA follows a paradigm of boosting from "almost-everywhere" agreement (where $1-o(1)$ fraction of parties agree) to full agreement, and does so in a single round. Complementarily, we prove that PKI setup and cryptographic assumptions (alternatively, an even stronger, correlated-randomness setup assumption) are necessary for such protocols in which every party sends $o(n)$ messages.

3) We further explore connections between a natural approach toward attaining SRDS and average-case succinct non-interactive argument systems for a particular type of "Subset-$f$" problems (generalizing Subset-Sum and Subset-Product).

Collectively, our results provide an initial mapping for the feasibility landscape of $\tilde O(1)$ balanced BA, including new approaches forward, as well as limitations and barriers. Our approach yields the first two BA protocols with $\tilde O(1)$ balanced communication, offering a tradeoff between setup and cryptographic assumptions, and answering an open question presented by King and Saia (DISC'09).
Juliane Krämer, Patrick Struck
ePrint Report
The security proofs of post-quantum cryptographic schemes often consider only classical adversaries. Therefore, whether such schemes are really post-quantum secure remains unknown until the proofs take quantum adversaries into account. Switching to a quantum adversary might require adapting the security notion. In particular, post-quantum security proofs for schemes which use random oracles have to be in the quantum random oracle model (QROM), while classical security proofs are in the random oracle model (ROM). We remedy this state of affairs by introducing a framework for establishing the post-quantum security of public-key encryption schemes which use random oracles. We define a class of encryption schemes, called oracle-simple, and identify game hops which are used to prove such schemes secure in the ROM. For these game hops, we state simple and sufficient conditions for validating that a proof also holds in the QROM. The strength of our framework lies in its simplicity, its generality, and its applicability. We demonstrate this by applying it to the code-based encryption scheme ROLLO (Round 2 NIST candidate) and the lattice-based encryption scheme LARA (FC 2019). This proves that both schemes are post-quantum secure, which had not been shown before.
University of Tartu, Estonia
Job Posting
The Cryptography, Coding and Information Transmission Group at the University of Tartu, Estonia, is looking for a doctorate student for a project on coding for flash memories. The ideal candidate will have strength in one or more of the following areas:
• Any area related to coding theory
• Combinatorics
• Algebra
• Probability theory
• Algorithms and programming.
The income is at least 1060 euros per month (net), plus social benefits. Some travel money will also be available. The cost of living in Estonia is relatively low; see e.g. https://www.ut.ee/en/welcome/international-students and http://www.expatistan.com/cost-of-living . A successful candidate should:
• Hold a Master’s degree or equivalent before the start of studies (Fall 2020)
• Have a strong background in mathematics, computer science or a related field.
To apply, please submit the following documents (by email):
• Application letter
• Research statement (preliminary plan)
• Curriculum vitae
• Publication list or example publication / thesis (optional)
• Copies of documents about academic degrees
• Two letters of reference (make sure they reach us by the application deadline).
Deadline for applications: 24th February 2020. Do not hesitate to contact us in case of questions. Contact: Ago-Erik Riet, e-mail agoerik at ut.ee, see https://www.etis.ee/Portal/Persons/Display/fa2b849b-a7cb-4463-aefd-01603d27ebc6?tabId=CV_ENG , https://www.ut.ee/en/kontakt/matemaatika-statistika-instituut and http://cit.cs.ut.ee/ . For a formal advertisement please also see: https://www.ut.ee/en/phd-mathematics .

Closing date for applications: 24 February 2020

Contact: Ago-Erik Riet


07 February 2020

CISPA, Helmholtz Center for Cybersecurity
Job Posting
In the context of the Almacrypt ERC project, CISPA has several postdoctoral positions related to the study of cryptographically hard problems.
The proposed topics are discrete logarithms and factoring, lattice reduction, multivariate equation systems, elliptic curves, and combinatorial problems of cryptographic relevance.
Strong applicants in symmetric cryptography may also be considered.
Candidates should have a PhD in cryptology, number theory, or theoretical computer science.
The application should contain a research program for the desired length of the position (up to three years).
All applications should be submitted through the provided web interface.

Closing date for applications:

Contact: Antoine Joux

More information: https://jobs.cispa.saarland/jobs/detail/post-doc-in-mathematical-and-or-algorithmic-cryptography-38

CISPA, Helmholtz Center for Cybersecurity
Job Posting
In the context of the Almacrypt ERC project, CISPA has several PhD positions related to the study of cryptographically hard problems. The proposed topics are discrete logarithms and factoring, lattice reduction, multivariate equation systems, elliptic curves, and combinatorial problems of cryptographic relevance.
Candidates should have a strong background in mathematics or theoretical computer science. Knowledge of cryptology would be a plus.
The application should specify (and justify) the candidate's specific domain of interest.
All applications should be submitted through the provided web interface.

Closing date for applications:

Contact: Antoine Joux

More information: https://jobs.cispa.saarland/jobs/detail/phd-in-mathematical-and-or-algorithmic-cryptography-39


06 February 2020

Ward Beullens, Cyprien Delpech de Saint Guilhem
ePrint Report
We introduce an efficient post-quantum signature scheme that relies on the one-wayness of the Legendre PRF. This "LEGendRe One-wAyness SignaTure" (LegRoast) builds upon the MPC-in-the-head technique to construct an efficient zero-knowledge proof, which is then turned into a signature scheme with the Fiat-Shamir transform. Unlike many other Fiat-Shamir signatures, the security of LegRoast can be proven without using the forking lemma, and this leads to a tight (classical) ROM proof. We also introduce a generalization that relies on the one-wayness of higher-power residue characters; the "POwer Residue ChaRacter One-wAyness SignaTure" (PorcRoast).

LegRoast outperforms existing MPC-in-the-head-based signatures (most notably Picnic/Picnic2) in terms of signature size and speed. Moreover, PorcRoast outperforms LegRoast by a factor of 2 in both signature size and signing time. For example, one of our parameter sets targeting NIST security level I results in a signature size of 7.2 KB and a signing time of 2.8ms. This makes PorcRoast the most efficient signature scheme based on symmetric primitives to date.
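For intuition, the Legendre PRF underlying LegRoast's one-wayness is easy to state: the output bit is the quadratic residuosity of $K + x$ modulo a prime. A minimal Python sketch, using an arbitrary toy prime rather than LegRoast's actual parameters:

```python
# The Legendre PRF: L_K(x) is the quadratic-residue bit of K + x mod a
# prime P. One-wayness means recovering K from many output bits is
# conjectured hard. The prime below is an illustrative choice only.
P = 2**61 - 1  # a Mersenne prime (toy; not LegRoast's parameter set)

def legendre_bit(k, x):
    """Return 0 if k+x is a quadratic residue mod P, else 1 (0 maps to 0)."""
    ls = pow((k + x) % P, (P - 1) // 2, P)   # Euler's criterion: 1, P-1, or 0
    return 1 if ls == P - 1 else 0

key = 123456789
stream = [legendre_bit(key, x) for x in range(16)]  # 16 PRF output bits
assert set(stream) <= {0, 1}
# Sanity check: perfect squares are always residues.
assert legendre_bit(0, 49) == 0 and legendre_bit(0, 4) == 0
```

PorcRoast replaces the quadratic character with higher-power residue characters, which packs more than one bit of output per field evaluation.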
Véronique Cortier, Joseph Lallemand, Bogdan Warinschi
ePrint Report
We propose a framework for the analysis of electronic voting schemes in the presence of malicious bulletin boards. We identify a spectrum of notions where the adversary is allowed to tamper with the bulletin board in ways that reflect practical deployment and usage considerations. To clarify the security guarantees provided by the different notions we establish a relation with simulation-based security with respect to a family of ideal functionalities. The ideal functionalities make clear the set of authorised attacker capabilities which makes it easier to understand and compare the associated levels of security. We then leverage this relation to show that each distinct level of ballot privacy entails some distinct form of individual verifiability. As an application, we study three protocols of the literature (Helios, Belenios, and Civitas) and identify the different levels of privacy they offer.
Shi-Feng Sun, Amin Sakzad, Ron Steinfeld, Joseph Liu, Dawu Gu
ePrint Report
We revisit the method of designing public-key puncturable encryption schemes and present a generic conversion by leveraging the techniques of distributed key distribution and revocable encryption. In particular, we first introduce a refined version of identity-based revocable encryption, named key-homomorphic identity-based revocable key encapsulation mechanism with extended correctness. Then, we propose a generic construction of a puncturable key encapsulation mechanism from the former by merging in the idea of distributed key distribution. Compared to the state of the art, our generic construction supports an unbounded number of punctures and multiple tags per message, thus achieving more fine-grained revocation of decryption capability. Further, it does not rely on random oracles, does not suffer from a non-negligible correctness error, and results in a variety of efficient schemes with distinct features. More precisely, we obtain the first scheme with very compact ciphertexts in the standard model, and the first scheme with support for both an unbounded number of tags per ciphertext and unbounded punctures, as well as a constant-time puncture operation. Moreover, we obtain a comparable scheme proven secure under the standard DBDH assumption, which enjoys both faster encryption and decryption than previous works based on the same assumption, especially when the number of tags associated with the ciphertext is large.
Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Enoch Peserico, Elaine Shi
ePrint Report
In tight compaction, one is given an array of balls some of which are marked 0 and the rest are marked 1. The output of the procedure is an array that contains all of the original balls except that now the 0-balls appear before the 1-balls. In other words, tight compaction is equivalent to sorting the array according to 1-bit keys (not necessarily maintaining order within same-key balls). Tight compaction is not only an important algorithmic task by itself, but its oblivious version has also played a key role in recent constructions of oblivious RAM compilers.

We present an oblivious deterministic algorithm for tight compaction that, for input arrays of $n$ balls, requires $O(n)$ total work and $O(\log n)$ depth. Our algorithm is in the EREW Parallel-RAM model (i.e., the most restrictive PRAM model), and importantly we achieve asymptotic optimality in both total work and depth. To the best of our knowledge, no earlier work, even when allowing randomization, achieves optimality in both total work and depth.
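Functionally, tight compaction is easy to state; the entire difficulty lies in achieving it obliviously and deterministically within $O(n)$ work and $O(\log n)$ depth. A non-oblivious Python reference that only pins down the input/output behaviour:

```python
# Functional (non-oblivious) reference for tight compaction: move all
# 0-marked balls before all 1-marked balls in O(n) work. The paper's
# contribution is doing this *obliviously* and deterministically with
# O(n) work and O(log n) depth; this sketch only fixes the semantics.
def tight_compact(balls):        # balls: list of (key_bit, payload) pairs
    zeros = [b for b in balls if b[0] == 0]
    ones  = [b for b in balls if b[0] == 1]
    return zeros + ones          # order within equal keys need not be kept

arr = [(1, 'a'), (0, 'b'), (1, 'c'), (0, 'd')]
out = tight_compact(arr)
assert [k for k, _ in out] == [0, 0, 1, 1]
assert sorted(out) == sorted(arr)   # output is a permutation of the input
```

An oblivious algorithm must produce the same result while its memory-access pattern reveals nothing about which balls carry which key bit.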
Ali El Kaafarani, Shuichi Katsumata, Federico Pintore
ePrint Report
Recently, Beullens, Kleinjung, and Vercauteren (Asiacrypt'19) provided the first practical isogeny-based digital signature, obtained from the Fiat-Shamir (FS) paradigm. They worked with the CSIDH-512 parameters, relying on a new record class group computation. However, as with all standard FS signatures, the security proof is highly non-tight, and the concrete parameters are set under the heuristic that the only way to attack the scheme is by finding collisions for a hash function. In this paper, we propose an FS-style signature scheme, called Lossy CSI-FiSh, constructed using the CSIDH-512 parameters and with a security proof based on the "Lossy Keys" technique introduced by Kiltz, Lyubashevsky and Schaffner (Eurocrypt'18). Lossy CSI-FiSh is provably secure under the same assumption which underlies the security of the key exchange protocol CSIDH (Castryck et al., Asiacrypt'18) and is almost as efficient as CSI-FiSh. For instance, aiming for small signature size, our scheme is expected to take around 800 ms to sign/verify while producing signatures of size approximately 280 bytes. This is only twice as slow as CSI-FiSh, with similar signature size for the same parameter set. As an additional benefit, our scheme is by construction secure in both the classical and the quantum random oracle model.
Alberto Garoffolo, Dmytro Kaidalov, Roman Oliynykov
ePrint Report
Sidechains are an appealing innovation devised to enable blockchain scalability and extensibility. The basic idea is simple yet powerful: construct a parallel chain (a sidechain) with the desired features, and provide a way to transfer coins between the mainchain and the sidechain.

In this paper, we introduce Zendoo, a construction for Bitcoin-like blockchain systems that allows the creation of, and communication with, sidechains of different types without knowing their internal structure. We consider a parent-child relationship between the mainchain and sidechains, where sidechain nodes directly observe the mainchain while mainchain nodes only observe cryptographically authenticated certificates from sidechain maintainers. We use zk-SNARKs to construct a universal verifiable transfer mechanism that is used by sidechains.

Moreover, we propose a specific sidechain construction, named Latus, that can be built on top of this infrastructure, and realizes a decentralized verifiable blockchain system for payments. We leverage the use of recursive composition of zk-SNARKs to generate succinct proofs of sidechain state progression that are used to generate certificates’ validity proofs. This allows the mainchain to efficiently verify all operations performed in the sidechain without knowing any details about those operations.
Najmeh Soroush, Vincenzo Iovino, Alfredo Rial, Peter B. Roenne, Peter Y. A. Ryan
ePrint Report
In the standard setting of functional encryption (FE), we assume that both the Central Authority (CA) and the encryptors run their respective algorithms faithfully. Badrinarayanan et al. [ASIACRYPT 2016] put forth the concept of verifiable FE, which essentially guarantees that dishonest encryptors and authorities, even when colluding together, are not able to generate ciphertexts and tokens that give inconsistent results. They also provide a compiler turning any perfectly correct FE into a verifiable FE, but do not give efficient constructions.

In this paper we improve on this situation by considering Inner-Product Encryption (IPE), which is a special case of functional encryption and a primitive that has attracted wide interest from both practitioners and researchers in the last decade. Specifically, we construct the first efficient verifiable IPE (VIPE) scheme for the inner-product functionality of Katz, Sahai, and Waters [EUROCRYPT 2008]. To instantiate the general construction of Badrinarayanan et al. we need to solve several additional challenges. In particular, we construct the first efficient perfectly correct IPE scheme. Our VIPE satisfies unconditional verifiability, whereas its privacy relies on the DLin assumption.
Hao Chen, Ilia Iliashenko, Kim Laine
ePrint Report
We demonstrate how to reduce the memory overhead of somewhat homomorphic encryption (SHE) while computing on numerical data. We design a hybrid SHE scheme that exploits the packing algorithm of the HEAAN scheme and the variant of the FV scheme by Bootland et al. The ciphertext size of the resulting scheme is 3-18 times smaller than in HEAAN to compute polynomial functions of depth 4 while packing a small number of data values. Furthermore, our scheme has smaller ciphertexts even with larger packing capacities (256-2048 values).
Léo Ducas, Thijs Laarhoven, Wessel P.J. van Woerden
ePrint Report
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways:

- We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis--Laarhoven--De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019].

- We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities.

- We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker--Gama--Joux [Cryptology ePrint Archive, 2015]. Using $2^{0.185d + o(d)}$ memory, we can solve a single CVPP instance in $2^{0.264d + o(d)}$ time.

- We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using $2^{0.208d + o(d)}$ memory, we can heuristically solve CVPP instances in $2^{0.234d + o(d)}$ amortized time, for batches of size at least $2^{0.058d + o(d)}$.

Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
Zvika Brakerski, Nico Döttling
ePrint Report
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape. In many of its applications the so-called "LWE secret" is not sampled uniformly, but comes from a distribution with some min-entropy. This variant, known as "Entropic LWE", has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius.

In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that it is essentially tight. Tightness is shown unconditionally for highly composite moduli, and via a black-box impossibility result for arbitrary moduli.

Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself. This is simply the probability of recovering a random sample $s$ from this distribution, given $s+e$, where $e$ is Gaussian noise (i.e., the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive Entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
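The lossiness quantity can be estimated empirically for a candidate secret distribution. A hedged Python sketch (toy parameters, per-coordinate nearest-neighbour decoding) that measures the probability of recovering $s$ from $s+e$:

```python
# Monte Carlo estimate of the "lossiness" quantity: the probability of
# recovering a secret s from s + e with Gaussian noise e, i.e. how well
# the secret distribution performs as an error-correcting code. Toy
# setting: each coordinate of s is 0 or `spacing`, decoded by rounding
# to the nearer value; all parameters are illustrative.
import random

def recover_rate(dim, spacing, sigma, trials=5000):
    hits = 0
    for _ in range(trials):
        s = [random.choice((0, spacing)) for _ in range(dim)]
        noisy = [si + random.gauss(0, sigma) for si in s]
        dec = [0 if y < spacing / 2 else spacing for y in noisy]
        hits += (dec == s)
    return hits / trials

# Well-separated secrets survive the noise; tightly packed ones do not.
assert recover_rate(8, spacing=8, sigma=1.0) > 0.9
assert recover_rate(8, spacing=1, sigma=1.0) < 0.5
```

A high recovery rate corresponds to a secret distribution that is a good error-correcting code for Gaussian noise, exactly the regime the paper's characterization addresses.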