International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

  • via RSS feed
  • via Twitter
  • via Weibo
  • via Facebook

16 July 2020

Emanuele Strieder, Christoph Frisch, Michael Pehl
ePrint Report
Physical Unclonable Functions (PUFs) are used in various key-generation schemes and protocols. Such schemes are deemed to be secure even for PUFs with challenge-response behavior, as long as no responses and no reliability information about the PUF are exposed. This work, however, reveals a pitfall in these constructions: When using state-of-the-art helper data algorithms to correct noisy PUF responses, an attacker can exploit the publicly accessible helper data and challenges. We show that with this public information and the knowledge of the underlying error correcting code, an attacker can break the security of the system: The redundancy in the error correcting code reveals machine-learnable features and labels. Learning these features and labels results in a predictive model for the dependencies between different challenge-response pairs (CRPs) without direct access to the actual PUF response. We provide results based on simulated data of a k-SUM PUF model and an Arbiter PUF model. The analysis reveals that especially the frequently used repetition code is vulnerable: In this case, the observation of only 800 challenges and helper data bits already suffices to reduce the entropy of the key to one bit. The analysis also shows that even other linear block codes like the BCH, the Reed-Muller, or the Single Parity Check code are affected by the problem. The code-dependent insights we gain from the analysis allow us to suggest mitigation strategies for the identified attack. While the shown vulnerability brings Machine Learning (ML) one step closer to realistic attacks on key-storage systems with PUFs, our analysis also allows for a better understanding and evaluation of existing approaches and protocols with PUFs. Therefore, it brings the community a step further towards a more complete leakage assessment of PUFs.
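To illustrate why the repetition code leaks, consider the standard code-offset construction, where the public helper data is the XOR of the PUF response with a codeword. In the minimal Python sketch below (our own illustration, not the authors' code), the unknown key bit cancels out of any XOR of two helper-data bits, so the public data directly reveals pairwise relations between response bits, exactly the kind of labels a model can be trained on.

    import secrets

    def rep_encode(bit, n):
        # n-bit repetition code: the single key bit is copied n times
        return [bit] * n

    def helper_data(response, key_bit):
        # code-offset construction: helper data = PUF response XOR codeword
        return [r ^ c for r, c in zip(response, rep_encode(key_bit, len(response)))]

    n = 5
    response = [secrets.randbelow(2) for _ in range(n)]   # PUF response bits
    key_bit = secrets.randbelow(2)                        # secret key bit
    h = helper_data(response, key_bit)                    # publicly stored

    # The unknown key bit cancels out of any XOR of two helper bits, so the
    # public helper data reveals pairwise relations between response bits:
    for i in range(n - 1):
        assert h[i] ^ h[i + 1] == response[i] ^ response[i + 1]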
Michele Ciampi, Nikos Karayannidis, Aggelos Kiayias, Dionysis Zindros
ePrint Report
Software updates for blockchain systems become a real challenge when they impact the underlying consensus mechanism. The activation of such changes might jeopardize the integrity of the blockchain by resulting in chain splits. Moreover, the software update process should be handed over to the community, and this means that the blockchain should support updates without relying on a trusted party. In this paper, we introduce the notion of updatable blockchains and show how to construct blockchains that satisfy this definition. Informally, an updatable blockchain is a secure blockchain that additionally allows its protocol to be updated while preserving the history of the chain. In this work, we focus only on the processes that allow securely switching from one blockchain protocol to another, assuming that the blockchain protocols are correct. That is, we do not aim at providing a mechanism that allows reaching consensus on what the code of the new blockchain protocol is. We just assume that such a mechanism exists (like the one proposed at NDSS 2019 by Zhang et al.) and show how to securely go from the old protocol to the new one. The contribution of this paper can be summarized as follows. We provide the first formal definition of updatable ledgers and propose two compilers. These compilers take a blockchain and turn it into an updatable blockchain. The first compiler requires the structure of the current and the updated blockchain to be very similar (only the structure of the blocks can be different), but it allows for a simpler and more efficient update process. The second compiler that we propose is very generic (i.e., it makes few assumptions on the similarities between the structure of the current blockchain and the updated blockchain). The drawback of this compiler is that it requires the new blockchain to be resilient against a specific adversarial behaviour and requires all the honest parties to be online during the update process. However, we show how to get rid of the latter requirement (the honest parties being online during the update) in the case of proof-of-work and proof-of-stake ledgers.
Keita Emura, Atsushi Takayasu, Yohei Watanabe
ePrint Report
Revocable identity-based encryption (RIBE) is an extension of IBE with an efficient key revocation mechanism. Revocable hierarchical IBE (RHIBE) is its further extension with key delegation functionality. Although there are various adaptively secure pairing-based RIBE schemes, all known hierarchical analogs only satisfy selective security. In addition, the most efficient known adaptively secure RIBE and selectively secure RHIBE schemes rely on non-standard assumptions, which are referred to as the augmented DDH assumption and $q$-type assumptions, respectively. In this paper, we propose a simple but effective design methodology for RHIBE schemes. We provide a generic design framework for RHIBE based on an HIBE scheme with a few properties. Fortunately, several state-of-the-art pairing-based HIBE schemes satisfy these properties. In addition, our construction preserves the sizes of master public keys, ciphertexts, and decryption keys, as well as the complexity assumptions of the underlying HIBE scheme. Thus, we obtain the first RHIBE schemes with adaptive security under the standard $k$-linear assumption. We prove adaptive security by developing a new proof technique for RHIBE. Due to the compactness-preserving construction, the proposed R(H)IBE schemes have efficiency similar to the most efficient existing schemes.
Klaus Kursawe
ePrint Report
The advent of decentralized trading markets introduces a number of new challenges for consensus protocols. In addition to the 'usual' attacks - a subset of the validators trying to prevent agreement - there is now the possibility of financial fraud, which can abuse properties not normally considered critical in consensus protocols. We investigate the issues of attackers manipulating or exploiting the order in which transactions are scheduled in the blockchain. More concretely, we look into relative order fairness, i.e., ways we can assure that the relative order of transactions is fair. We show that one of the more intuitive definitions of fairness is impossible to achieve. We then present Wendy, a group of low overhead protocols that can implement different concepts of fairness. Wendy acts as an additional widget for an existing blockchain and is largely agnostic to the underlying blockchain and its security assumptions. Furthermore, it is possible to apply the protocol to only a subset of the transactions, and thus run several independent fair markets on the same chain.
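As a rough illustration of what relative order fairness asks for (a sketch of our own, not Wendy's actual protocols), the following Python snippet checks a final schedule against the validators' local receive orders: if enough validators saw transaction a before b, then a must not be scheduled after b. The impossibility result mentioned above stems from the fact that the "seen before" relation can contain Condorcet-style cycles, so no total order can satisfy the strongest variant of this condition.

    from itertools import combinations

    def fair_order_violations(local_orders, final_order, threshold):
        """Count pairs (a, b) where at least `threshold` validators saw a
        before b, yet the final order schedules b before a. (Illustrative
        check only; Wendy's actual protocols are more involved.)"""
        pos = {tx: i for i, tx in enumerate(final_order)}
        violations = []
        for a, b in combinations(final_order, 2):
            saw_a_first = sum(o.index(a) < o.index(b) for o in local_orders)
            if saw_a_first >= threshold and pos[a] > pos[b]:
                violations.append((a, b))
            if len(local_orders) - saw_a_first >= threshold and pos[b] > pos[a]:
                violations.append((b, a))
        return violations

    # Three validators' local receive orders. All three saw A before B, so a
    # final order that schedules B first violates fairness for ('A', 'B').
    locals_ = [['A', 'B', 'C'], ['A', 'B', 'C'], ['A', 'C', 'B']]
    print(fair_order_violations(locals_, ['B', 'A', 'C'], threshold=2))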
Linru Zhang, Xiangning Wang, Yuechen Chen, Siu-Ming Yiu
ePrint Report
We propose a leakage-resilient inner-product functional encryption scheme (IPFE) in the bounded-retrieval model (BRM). This is the first leakage-resilient functional encryption scheme in the BRM. In our leakage model, an adversary is allowed to obtain at most $l$ bits of knowledge from each secret key. Our scheme can flexibly tolerate an arbitrary leakage bound $l$ by only increasing the size of secret keys, while keeping all other parts small and independent of $l$.

Technically, we develop a new notion: the inner-product hash proof system (IP-HPS). An IP-HPS is a variant of traditional hash proof systems in which decapsulation outputs an inner-product value instead of the encapsulated key. We propose an IP-HPS scheme under the DDH assumption. We then show how to make an IP-HPS scheme tolerate $l'$ bits of leakage, where $l'$ can be made arbitrarily large by only increasing the size of secret keys. Finally, we show how to build a leakage-resilient IPFE in the BRM with leakage bound $l=\frac{l'}{n}$ from our IP-HPS scheme.
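For readability, the functionality realized here is the standard inner-product one (standard formulation, notation ours): a secret key for a vector $\vec{y}$ decrypts an encryption of $\vec{x}$ to the inner product and nothing more,

$$\mathsf{Dec}(\mathsf{sk}_{\vec{y}}, \mathsf{Enc}(\mathsf{mpk}, \vec{x})) = \langle \vec{x}, \vec{y} \rangle,$$

and in the BRM this must remain secure even after the adversary leaks up to $l$ bits from each issued key.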
Jeroen Delvaux
ePrint Report
In an article from HOST 2018, which appears in extended form in the Cryptology ePrint Archive, Baksi, Bhasin, Breier, Khairallah, and Peyrin proposed the tweak-in-plaintext method to protect block ciphers against differential fault analysis (DFA). We argue that this method lacks existential motivation, as neither of its two envisioned use cases, i.e., the electronic codebook (ECB) and the cipher block chaining (CBC) modes of operation, is competitive. Furthermore, in a variant of the method where nonces are generated using a linear-feedback shift register (LFSR), several security problems were not anticipated. Finally, we analyze the security level against a brute-force DFA more rigorously than in the original work.
Willy Susilo, Dung Hoang Duong, Huy Quoc Le, Josef Pieprzyk
ePrint Report
Puncturable encryption (PE), proposed by Green and Miers at IEEE S&P 2015, is a kind of public key encryption that allows recipients to revoke individual messages by repeatedly updating decryption keys without communicating with senders. PE is an essential tool for constructing many interesting applications, such as asynchronous messaging systems, forward-secret zero round-trip time protocols, public-key watermarking schemes and forward-secret proxy re-encryptions. This paper revisits PE from the observation that the puncturing property can be implemented as efficiently computable functions. From this view, we propose a generic PE construction from fully key-homomorphic encryption augmented with a key delegation mechanism (DFKHE), due to Boneh et al. at Eurocrypt 2014. We show that our PE construction enjoys selective security under chosen plaintext attacks (which can be converted into adaptive security with some efficiency loss) based on that of DFKHE in the standard model. Building on this framework, we obtain the first post-quantum secure PE instantiation that is based on the learning with errors problem and selectively secure under chosen plaintext attacks (CPA) in the standard model. We also discuss the possibility of modifying our framework to support an unbounded number of ciphertext tags, inspired by the work of Brakerski and Vaikuntanathan at CRYPTO 2016.
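To make the interface concrete, the toy Python model below (names and structure are our own; no actual cryptography is performed) captures the semantics PE demands: puncturing is a purely local key update, and decryption must fail exactly for ciphertexts carrying a punctured tag.

    # A toy, purely illustrative model of the puncturable-encryption interface.
    class PuncturableKey:
        def __init__(self):
            self.punctured = set()   # tags this key can no longer decrypt

        def puncture(self, tag):
            # Key update is local: no communication with senders is needed.
            self.punctured.add(tag)

    def decrypt(key, ciphertext_tags, plaintext):
        # Decryption must fail whenever the ciphertext carries a punctured tag.
        if key.punctured & set(ciphertext_tags):
            return None
        return plaintext

    sk = PuncturableKey()
    assert decrypt(sk, {"msg-42"}, "hello") == "hello"
    sk.puncture("msg-42")                      # revoke that one message
    assert decrypt(sk, {"msg-42"}, "hello") is None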
Loïc Masure, Nicolas Belleville, Eleonora Cagli, Marie-Angela Cornelie, Damien Couroussé, Cécile Dumas, Laurent Maingault
ePrint Report
Code polymorphism is a way to efficiently and automatically apply hiding countermeasures against the leakage of sensitive information, protecting cryptographic primitives against side-channel attacks (SCA) by layman adversaries. Yet, recent improvements in SCA, involving more powerful threat models, e.g., using deep learning, have emphasized the weaknesses of some hiding counter-measures. This raises two questions. On the one hand, the security of code polymorphism against more powerful attackers, which has never been addressed so far, might be affected. On the other hand, using deep learning SCA on code polymorphism would require scaling the state-of-the-art models to much larger traces than considered so far in the literature. Such a case typically occurs with code polymorphism due to the unknown precise location of the leakage from one execution to another. We tackle those questions through the evaluation of two polymorphic implementations of AES, similar to the ones used in a recent paper published in TACO 2019 [6]. We show in our analysis how to efficiently adapt deep learning models used in SCA to scale to traces 32 times larger than what has been done so far in the literature. Our results show that the targeted polymorphic implementations are broken within 20 queries by the most powerful threat models involving deep learning, whereas 100,000 queries would not be sufficient for the attacks previously investigated against code polymorphism to succeed. As a consequence, this paper pushes towards the search for new polymorphic implementations secure against state-of-the-art attacks, which currently remain to be found.
Palash Sarkar, Subhadip Singha
ePrint Report
Regev (2005) introduced the learning with errors (LWE) problem and showed a quantum reduction from a worst-case lattice problem to LWE. Building on the work of Peikert (2009), a classical reduction from the shortest vector problem to LWE was obtained by Brakerski et al. (2013). A concrete security analysis of Regev's reduction by Chatterjee et al. (2016) identified a huge tightness gap. The present work performs a concrete analysis of the tightness gap in the classical reduction of Brakerski et al. It turns out that the tightness gap in the Brakerski et al. classical reduction is even larger than the tightness gap in the quantum reduction of Regev. This casts doubt on the relevance of the reduction to the security assurance of practical cryptosystems.
Annapurna Valiveti, Srinivas Vivek
ePrint Report
Masking by lookup table randomisation is a well-known technique used to achieve side-channel attack resistance for software implementations, particularly against DPA attacks. The randomised table technique for first- and second-order security requires about m * 2^n bits of RAM to store an (n, m)-bit masked S-box lookup table. Table compression helps in reducing the amount of memory required, and this is useful for highly resource-constrained IoT devices. Recently, Vadnala (CT-RSA 2017) proposed a randomised table compression scheme for first- and second-order security in the probing leakage model. This scheme reduces the RAM memory required by about a factor of 2^l, where l is a compression parameter. However, Vivek (Indocrypt 2017) demonstrated an attack against the second-order scheme of Vadnala. Hence achieving table compression at second and higher orders is an open problem.

In this work, we propose a second-order secure randomised table compression scheme which works for any (n, m)-bit S-box. Our proposal is a variant of Vadnala's scheme that is not only secure but also significantly improves the time-memory trade-off. Specifically, we improve the online execution time by a factor of 2^(n-l). Our proposed scheme is proved 2-SNI secure in the probing leakage model. We have implemented our method for AES-128 on a 32-bit ARM Cortex processor, and we are able to reduce the memory required to store a randomised S-box table for a second-order AES-128 implementation to 59 bytes.
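For readers unfamiliar with the underlying technique, the sketch below shows the classic (uncompressed, first-order) randomised table step that the compression schemes build on: the whole S-box is re-randomised with fresh input and output masks, so a lookup on masked data never touches the unmasked values. This is our own minimal illustration, not the authors' second-order or compressed construction.

    import secrets

    def randomised_table(sbox, n, m):
        """First-order masked S-box table (classic table randomisation, shown
        uncompressed for clarity): T[x ^ r_in] = S[x] ^ r_out, so looking up
        a masked input yields a masked output. Storage is m * 2^n bits."""
        r_in = secrets.randbelow(1 << n)
        r_out = secrets.randbelow(1 << m)
        table = [0] * (1 << n)
        for x in range(1 << n):
            table[x ^ r_in] = sbox[x] ^ r_out
        return table, r_in, r_out

    # Toy 4-bit S-box; a masked lookup of x ^ r_in returns S[x] ^ r_out.
    SBOX = [(x * 7 + 3) % 16 for x in range(16)]
    table, r_in, r_out = randomised_table(SBOX, 4, 4)
    x = 11
    assert table[x ^ r_in] == SBOX[x] ^ r_out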
Sankhanil De, Ranjan Ghosh
ePrint Report
Crypto 4-bit substitution boxes (S-boxes) are frequently used in block ciphers for nonlinear substitution. If the 16 elements of a 4-bit S-box are unique and distinct, varying between 0 and f in hex, the S-box is called a crypto 4-bit S-box; there are 16! of them in the crypto literature. There is also another type of 4-bit S-box in which the 16 elements are not all unique and distinct, i.e., at least one element repeats more than once. These are called non-crypto 4-bit S-boxes, and there are 16^16 - 16! of them. Non-crypto 4-bit S-boxes can be generated from 4-bit Boolean functions (BFs) in the same manner as crypto 4-bit S-boxes are generated in [1]. However, generating crypto 4-bit S-boxes sacrifices the security of the resulting S-boxes to some extent: the 12870 balanced 4-bit BFs are responsible for the 16! crypto 4-bit S-boxes, and the nonlinearity of a balanced 4-bit BF is at most 4, so the 4-bit BFs with the highest nonlinearity, 6, are left unused. These BFs are called 4-bit bent BFs. In this paper we generate non-crypto 4-bit S-boxes from 4-bit bent BFs and analyze them with existing cryptanalysis techniques, showing that they are more secure 4-bit S-boxes from a cryptographic perspective.
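The nonlinearity figures quoted above can be checked directly with a Walsh-Hadamard transform: nl(f) = 2^(n-1) - max|W_f|/2, which equals 6 for a 4-bit bent function and at most 4 for a balanced one. The following Python sketch (our own, for illustration) computes this quantity.

    def nonlinearity(truth_table):
        """Nonlinearity of an n-variable Boolean function from its truth
        table, via the Walsh-Hadamard transform: nl(f) = 2^(n-1) - max|W|/2."""
        n = len(truth_table).bit_length() - 1
        w = [(-1) ** b for b in truth_table]
        step = 1
        while step < len(w):                     # fast Walsh-Hadamard transform
            for i in range(0, len(w), 2 * step):
                for j in range(i, i + step):
                    w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
            step *= 2
        return (1 << (n - 1)) - max(abs(v) for v in w) // 2

    # f = x1*x2 XOR x3*x4 is a classic 4-bit bent function: nonlinearity 6.
    bent = [((x >> 3) & (x >> 2) ^ (x >> 1) & x) & 1 for x in range(16)]
    print(nonlinearity(bent))                                # 6
    # A linear (balanced) function such as f = x1 has nonlinearity 0;
    # balanced 4-bit functions reach at most 4.
    print(nonlinearity([(x >> 3) & 1 for x in range(16)]))   # 0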

13 July 2020

Tampere University
Job Posting

The Network and Information Security Group is currently looking for several motivated and talented researchers at all levels (PhD, PostDoc) to contribute to research projects related to applied cryptography, hardware security, and security and privacy. The successful candidates will primarily be working on the following topics (but are not limited to them):

  • Differential Privacy;
  • Functional Encryption;
  • Privacy-Preserving Analytics;
  • Privacy-Preserving Machine Learning;
  • Searchable Encryption and data structures enabling efficient search operations on encrypted data;
  • Processing of encrypted data in outsourced and untrusted environments;
  • Applying encrypted search techniques to Trusted Execution Environments;
  • Revocable Attribute-Based Encryption schemes and their application to cloud services;
  • IoT Security and Applications to Smart Cities;
  • Side Channel Analysis (SCA);
  • Machine Learning based SCA;
  • Embedded security (e.g. ARM-based SoC);
  • TEE security and development (e.g. TrustZone, Trusted Applications, etc.).

Programming skills are a must.

The positions are principally research-focused. Activities include:

  • Conducting both theoretical and applied research;
  • Design of secure and/or privacy-preserving protocols;
  • Software development and validation;
  • Reading and writing scientific articles;
  • Presentation of the research results at seminars and conferences in Finland and abroad;
  • Acquiring (or assisting in acquiring) further funding.

Successful candidates will be working in EU and industrial research projects. Topics will span from the theoretical foundations of cryptography to the design and implementation of provably secure communication protocols with direct applications to smart cities, cloud computing and eHealth.

To apply please send the following:

  • Your latest CV;
  • A research statement (max 2 pages long);
  • The three best papers you have co-authored.

Closing date for applications:

Contact:

  • Billy Bob Brumley (Hardware Security and SCA): billy.brumley@tuni.fi
  • Antonis Michalas (Provable Security and Privacy): antonios.michalas@tuni.fi

More information: https://research.tuni.fi/vision/open-positions-2020/


12 July 2020

Marios Georgiou, Mark Zhandry
ePrint Report
We initiate the study of encryption schemes where the decryption keys are unclonable quantum objects, which we call single decryptor encryption. We give a number of initial results in this area:

- We formalize the notion of single decryptor encryption.
- We show that secret-key single decryptor encryption is possible unconditionally, in the setting where a limited number of ciphertexts are given. However, given an encryption oracle, we show that unconditional security is impossible.
- We show how to use a very recent notion of one-shot signatures, together with sufficiently powerful witness encryption, to achieve public key single decryptor encryption.
- We demonstrate several extensions of our scheme, achieving a number of interesting properties that are not possible classically.
Claude Carlet, Sylvain Guilley, Sihem Mesnager
ePrint Report
The Internet of Things is developing at a very fast rate. In order to ensure security and privacy, end-devices (e.g. smartphones, smart sensors, or any connected smartcards) shall be protected both against cyber attacks (coming down from the network) and against physical attacks (arising from attacker low-level interaction with the device). In this context, proactive protections shall be put in place to mitigate information theft from either side-channel monitoring or active computation/data corruption. Although both kinds of countermeasures have been developing fast and have become mature, there has surprisingly been little research on combining them.

In this article, we tackle this difficult topic and highlight a viable solution. It is shown to be more efficient than mere fault detection by repetition (which is in any case prone to repeated correlated faults). The presented solution leverages the fact that both side-channel protection and fault attack detection are coding techniques. We explain how to both prevent (higher-order) side-channel analyses and detect (higher-order) fault injection attacks. The specificity of this method is that it works ``end-to-end'', meaning that the detection can be delayed until the computation is finished. This considerably simplifies the error management logic, as there is a single verification throughout the computation.
Daiki Hayashida, Kenichiro Hayasaka, Tadanori Teruya
ePrint Report
The final exponentiation, which is the exponentiation by a fixed large exponent, must be performed in the Tate and (optimal) Ate pairing computation to ensure output uniqueness, algorithmic correctness, and security for pairing-based cryptography. In this paper, we propose a new framework of efficient final exponentiation for pairings over families of elliptic curves. Our framework provides two methods: the first method supports families of elliptic curves with arbitrary embedding degrees, and the second method supports families with specific embedding degrees, providing even faster algorithms. Applying our framework to several Barreto-Lynn-Scott families, we obtain faster final exponentiations than the previous state-of-the-art constructions.
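As standard background (not specific to this paper): for a curve over $\mathbb{F}_p$ with embedding degree $k$ and pairing subgroup order $r$, the final exponentiation raises the Miller-loop output $f$ to the fixed power $(p^k-1)/r$, conventionally split as

$$f^{(p^k-1)/r} = \left(f^{(p^k-1)/\Phi_k(p)}\right)^{\Phi_k(p)/r},$$

where $\Phi_k$ is the $k$-th cyclotomic polynomial. The ``easy'' part $(p^k-1)/\Phi_k(p)$ costs only Frobenius maps and one inversion; the ``hard'' part $\Phi_k(p)/r$ dominates the cost and is the target of frameworks such as this one.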
Susan Hohenberger, Brent Waters
ePrint Report
We put forward a new abstraction for achieving forward-secure signatures that are (1) short, (2) have fast update and signing and (3) have small private key size. Prior work that achieved these parameters was pioneered by the pebbling techniques of Itkis and Reyzin (CRYPTO 2001) which showed a process for generating a sequence of roots $h^{1/e_1}, h^{1/e_2}, \dots, h^{1/e_T}$ for a group element $h$ in $\mathbb{Z}_N^*$. However, the current state of the art has limitations.

First, while many works claim that Itkis-Reyzin pebbling can be applied, it is seldom shown how this non-trivial step is concretely done. Second, setting up the pebbling data structure takes $T$ time, which makes key generation using this approach expensive. Third, many past works require random oracles and/or the Strong RSA assumption; we will work in the standard model under the RSA assumption.

We introduce a new abstraction that we call an RSA sequencer. Informally, the job of an RSA sequencer is to store roots of a public key $U$, so that at time period $t$, it can provide $U^{1/e_t}$, where the value $e_t$ is an RSA exponent computed from a certain function. This separation allows us to focus on building a sequencer that efficiently stores such values, in a forward-secure manner and with better setup times than other comparable solutions. Our sequencer abstraction also has certain re-randomization properties that allow for constructing forward-secure signatures with a single trusted setup that takes $T$ time and individual key generation takes $\lg(T)$ time.

We demonstrate the utility of our abstraction by using it to provide concrete forward-secure signature schemes. We first give a random-oracle construction that closely matches the performance and structure of the Itkis-Reyzin scheme with the important exception that key generation is much faster (after the one-time setup). We then move on to designing a standard model scheme. This abstraction and illustration of how to use it may be useful for other future works.

We include a detailed performance evaluation of our constructions, with an emphasis on the time and space costs for large caps on the maximum number of time periods $T$ supported. Our philosophy is that frequently updating forward secure keys should be part of ``best practices'' in key maintenance. To make this practical, even for bounds as high as $T=2^{32}$, we show that after an initial global setup, it takes only seconds to generate a key pair, and only milliseconds to update keys, sign messages and verify signatures. The space requirements for the public parameters and private keys are also a modest number of kilobytes, with signatures being a single element in $\mathbb{Z}_N$ and one smaller value.
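To make the sequencer's job concrete, the toy Python sketch below (our own, with insecure demo parameters) computes such a root sequence $U^{1/e_t} \bmod N$ directly using the factorization of $N$ as a trapdoor; the point of the pebbling and sequencer machinery is precisely to store and serve these roots without the trapdoor and without recomputing everything each period.

    from math import gcd

    p, q = 1000003, 1000033              # demo primes only; a real N is RSA-sized
    N, phi = p * q, (p - 1) * (q - 1)
    U = 123456789                        # public value whose roots are stored

    def root(U, e, phi, N):
        # U^{1/e} mod N via d = e^{-1} mod phi(N); needs gcd(e, phi) = 1
        assert gcd(e, phi) == 1
        return pow(U, pow(e, -1, phi), N)

    exponents = [65537, 65539, 65543]    # one RSA exponent e_t per time period t
    roots = [root(U, e, phi, N) for e in exponents]
    for e, r in zip(exponents, roots):
        assert pow(r, e, N) == U         # each root verifies against the public U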
Julia Bobrysheva, Sergey Zapechnikov
ePrint Report
Progress in quantum technologies forces the development of new cryptographic primitives that are resistant to attacks by an adversary with a quantum computer. A large number of key establishment schemes have been proposed for two participants, but the area of post-quantum group key establishment schemes has received little study. Not long ago, an isogeny-based key agreement scheme was proposed for three participants, based on a gradual increase in the degree of the key. We propose another principle for establishing a key for a group of participants using a tree structure. The proposed key establishment scheme for four participants uses isogenies of elliptic curves as a mathematical tool.
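The tree idea itself is independent of the underlying primitive. The Python sketch below (our own illustration; it uses plain Diffie-Hellman over a toy prime group as a stand-in for the paper's isogeny arithmetic) shows how four participants can combine pairwise secrets up a binary tree so that both subtrees derive the same root key.

    P = 2**127 - 1          # toy prime modulus (demo only)
    G = 3                   # toy generator

    def combine(secret, public):
        # DH-style combine: public^secret mod P becomes the parent node's secret
        return pow(public, secret, P)

    # Four participants with secrets a, b, c, d at the leaves.
    a, b, c, d = 11, 13, 17, 19
    # Level 1: each pair derives a shared secret from the sibling's public value.
    k_ab = combine(a, pow(G, b, P))      # equals combine(b, pow(G, a, P))
    k_cd = combine(c, pow(G, d, P))
    # Level 2: the two pair-secrets are combined the same way at the root.
    group_key = combine(k_ab, pow(G, k_cd, P))
    assert group_key == combine(k_cd, pow(G, k_ab, P))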
Gabriel Zaid, Lilian Bossuet, François Dassance, Amaury Habrard, Alexandre Venelli
ePrint Report
The side-channel community recently investigated a new approach, based on deep learning, to significantly improve profiled attacks against embedded systems. Compared to template attacks, deep learning techniques can deal with protected implementations, such as masking or desynchronization, without substantial pre-processing. However, important issues are still open. One challenging problem is to adapt the methods classically used in the machine learning field (e.g. loss function, performance metrics) to the specific side-channel context in order to obtain optimal results. We propose a new loss function derived from the learning to rank approach that helps prevent the approximation and estimation errors induced by the classical cross-entropy loss. We theoretically demonstrate that this new function, called Ranking Loss (RkL), maximizes the success rate by minimizing the ranking error of the secret key in comparison with all other hypotheses. The resulting model converges towards the optimal distinguisher when considering the mutual information between the secret and the leakage. Consequently, the approximation error is prevented. Furthermore, the estimation error, induced by the cross-entropy, is reduced by up to 23%. When the ranking loss is used, the convergence towards the best solution is up to 23% faster than a model using the cross-entropy loss function. We validate our theoretical propositions on public datasets.
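The quantity being optimized is the rank of the true key among all hypotheses after accumulating per-trace scores. The NumPy helper below (our own illustration, not the authors' code) computes that rank the way profiled attacks are usually evaluated; a ranking loss trains the network so that this value is driven to 0.

    import numpy as np

    def key_rank(log_probs, true_key):
        """Rank of the true key after a profiled attack: per-trace
        log-likelihoods (n_traces x n_keys) are summed per key hypothesis and
        the hypotheses sorted; rank 0 means the top guess is the true key."""
        scores = log_probs.sum(axis=0)
        order = np.argsort(scores)[::-1]          # best hypothesis first
        return int(np.where(order == true_key)[0][0])

    rng = np.random.default_rng(0)
    fake = rng.normal(size=(50, 256))             # 50 traces, 256 key hypotheses
    fake[:, 42] += 0.5                            # bias evidence toward key 42
    print(key_rank(fake, true_key=42))            # very likely 0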
Qipeng Liu, Amit Sahai, Mark Zhandry
ePrint Report
One-time memories (OTM) are the hardware version of oblivious transfer, and are useful for constructing objects that are impossible with software alone, such as one-time programs. In this work, we consider attacks on OTMs where a quantum adversary can leverage his physical access to the memory to mount quantum ``superposition attacks'' against the memory. Such attacks result in significantly weakened OTMs. For example, in the application to one-time programs, it may appear that such an adversary can always “quantumize” the classical protocol by running it on a superposition of inputs, and therefore learn superpositions of outputs of the protocol.

Perhaps surprisingly, we show that this intuition is false: we construct one-time programs from quantum-accessible one-time memories where the view of an adversary, despite making quantum queries, can be simulated by making only classical queries to the ideal functionality. At the heart of our work is a method of immunizing one-time memories against superposition attacks.
Yu Yu, Jiang Zhang
ePrint Report
Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 1990s. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives such as fully homomorphic encryption [Gentry, STOC 2009]. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result that the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2-1/poly(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness.

In this paper, we continue the worst-case to average-case reduction for LPN [BLVW19]. We first expand the underlying binary linear codes (of the worst-case NCP) from the balanced code considered in [BLVW19] to another code as well (in some sense dual to the balanced code). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is ($T=2^{\Omega(n^{1-c})},\epsilon=2^{-\Omega(n^{\min(c,1-c)})},q=2^{\Omega(n^{\min(c,1-c)})}$)-hard assuming that the NCP (on either code) at the low noise rate $\tau=n^{-c}$ is ($T'={2^{\Omega(\tau n)}}$, $\epsilon'={2^{-\Omega(\tau n)}}$, $m={2^{\Omega(\tau n)}}$)-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryptions and collision resistant hash functions would need constant-noise LPN with ($T={2^{\omega(\sqrt{n})}}$, $\epsilon={2^{-\omega(\sqrt{n})}}$, $q={2^{\sqrt{n}}}$)-hardness (Yu et al., CRYPTO 2016 \& ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c= 0.5$. We leave it as an open problem whether the gap can be closed or there is a separation in place.
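For reference, the standard (search) LPN problem with secret $s \in \{0,1\}^n$ and noise rate $\mu$ asks to recover $s$ from samples of the form

$$(a_i,\; \langle a_i, s \rangle \oplus e_i), \qquad a_i \xleftarrow{\$} \{0,1\}^n, \quad e_i \leftarrow \mathsf{Ber}_\mu,$$

i.e., random linear equations over $\mathbb{F}_2$, each flipped independently with probability $\mu$. Constant-noise LPN fixes $\mu$ to a constant, whereas the worst-case NCP instances above correspond to the much lower noise rate $\tau = n^{-c}$.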