International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 May 2022

Donghoon Chang, Deukjo Hong, Jinkeon Kang
ePrint Report
Ascon-128 and Ascon-80pq use the 12-round Ascon permutation for the initialization and finalization phases and the 6-round Ascon permutation for processing associated data and messages. In a nonce-misuse setting, we present a new partial-state-recovery conditional-cube attack on Ascon-128 and Ascon-80pq that recovers 192 bits of the 320-bit state. The required data complexity \(D\) of this partial state-recovery attack is about \(2^{44.8}\), and its required memory complexity \(M\) is negligible. Once the 192-bit partial state is recovered, still in a nonce-misuse setting, we can further recover the full 320-bit state with time complexity \(T=2^{128}\), and then recover the secret key with extra data complexity of \(2^{31.5}\), extra time complexity of \(2^{129.5}\), and memory complexity of \(2^{31.5}\). A similar partial-state-recovery attack was independently developed by Baudrin et al. at the fifth NIST Lightweight Cryptography Workshop. Note that our attack violates neither the NIST LWC security requirements for Ascon-128 and Ascon-80pq nor the designers' claims.
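For readers unfamiliar with the primitive, the permutation targeted by the attack can be sketched as follows. This is a plain-Python transcription of the public Ascon permutation (round constants, S-box, and rotation offsets from the Ascon specification); it is illustrative only, not a verified or constant-time implementation.

```python
# Sketch of the 320-bit Ascon permutation (five 64-bit words x0..x4).
MASK = (1 << 64) - 1

def ror(x, n):
    """Rotate a 64-bit word right by n bits."""
    return ((x >> n) | (x << (64 - n))) & MASK

def ascon_round(s, c):
    """One permutation round on state s = [x0..x4] with round constant c."""
    x0, x1, x2, x3, x4 = s
    x2 ^= c                                   # addition of round constant
    # substitution layer: 5-bit S-box applied bit-sliced across the words
    x0 ^= x4; x4 ^= x3; x2 ^= x1
    t = [(~v) & MASK for v in (x0, x1, x2, x3, x4)]
    t0 = t[0] & x1; t1 = t[1] & x2; t2 = t[2] & x3; t3 = t[3] & x4; t4 = t[4] & x0
    x0 ^= t1; x1 ^= t2; x2 ^= t3; x3 ^= t4; x4 ^= t0
    x1 ^= x0; x0 ^= x4; x3 ^= x2; x2 = (~x2) & MASK
    # linear diffusion layer
    x0 ^= ror(x0, 19) ^ ror(x0, 28)
    x1 ^= ror(x1, 61) ^ ror(x1, 39)
    x2 ^= ror(x2,  1) ^ ror(x2,  6)
    x3 ^= ror(x3, 10) ^ ror(x3, 17)
    x4 ^= ror(x4,  7) ^ ror(x4, 41)
    return [x0, x1, x2, x3, x4]

def ascon_p(s, rounds=12):
    """Apply the permutation: 12 rounds for p^12; p^6 uses the last 6 constants."""
    consts = [0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87, 0x78, 0x69, 0x5A, 0x4B]
    for c in consts[12 - rounds:]:
        s = ascon_round(s, c)
    return s
```

The 12-round variant is what initialization and finalization call; the 6-round variant processes associated data and message blocks.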
Aram Jivanyan, Aaron Feickert
ePrint Report
Electronic voting has long been an area of active and challenging research. Security properties relevant to physical voting in elections with a variety of threat models and priorities are often difficult to reproduce in cryptographic systems and protocols. Existing work in this space often focuses on the privacy of ballot contents, assurances to voters that their votes are tabulated, and verification that election results are correct; however, privacy of voter identity is often offloaded to trust requirements on election organizers or tallying authorities, or implies other kinds of trust related to cryptographic construction instantiation. Here we introduce Aura, an election protocol that reduces the trust placed in tallying authorities and organizers while ensuring voter privacy. Aura cryptographically dissociates ballots from voter identity, uses verifiable encryption and threshold decryption to diffuse trust in tallying authorities, requires no trusted setup for cryptographic primitives, and uses efficient proving systems to reduce computation and communication complexity. These properties make Aura a competitive candidate for use in a variety of applications where trust minimization is desirable or necessary.
Mathias Hall-Andersen, Jesper Buus Nielsen
ePrint Report
In his landmark paper at TCC 2008, Paul Valiant introduced the notion of ``incrementally verifiable computation'', which enables a prover to incrementally compute a succinct proof of correct execution of a (potentially) long-running process. The paper later won the 2019 TCC Test-of-Time Award. The construction was proven secure in the random-oracle model without any further computational assumptions. However, the overall proof was given using a non-standard version of the random-oracle methodology in which the hash function is sometimes a random oracle and sometimes has a short description as a circuit. Valiant clearly noted that this model is non-standard, but conjectured that the standard random-oracle methodology would not suffice. This conjecture has been open for 14 years. We prove that under some mild extra assumptions on the proof system the conjecture is true: the standard random-oracle model does not allow incrementally verifiable computation without making computational assumptions. Two extra assumptions under which we can prove the conjecture are: 1) the proof system is also zero-knowledge; or 2) when the proof system makes a query to its random oracle, it can tell with non-negligible probability whether the query is fresh or was made by the proof system earlier in the construction of the proof.
Sandro Coretti, Aggelos Kiayias, Cristopher Moore, Alexander Russell
ePrint Report
One of the most successful applications of peer-to-peer communication networks is in the context of blockchain protocols, which—in Satoshi Nakamoto's own words—rely on the "nature of information being easy to spread and hard to stifle." Significant efforts were invested in the last decade into analyzing the security of these protocols, and invariably the security arguments known for longest-chain Nakamoto-style consensus use an idealization of this tenet.

Unfortunately, the real-world implementations of peer-to-peer gossip-style networks used by blockchain protocols rely on a number of ad-hoc attack mitigation strategies that leave a glaring gap between the idealized communication layer assumed in formal security arguments for blockchains and the real world, where a wide array of attacks have been showcased.

In this work we bridge this gap by presenting a Byzantine-resilient network layer for blockchain protocols. For the first time we quantify the problem of network-layer attacks in the context of blockchain security models, and we develop a design that thwarts resource restricted adversaries.

Importantly, we focus on the proof-of-stake setting due to its vulnerability to Denial-of-Service (DoS) attacks stemming from the well-known deficiency (compared to the proof-of-work setting) known as nothing at stake.

We present a Byzantine-resilient gossip protocol, and we analyze it in the Universal Composability framework. In order to prove security, we show novel results on expander properties of random graphs. Importantly, our gossip protocol can be based on any given bilateral functionality that determines a desired interaction between two "adjacent" peers in the networking layer, and it demonstrates how application-layer information can be used to make the networking layer resilient to attacks.

Despite the seeming circularity, we demonstrate how to prove the security of a Nakamoto-style longest-chain protocol given our gossip networking functionality, and hence, we demonstrate constructively how it is possible to obtain provable security across protocol layers, given only bare-bone point-to-point networking, majority of honest stake, and a verifiable random function.
Katarzyna Anna Kowalska, Davide Fogliano, Jose Garcia Coello
ePrint Report
At Crypta Labs we are developing Quantum Random Number Generator technology and are using different random number test suites to assess the quality of our products. Among these is the NIST 800-22 suite. When testing our datasets, we found that we were consistently failing one particular test: the Overlapping Template Matching test. This was surprising to us, so we fed data from a known PRNG source into the same test and discovered that a NIST-approved PRNG was failing in a similar fashion. At this point we decided to debug NIST's code. We did indeed find an error in the probability calculations and, once it was corrected, ran the tests again and passed. The code for this test had previously been revised by NIST because of an incorrect calculation of the probabilities; however, later in the revised source code the probabilities were computed again using the originally incorrect formulas, and these overwrote the fix. The NIST 800-22 test suite is currently under revision, and our paper is a contribution towards that revision.
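To make the test concrete, here is a schematic re-implementation of the Overlapping Template Matching statistic from NIST SP 800-22. The category probabilities `PI` below are illustrative placeholders, not the corrected NIST constants; the paper's point is precisely that the shipped code recomputed these probabilities with an incorrect formula.

```python
# Schematic Overlapping Template Matching statistic (NIST SP 800-22 style).
# PI holds placeholder category probabilities (bins: 0..4 matches, >=5 matches);
# the real test uses carefully derived constants, whose computation contained
# the bug described above.
PI = [0.36, 0.19, 0.14, 0.10, 0.07, 0.14]  # illustrative only, sums to 1

def overlapping_matches(block: str, template: str) -> int:
    """Count (possibly overlapping) occurrences of template in a bit string."""
    m = len(template)
    return sum(block[i:i + m] == template for i in range(len(block) - m + 1))

def otm_statistic(bits: str, m: int = 9, M: int = 1032, pi=PI) -> float:
    """Chi-square statistic over N = len(bits)//M blocks, template = '1'*m."""
    template = "1" * m
    N = len(bits) // M
    nu = [0] * len(pi)          # nu[k]: number of blocks with k matches
    for j in range(N):
        count = overlapping_matches(bits[j * M:(j + 1) * M], template)
        nu[min(count, len(pi) - 1)] += 1
    return sum((nu[k] - N * pi[k]) ** 2 / (N * pi[k]) for k in range(len(pi)))
```

The statistic is then compared against a chi-square threshold; with wrong `pi` values, even good generators systematically fail, which matches the behaviour the authors observed.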
Yawning Angel, Benjamin Dowling, Andreas Hülsing, Peter Schwabe, Florian Weber
ePrint Report
We introduce PQNoise, a post-quantum variant of the Noise framework. We demonstrate that it is possible to replace the Diffie-Hellman key-exchanges in Noise with KEMs in a secure way. A challenge is the inability to combine key pairs of KEMs, which can be resolved by certain forms of randomness-hardening for which we introduce a formal abstraction. We provide a generic recipe to turn classical Noise patterns into PQNoise patterns. We prove that the resulting PQNoise patterns achieve confidentiality and authenticity in the fACCE-model. Moreover, we show that for those classical Noise patterns that have been conjectured or proven secure in the fACCE-model, our matching PQNoise patterns eventually achieve the same security. Our security proof is generic and applies to any valid PQNoise pattern. This is made possible by another abstraction, called a hash-object, which hides the exact workings of how keying material is processed in an abstract stateful object that outputs pseudorandom keys under different corruption patterns. We also show that the hash chains used in Noise are a secure hash-object. Finally, we demonstrate the practicality of PQNoise by delivering benchmarks for several base patterns.
Patrick Karl, Jonas Schupp, Tim Fritzmann, Georg Sigl
ePrint Report
CRYSTALS-Dilithium and Falcon are digital signature algorithms based on cryptographic lattices that are considered secure even if large-scale quantum computers become able to break conventional public-key cryptography. Both schemes are third-round candidates in the ongoing NIST post-quantum competition. In this work, we present a RISC-V HW/SW codesign that aims to combine the advantages of software and hardware implementations, i.e., flexibility and performance. It shows the use of flexible hardware accelerators, previously used for Public-Key Encryption (PKE) and Key-Encapsulation Mechanisms (KEM), for post-quantum signatures. It is optimized for Dilithium as a generic signature scheme but also accelerates applications that require fast verification of Falcon's compact signatures. We provide a comparison with previous works showing that for Dilithium and Falcon, cycle counts are significantly reduced, such that our design is faster than previous software implementations or other HW/SW codesigns. In addition, we present a compact Globalfoundries 22 nm ASIC design that runs at 800 MHz. By using hardware acceleration, energy consumption for Dilithium is reduced by up to 92.2%, and by up to 67.5% for Falcon's signature verification.
Jincheol Ha, Seongkwang Kim, Byeonghak Lee, Jooyoung Lee, Mincheol Son
ePrint Report
A transciphering framework converts a symmetric ciphertext into a homomorphic ciphertext on the server side, reducing computational and communication overhead on the client side. In Asiacrypt 2021, Cho et al. proposed the RtF framework that supports approximate computation.

In this paper, we propose a family of noisy ciphers, dubbed Rubato, with a novel design strategy of introducing noise to a symmetric cipher of a low algebraic degree. With this strategy, the multiplicative complexity of the cipher is significantly reduced, compared to existing HE-friendly ciphers, without degrading the overall security. More precisely, given a moderate block size (16 to 64), Rubato enjoys a low multiplicative depth (2 to 5) and a small number of multiplications per encrypted word (2.1 to 6.25) at the cost of slightly larger ciphertext expansion (1.26 to 1.31). The security of Rubato is supported by comprehensive analysis including symmetric and LWE cryptanalysis. Compared to HERA within the RtF framework, client-side and server-side throughput is improved by 22.9% and 32.2%, respectively, at the cost of only 1.6% larger ciphertext expansion.
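The design idea can be shown in miniature: each keystream word is output with a small added noise term, LWE-style, so the cipher itself can afford a low algebraic degree. The parameters and structure below are illustrative stand-ins, not Rubato's actual construction.

```python
# Toy illustration of a "noisy cipher" output step: add small
# discrete-Gaussian-like noise to each keystream word modulo q.
# q and sigma are illustrative, not Rubato's parameters.
import random

def noisy_keystream(words, q=2**25, sigma=1.6):
    """Return the keystream words with small rounded-Gaussian noise mod q."""
    return [(w + round(random.gauss(0, sigma))) % q for w in words]
```

In an approximate-computation setting such as the RtF framework, this small noise is absorbed into the approximation error rather than breaking correctness.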
Sabyasachi Dey, Hirendra Kumar Garai, Santanu Sarkar, Nitin Kumar Sharma
ePrint Report
In this paper, we provide several improvements over the existing differential-linear attacks on ChaCha. ChaCha is a stream cipher with $20$ rounds. At CRYPTO $2020$, Beierle et al. observed a differential in the $3.5$-th round if the right pairs are chosen. They produced an improved attack using this observation, but showed that achieving a right pair requires $2^5$ iterations on average. In this direction, we provide a technique to find the right pairs with the help of listing. We also provide a strategic improvement in PNB construction, a modification of the complexity calculation, and an alternative attack method using two input-output pairs. Using these, we improve the time complexity, reducing it to $2^{221.95}$ from the $2^{230.86}$ reported by Beierle et al. for the $256$-bit version of ChaCha. Also, after a decade, we improve the existing complexity (Shi et al.: ICISC 2012) for $6$ rounds of the $128$-bit version of ChaCha by more than 11 million times and produce the first-ever attack on 6.5-round ChaCha$128$ with time complexity $2^{123.04}.$
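The nonlinear core analyzed by this line of differential-linear work is the ChaCha quarter-round, transcribed here from the public specification (RFC 8439) as an illustrative sketch.

```python
# The ChaCha quarter-round: four add-rotate-xor steps on 32-bit words.
MASK32 = (1 << 32) - 1

def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

A full ChaCha round applies this to four columns (or diagonals) of the 4x4 state; the carries in the modular additions are the only source of nonlinearity, which is what differential-linear distinguishers exploit.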
Damiano Abram, Peter Scholl, Sophia Yakoubov
ePrint Report
Structured random strings (SRSs) and correlated randomness are important for many cryptographic protocols. In settings where interaction is expensive, it is desirable to obtain such randomness in as few rounds of communication as possible; ideally, simply by exchanging one reusable round of messages which can be considered public keys.

In this paper, we describe how to generate any SRS or correlated randomness in such a single round of communication, using, among other things, indistinguishability obfuscation. We introduce what we call a distributed sampler, which enables $n$ parties to sample a single public value (SRS) from any distribution. We construct a semi-malicious distributed sampler in the plain model, and use it to build a semi-malicious public-key PCF (Boyle et al., FOCS 2020) in the plain model. A public-key PCF can be thought of as a distributed correlation sampler; instead of producing a public SRS, it gives each party a private random value (where the values satisfy some correlation).

We introduce a general technique called an anti-rusher which compiles any one-round protocol with semi-malicious security without inputs to a similar one-round protocol with active security by making use of a programmable random oracle. This gets us actively secure distributed samplers and public-key PCFs in the random oracle model.

Finally, we explore some tradeoffs. Our first PCF construction is limited to reverse-sampleable correlations (where the random outputs of honest parties must be simulatable given the random outputs of corrupt parties); we additionally show a different construction without this limitation, but which does not allow parties to hold secret parameters of the correlation. We also describe how to avoid the use of a random oracle at the cost of relying on sub-exponentially secure indistinguishability obfuscation.
Renas Bacho (CISPA Helmholtz Center for Information Security), Julian Loss (CISPA Helmholtz Center for Information Security)
ePrint Report
Threshold signatures are a crucial tool for many distributed protocols. As shown by Cachin, Kursawe, and Shoup (PODC `00), schemes with unique signatures are of particular importance, as they allow one to implement distributed coin flipping very efficiently and without any timing assumptions. This makes them an ideal building block for (inherently randomized) asynchronous consensus protocols. The threshold-BLS signature of Boldyreva (PKC `03) is both unique and very compact, but unfortunately lacks a security proof against adaptive adversaries. Thus, current consensus protocols either rely on less efficient alternatives or are not adaptively secure. In this work, we revisit the security of the threshold BLS signature by showing the following results, assuming $t$ adaptive corruptions:

- We give a modular security proof that follows a two-step approach: 1) We introduce a new security notion for distributed key generation protocols (DKG). We show that it is satisfied by several protocols that previously only had a static security proof. 2) Assuming any DKG protocol with this property, we then prove unforgeability of the threshold BLS scheme. Our reductions are tight and can be used to substantiate real-world parameter choices.

- To justify our use of strong assumptions such as the algebraic group model (AGM) and the hardness of one-more-discrete logarithm (OMDL), we prove two impossibility results: 1) Without the AGM, there is no tight security reduction from $(t+1)$-OMDL. 2) Even in the AGM, $(t+1)$-OMDL is the weakest assumption from which any (possibly loose) security reduction exists.
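The mechanism that makes threshold signing work can be seen in miniature: the secret key is Shamir-shared, and any $t+1$ shares recombine via Lagrange interpolation at $x = 0$. The toy below operates on field elements directly; threshold BLS performs the same interpolation "in the exponent" of a pairing-friendly group, and the DKG protocols discussed in the paper produce such sharings without a trusted dealer.

```python
# Toy Shamir sharing and reconstruction over a prime field.
import random

P = 2**127 - 1  # a Mersenne prime, chosen here only for convenience

def share(secret, t, n):
    """Shamir-share `secret` among n parties; any t+1 shares reconstruct it."""
    poly = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

In the signature setting, each party applies its share `yi` to the message in the group, and the Lagrange coefficients combine the partial signatures instead of the shares themselves.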
M. Rajululkahf
ePrint Report
This paper proposes Băhēm, a symmetric cipher with the property that, given a random-looking key k, a true random number generator (TRNG) and a cleartext message m to encrypt, no cryptanalysis can degrade its security below min[H(m), H(k)] bits of entropy, even under Grover's algorithm or even if it turned out that P = NP.

Aside from the cost of memory access and input/output processing, Băhēm requires only three additions (one per-session, two per-block) and one XOR operation in order to encrypt or decrypt, and is also highly parallelise-able.

Despite Băhēm's 1-bit overhead per cleartext bit, its early prototype, Alyal, achieved run-time speeds similar to OpenSSL's ChaCha20: slightly faster decryption, and slightly slower encryption when the TRNG output was prepared in a file in advance. This demonstrates that Băhēm is practically usable for many real-world application scenarios.

Later implementations, with better TRNG optimisations and parallelism, should give the prototype faster run-times for both encryption and decryption.
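As a toy illustration of the general idea (this is not Băhēm itself), consider pad-style encryption in which a TRNG-generated pad encrypts the message and is itself sent masked by the key: security then rests on the entropy of the key and message, and the transmitted masked pad accounts for a 1-bit overhead per cleartext bit.

```python
# Toy pad-style scheme (NOT Băhēm): ct = m XOR pad, with the pad itself
# transmitted masked by the key. Requires len(key) >= len(msg).
import secrets

def encrypt(key: bytes, msg: bytes):
    pad = secrets.token_bytes(len(msg))              # stand-in for TRNG output
    masked_pad = bytes(p ^ k for p, k in zip(pad, key))
    ct = bytes(m ^ p for m, p in zip(msg, pad))
    return masked_pad, ct

def decrypt(key: bytes, masked_pad: bytes, ct: bytes):
    pad = bytes(mp ^ k for mp, k in zip(masked_pad, key))
    return bytes(c ^ p for c, p in zip(ct, pad))
```

Note the doubled ciphertext size (`masked_pad` plus `ct`), mirroring the 1-bit-per-bit overhead mentioned above.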
Joon-Woo Lee, Eunsang Lee, Young-Sik Kim, Jong-Seon No
ePrint Report
In the artificial intelligence as a service (AIaaS) system in the client-server model, where the clients provide the data on the cloud and the server processes the data by using deep neural networks in the cloud, data privacy via homomorphic encryption is becoming more important. The Brakerski/Fan-Vercauteren (BFV) and Cheon-Kim-Kim-Song (CKKS) schemes are two representative homomorphic encryption schemes that support various arithmetic operations on encrypted data in the single-instruction multiple-data (SIMD) manner. As the homomorphic operations in these schemes are performed component-wise on encrypted message vectors, rotation operations for various cyclic shifts of the encrypted message vector are required for useful advanced operations such as bootstrapping, matrix multiplication, and convolution in convolutional neural networks. Since the rotation operation requires a different Galois key for each cyclic shift, servers using the conventional BFV and CKKS schemes must ask the clients, who hold the secret keys, to generate and send all of the required Galois keys. In particular, in advanced services that require rotation operations for many cyclic shifts, such as deep convolutional neural networks, the total Galois key size can be hundreds of gigabytes. This imposes substantial computation and communication burdens on the clients. In this paper, we propose a new concept, a \emph{hierarchical Galois key generation method}, for homomorphic encryption to reduce the burdens of the clients and the server running the BFV and CKKS schemes. The main concept in the proposed method is the hierarchy of Galois keys: after the client generates and transmits a few Galois keys at the highest key level to the server, the server can generate any required Galois key from the public key and the smaller set of Galois keys at the higher key levels.
This proposed method significantly reduces the number of the clients' operations for Galois key generation and the communication cost of Galois key transmission. Since the server can generate the required Galois keys from the small set of Galois keys received from the client, the server neither needs to request additional Galois keys from the clients nor to store all possible Galois keys for future use. For example, if we implement the standard ResNet-20 network for the CIFAR-10 dataset and the ResNet-18 network for the ImageNet dataset with pre-trained parameters under the CKKS scheme with polynomial modulus degrees $N=2^{16}$ and $N=2^{17}$, respectively, the server requires 265 and 617 Galois keys, occupying 105.6GB and 197.6GB of memory, respectively. With the proposed three-level hierarchical Galois key system, the Galois key size generated and transmitted by the client can be reduced from 105.6GB to 3.4GB for the ResNet-20 model for CIFAR-10, and from 197.6GB to 3.9GB for the ResNet-18 model for ImageNet.
Norica Băcuieți, Joan Daemen, Seth Hoffert, Gilles Van Assche, Ronny Van Keer
ePrint Report
Currently, a vast majority of symmetric-key cryptographic schemes are built as block cipher modes. The block cipher is designed to be hard to distinguish from a random permutation, and this is supported by cryptanalysis, while (good) modes can be proven secure if a random permutation takes the place of the block cipher. As such, block ciphers form an abstraction level that marks the border between cryptanalysis and security proofs. In this paper, we investigate a re-factored version of symmetric-key cryptography built not around block ciphers but around the deck function: a keyed function with arbitrary input and output length and incrementality properties. This allows for modes of use that are simpler to analyze and still very efficient thanks to the excellent performance of currently proposed deck functions. We focus on authenticated encryption modes with varying levels of robustness. Our modes have built-in support for sessions, but are also efficient without them. As a by-product, we define a new ideal model for authenticated encryption dubbed the jammin cipher. Unlike the OAE2 security models, the jammin cipher is both an operational ideal scheme and a security reference, and it addresses real-world use cases such as bidirectional communication and multi-key security.
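To see why a deck function is a convenient building block, here is a toy stand-in: keying a XOF (SHAKE128) by prefixing the key gives a function with arbitrary-length input and output, from which a stream-cipher-like encryption mode falls out directly. This sketch is illustrative only; it is not one of the deck functions or modes proposed in the paper.

```python
# Toy "deck-function-like" object built from SHAKE128: keyed, arbitrary-length
# input, arbitrary-length output. Real deck functions also offer incrementality.
import hashlib

def deck(key: bytes, data: bytes, outlen: int) -> bytes:
    """Keyed arbitrary-output-length function (length-prefixed key + data)."""
    return hashlib.shake_128(len(key).to_bytes(2, "big") + key + data).digest(outlen)

def encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Stream encryption: XOR the message with deck(key, nonce-derived input)."""
    stream = deck(key, b"enc" + nonce, len(msg))
    return bytes(m ^ s for m, s in zip(msg, stream))

decrypt = encrypt  # XOR stream: decryption is the same operation
```

Session support in the paper's modes corresponds to feeding the running transcript back into the deck function's input, which the incrementality property makes cheap.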
Malik Imran, Felipe Almeida, Andrea Basso, Sujoy Sinha Roy, Samuel Pagliarini
ePrint Report
Quantum computers will break cryptographic primitives that are based on integer factorization and discrete logarithm problems. SABER is a key agreement scheme based on the Learning With Rounding problem that is quantum-safe, i.e., resistant to quantum computer attacks. This article presents a high-speed silicon implementation of SABER in a 65nm technology as an Application Specific Integrated Circuit. The chip measures 1$mm^2$ in size and can operate at a maximum frequency of 715$MHz$ at a nominal supply voltage of 1.2V. Our chip takes 10$\mu s$, 9.9$\mu s$ and 13$\mu s$ for the computation of key generation, encapsulation, and decapsulation operations of SABER. The average power consumption of the chip is 153.6$mW$. Physical measurements reveal that our design is 8.96x (for key generation), 11.80x (for encapsulation), and 11.23x (for decapsulation) faster than the best known silicon-proven SABER implementation.
Diego Aranha, Chuanwei Lin, Claudio Orlandi, Mark Simkin
ePrint Report
Private set-intersection (PSI) is one of the most practically relevant special-purpose secure multiparty computation tasks, as it is motivated by many real-world applications. In this paper we present a new private set-intersection protocol which is laconic, meaning that the protocol only has two rounds and that the first message is independent of the set sizes. Laconic PSI can be useful in applications where servers with large sets would like to learn the intersection of their set with smaller sets owned by resource-constrained clients, and where multiple rounds of interaction are not possible.

Previously, practically relevant laconic PSI protocols were only known from factoring-type assumptions. The contributions of this work are twofold: 1) We present the first laconic PSI protocol based on assumptions over pairing-friendly elliptic curves; and 2) We provide the first empirical evaluation of any laconic PSI protocol by carefully implementing and optimising both our protocol and previous ones. Our experimental results show that our protocol outperforms prior laconic PSI protocols.
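The *functionality* being computed can be pinned down with an insecure hash-then-compare baseline: it makes the two-round shape concrete but, unlike the paper's protocol, it leaks hashes of all client elements, offers no cryptographic privacy, and its first message is not independent of the set size (so it is not laconic).

```python
# Insecure PSI baseline, for fixing ideas only: the client sends hashes of its
# elements, the server replies with the intersection. NOT private, NOT laconic.
import hashlib

def client_msg(client_set):
    """Round 1: client hashes its elements."""
    return {hashlib.sha256(x.encode()).hexdigest() for x in client_set}

def server_respond(server_set, client_hashes):
    """Round 2: server returns elements whose hashes appear in the message."""
    return {x for x in server_set
            if hashlib.sha256(x.encode()).hexdigest() in client_hashes}
```

A laconic protocol replaces the first message with a short commitment to the server's (large) set, which is exactly what the pairing-based construction achieves.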
Marzio Mula, Nadir Murru, Federico Pintore
ePrint Report
We consider the problem of uniformly sampling supersingular elliptic curves over finite fields of cryptographic size (SRS problem). The currently best-known method combines the reduction of a suitable CM $j$-invariant and a random walk over some isogeny graph. Unfortunately, this method is not suitable for cryptographic applications because it leaks too much information about the endomorphism ring of the generated curve. This fact motivates a stricter version of the SRS problem, requiring that the sampling algorithm gives no extra information about the endomorphism ring of the output curve (cSRS problem). The known cSRS algorithms work only for small finite fields, since they involve the computation of polynomials of large degree. In this work we formally define the SRS and cSRS problems, we discuss the relevance of cSRS for cryptographic applications, and we provide a self-contained survey of the known approaches to both problems. Afterwards, we describe and analyse some alternative techniques, based either on the Hasse invariant or on division polynomials, and we explain why these techniques do not readily lead to efficient cSRS algorithms.
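For tiny fields, supersingularity can be checked by brute force: for a prime $p > 3$, a curve $E/\mathbb{F}_p$ is supersingular iff $\#E(\mathbb{F}_p) = p + 1$ (trace of Frobenius zero). The sketch below is only feasible for toy-sized $p$; the paper concerns fields of cryptographic size, where this is exactly what one cannot do.

```python
# Brute-force supersingularity check over a small prime field F_p, p > 3.
def curve_order(a, b, p):
    """Count points on y^2 = x^3 + a*x + b over F_p (including infinity)."""
    sqrt_count = {}                       # value -> number of square roots mod p
    for y in range(p):
        sqrt_count[y * y % p] = sqrt_count.get(y * y % p, 0) + 1
    return 1 + sum(sqrt_count.get((x**3 + a * x + b) % p, 0) for x in range(p))

def is_supersingular(a, b, p):
    """E is supersingular over F_p (p > 3 prime) iff #E(F_p) = p + 1."""
    return curve_order(a, b, p) == p + 1
```

For example, $y^2 = x^3 + 1$ is supersingular over $\mathbb{F}_5$ (since $5 \equiv 2 \bmod 3$) and $y^2 = x^3 + x$ is supersingular over $\mathbb{F}_7$ (since $7 \equiv 3 \bmod 4$).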
Jungmin Park, N. Nalla Anandakumar, Dipayan Saha, Dhwani Mehta, Nitin Pundir, Fahim Rahman, Farimah Farahmandi, Mark M. Tehranipoor
ePrint Report
Research in post-quantum cryptography (PQC) aims to develop cryptographic algorithms that can withstand classical and quantum attacks. The recent advance in the PQC field has gradually switched from the theory to the implementation of cryptographic algorithms on hardware platforms. In addition, the PQC standardization process of the National Institute of Standards and Technology (NIST) is currently in its third round. It specifies ease of protection against side-channel analysis (SCA) as an essential selection criterion. Following this trend, in this paper, we evaluate side-channel leakages of existing PQC implementations using PQC-SEP, a completely automated side-channel evaluation platform, at both pre- and post-silicon levels. It automatically estimates the amount of side-channel leakage in the power profile of a PQC design at early design stages, i.e., RTL, gate level, and physical layout level. It also efficiently validates side-channel leakages at the post-silicon level against artificial intelligence (AI) based SCA models and traditional SCA models. Further, we delineate challenges and approaches for future research directions.
Fuchun Guo, Willy Susilo
ePrint Report
Unique signatures are digital signatures with exactly one unique and valid signature for each message. The security reduction for most unique signatures has a natural reduction loss (in the security model of existential unforgeability against chosen-message attacks, namely EUF-CMA, under a non-interactive hardness assumption). In Crypto 2017, Guo {\it et al.} proposed a particular chain-based unique signature scheme where each unique signature is composed of $n$ BLS signatures computed sequentially like a blockchain. Under the computational Diffie-Hellman assumption, their reduction loss is $n\cdot q_H^{1/n}$ for $q_H$ hash queries, and it is logarithmically tight when $n=\log{q_H}$. However, it was previously unknown whether a better reduction than logarithmic tightness exists for chain-based unique signatures.

We show that the chain-based unique signature scheme proposed by Guo {\it et al.} must have reduction loss $q^{1/n}$ for $q$ signature queries when each unique signature consists of $n$ BLS signatures. We use a meta-reduction to prove this lower bound in the EUF-CMA security model under any non-interactive hardness assumption, and the meta-reduction also applies in the random oracle model. We also give a security reduction with reduction loss $4\cdot q^{1/n}$ for the chain-based unique signature scheme (in the EUF-CMA security model under the CDH assumption). This improves significantly on the previous reduction loss $n\cdot q_H^{1/n}$, which is at best logarithmically tight. The core of our reduction idea is a {\em non-uniform} simulation that is specially invented for the chain-based unique signature construction.
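The chain structure itself is easy to picture: each link signs the previous link together with the message, so the final signature is a sequence of $n$ components. In the sketch below, `sign` is a keyed-hash placeholder standing in for BLS signing; it only illustrates the chaining, not the scheme's algebraic properties.

```python
# Schematic chain-based signing: link i signs (previous link || message || i).
# HMAC-SHA256 is a deterministic stand-in for BLS signing, used here only to
# show the chain structure.
import hashlib
import hmac

def sign(sk: bytes, data: bytes) -> bytes:
    """Placeholder for BLS signing: a deterministic keyed hash."""
    return hmac.new(sk, data, hashlib.sha256).digest()

def chain_sign(sk: bytes, msg: bytes, n: int):
    """Produce the n-component chain signature on msg."""
    sigs, prev = [], b""
    for i in range(n):
        prev = sign(sk, prev + msg + i.to_bytes(4, "big"))
        sigs.append(prev)
    return sigs
```

The reduction-loss results above concern how a security proof can embed a hard problem into such a chain; longer chains (larger $n$) shrink the loss $q^{1/n}$ at the cost of larger signatures.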
Elena Kirshanova, Alexander May
ePrint Report
We consider the McEliece cryptosystem with a binary Goppa code $C \subset \mathbb{F}_2^n$ specified by an irreducible Goppa polynomial $g(x) \in \mathbb{F}_{2^m}[x]$ and Goppa points $(\alpha_1, \ldots, \alpha_n) \in \mathbb{F}_{2^m}^n$. Since $g(x)$ together with the Goppa points allow for efficient decoding, these parameters form McEliece secret keys. Such a Goppa code $C$ is an $(n-tm)$-dimensional subspace of $\mathbb{F}_2^n$, and therefore $C$ has co-dimension $tm$. For typical McEliece instantiations we have $tm \approx \frac n 4$.

We show that knowledge of more than $tm$ entries of the Goppa point vector $(\alpha_1, \ldots, \alpha_n)$ allows one to recover the Goppa polynomial $g(x)$ and the remaining entries in polynomial time. Hence, in the case $tm \approx \frac n 4$, roughly a fourth of a McEliece secret key suffices to recover the full key efficiently.

Let us give some illustrative numerical examples. For Classic McEliece with $(n,t,m)=(3488,64,12)$, on input $64\cdot 12+1=769$ Goppa points, we recover the remaining $3488-769=2719$ Goppa points in $\mathbb{F}_{2^{12}}$ and the degree-$64$ Goppa polynomial $g(x) \in \mathbb{F}_{2^{12}}[x]$ in $1$ minute.

For Classic McEliece with $(n,t,m)=(8192,128,13)$, on input $128\cdot 13+1=1665$ Goppa points, we recover the remaining $8192-1665=6529$ Goppa points in $\mathbb{F}_{2^{13}}$ and the degree-$128$ Goppa polynomial $g(x) \in \mathbb{F}_{2^{13}}[x]$ in $5$ minutes.

Our results also extend to the case of erroneous Goppa points, but in this case our algorithms are no longer polynomial time.
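The counting fact at the heart of such a recovery can be seen in a toy setting: a degree-$t$ polynomial is determined by $t+1$ evaluations, after which every remaining value follows. The sketch below interpolates over a small prime field; the actual attack works over $\mathbb{F}_{2^m}$ and must additionally exploit the Goppa structure, so plain interpolation alone does not suffice there.

```python
# Toy analogue over a prime field: from t+1 evaluations of an unknown
# degree-t polynomial, evaluate it anywhere via Lagrange interpolation.
def interpolate_eval(points, x, p):
    """Evaluate at x the unique degree <= len(points)-1 polynomial through
    `points` (pairs (xi, yi)) over F_p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total
```

Once the polynomial is pinned down by enough known points, all "remaining entries" are computable, mirroring how the attack extends a partial Goppa point vector to the full secret key.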