IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
12 October 2025
Willy Quach, LaKyah Tyner, Daniel Wichs
Non-interactive zero-knowledge (NIZK) proofs tend to be randomized and there are many possible proofs for any fixed NP statement. Can we have NIZKs with only a single unique valid proof per statement? Such NIZKs are known under strong cryptographic assumptions (indistinguishability obfuscation), and are conversely known to require strong cryptographic assumptions (witness encryption). In this work, following Lepinski, Micali, and shelat (TCC '05), we consider the following relaxed notion of unique NIZKs (UNIZKs):
- We only require (computationally) unique proofs for NP statements with a (computationally) unique witness; an adversary that can produce two distinct proofs must also know two distinct witnesses.
- We consider NIZKs with prover setup, where a potentially malicious prover initially publishes a public key $\mathsf{pk}$ and keeps a corresponding secret key $\mathsf{sk}$, which it uses to produce arbitrarily many NIZK proofs $\pi$ in the future. While the public key $\mathsf{pk}$ is not required to be unique, once it is fixed, all the subsequent proofs $\pi$ that the prover can produce should be unique.
We show that both of these relaxations are needed to avoid witness encryption. Prior work constructed such UNIZKs under the quadratic residuosity assumption, and it remained an open problem to do so under any other assumptions. Here, we give a new construction of UNIZKs under the learning with errors (LWE) assumption. We also identify and fix a subtle circularity issue in the prior work.
UNIZKs are a non-interactive version of steganography-free zero-knowledge of Abdolmaleki et al. (TCC '22). As an application of UNIZKs, we get a general steganography detection mechanism that can passively monitor arbitrary functionalities to detect steganographic leakage.
Tianyu Zhang, Yupeng Ouyang, Yupeng Zhang
In recent years, numerous new and more efficient constructions of zero-knowledge succinct non-interactive argument of knowledge (zkSNARK) have been proposed, motivated by their growing practical applications. However, in most schemes, when the witness is changed, the prover has to recompute the proof from scratch even if the new witness is close to the old one. This is inefficient for applications where proofs are generated for dynamically changing witnesses with small changes.
In this paper, we introduce DYNARK, a dynamic zkSNARK scheme that can update the proof in sublinear time when the change of the witness is small. DYNARK is built on top of the seminal zkSNARK protocol of Groth (2016). In the semi-dynamic setting, for an R1CS of size $n$, after a preprocessing of $O(n\log n)$ group operations on the original witness, it only takes $O(d)$ group operations and $O(d\log^2 d)$ field operations to update the proof for a new witness with distance $d$ from the original witness, which is nearly optimal. In the fully-dynamic setting, the update time of DYNARK is $O(d\sqrt{n\log n})$ group operations and $O(d\log^2 d)$ field operations. Both the proof size and the verifier time are $O(1)$, exactly the same as in Groth16. Compared to the scheme in a prior work by Wang et al. (2024), we reduce the proof size from $O(\sqrt{n})$ to $O(1)$ without relying on pairing product arguments or another zkSNARK, and the update time and the verifier time of DYNARK are faster in practice.
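The $O(d)$ semi-dynamic update can be pictured with a much simpler object than a Groth16 proof. The sketch below is illustrative only, not DYNARK: it ignores Groth16's quotient-polynomial and randomization terms and uses a toy multiplicative group in place of elliptic-curve MSMs, but it shows why re-multiplying only the changed entries costs one group operation per changed witness coordinate.

```python
# Illustrative sketch only: a multi-exponentiation over the witness in a toy
# multiplicative group mod a Mersenne prime stands in for the elliptic-curve
# MSMs inside a Groth16-style prover. All parameters are hypothetical.
import random

P = 2**127 - 1                                              # toy prime modulus
G = [pow(5, random.randrange(1, P), P) for _ in range(8)]   # toy "CRS" bases

def commit(witness):
    """Recompute from scratch: one group operation per witness entry, O(n)."""
    c = 1
    for w, g in zip(witness, G):
        c = (c * pow(g, w, P)) % P
    return c

def update(c, old_witness, new_witness):
    """Update an existing value with O(d) group operations, where d is the
    number of witness entries that changed."""
    for g, old, new in zip(G, old_witness, new_witness):
        if old != new:
            # multiply in the delta only; exponents live mod the group order P-1
            c = (c * pow(g, (new - old) % (P - 1), P)) % P
    return c

w0 = [random.randrange(P) for _ in range(8)]
w1 = list(w0)
w1[3] += 1                                       # new witness at distance d = 1
assert update(commit(w0), w0, w1) == commit(w1)  # incremental update = recompute
```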
Experimental results show that for $n=2^{20}$, after a one-time preprocessing of 74.3 seconds, it merely takes 3 milliseconds to update the proof in our semi-dynamic zkSNARK for $d=1$, and 60 milliseconds to update the proof in our fully-dynamic zkSNARK. These are 1433$\times$ and 73$\times$ faster than Groth16, respectively. The proof size is 192 bytes and the verifier time is 4.4 milliseconds. The system is fully compatible with any existing deployment of Groth16 without changing the trusted setup, the proof and the verification algorithm.
Carlo Brunetta, Amit Chaudhary, Stefano Galatolo, Massimiliano Sala
In this short paper we present an approach to computable contracts, where all roles in a computation may be outsourced, from the servers performing computations, to those providing input, to those performing verifications (on input and on output), including all related communications. Varying levels of confidentiality can be chosen, both on data and calculations.
While the largest part of the computational and communication effort is performed off-chain, our contracts require a specialized underlying blockchain, where they are encoded as transactions, to achieve decentralized handling and thus enforce their correct execution via a combination of cryptographic techniques and economic security.
Our delegation architecture allows for the execution of very complex collaborative tasks, such as the deployment of an AI marketplace.
Vladimir Sarde, Nicolas Debande
MQOM is one of the fourteen remaining candidates in the second round of the NIST post-quantum signature standardization process. Introduced in 2023, MQOM instantiates the Multi-Party Computation in the Head (MPCitH) paradigm over the well-established hard problem of solving Multivariate Quadratic (MQ) equations. In this paper, we present the first fault attacks on MQOM targeting the MQ evaluation phase, which is a central component of the algorithm. We introduce four differential fault attacks and demonstrate their effectiveness against both unprotected and masked implementations. The first two target the secret key using a random fault model, making them particularly realistic and practical. With as few as one or two injected faults, depending on the variant, the entire secret key can be recovered through linear algebra. The other two attacks exploit faults on the coefficients of the MQ system directly. Our results highlight that the MQ evaluation, despite not being identified as a sensitive component until now, can be exploited using just a few fault injections.
Paul Gerhart, Davide Li Calsi, Luigi Russo, Dominique Schröder
We introduce Bounded-Equivocable PRFs, a new variant of pseudorandom functions. They combine standard pseudorandomness with a bounded form of programmability. In our model, an adversary may issue an arbitrary number of queries that remain indistinguishable from random. Bounded equivocability ensures that responses can be programmed consistently with a later-revealed key, up to a fixed bound q. This relaxation avoids known impossibility results, which preclude polynomial unbounded equivocability in the standard model, while preserving the programmability required for applications.
We present standard-model constructions of bounded-equivocable PRFs under the DDH and LWE assumptions, and we show how to make these constructions verifiable. Prior SIM-AC style primitives could not achieve verifiability since their programmability relied on embedding the secret key into the random oracle.
We demonstrate applications to (i) adaptively secure private-key encryption, (ii) two-round threshold Schnorr signatures secure against adaptive corruptions, and (iii) leader election in Proof of Stake blockchains. Together, these results establish bounded-equivocable PRFs as a practical primitive that achieves programmability with verifiability in the standard model, and enables applications previously out of reach.
Lorenzo Grassi, Dmitry Khovratovich, Katharina Koschatko, Christian Rechberger, Markus Schofnegger, Verena Schröppel
We present Poseidon2b, a version of Poseidon2 defined over binary extension fields. It is specifically designed to inherit many of the circuit-friendly properties of its prime-field version, and to be used together with binary extension field proving systems such as Binius. Benchmarking demonstrates its merits in terms of proof size, proving time, and especially verification time.
We also revisit recent attacks on Poseidon and Poseidon2 and discuss their applicability in the binary extension field setting, in addition to analyzing attack vectors that were not applicable in the prime-field setting. In particular, we place special focus on algebraic cryptanalysis and subspace trails, techniques which resulted in attacks on initial versions of Poseidon defined over binary extension fields.
Deokhwa Hong, Yongwoo Lee
FHEW-like homomorphic encryption (HE) schemes, introduced by Ducas and Micciancio (Eurocrypt 2015), represent the most efficient family of HE schemes in terms of both latency and key size.
However, their bootstrapping noise is highly sensitive to parameter selection, leaving only a sparse set of feasible parameters.
Because bootstrapping noise directly affects security and performance, existing approaches tend to choose parameters that drive noise excessively low—resulting in large key sizes and high latency.
In this paper, we propose a new bootstrapping modification that permits an almost continuous spectrum of parameter choices.
To the best of our knowledge, this is the first practical HE scheme for which the evaluation failure probability is precisely determined without requiring any information about the message distribution.
We further show that, under our method, the parameter‐optimization task reduces to a generalized knapsack problem solvable in polynomial time.
As a result, the traditionally cumbersome process of selecting parameters for FHEW‐like schemes becomes tractable.
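To give a flavor of what such a reduction looks like, the sketch below solves a fractional knapsack greedily in polynomial time. This is not the paper's generalized formulation; the items, weights, and budget are hypothetical placeholders standing in for candidate parameter "upgrades" constrained by a total noise budget.

```python
# Minimal sketch of the *flavor* of a knapsack-style parameter optimization,
# not the paper's generalized formulation. Solvable greedily in O(n log n).

def fractional_knapsack(items, budget):
    """items: list of (benefit, weight); budget: total capacity.
    Returns the maximum benefit when items may be taken fractionally."""
    total = 0.0
    # Take items in decreasing benefit-per-unit-weight order.
    for benefit, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if budget <= 0:
            break
        take = min(weight, budget)          # take as much of this item as fits
        total += benefit * (take / weight)
        budget -= take
    return total

# Toy instance: three hypothetical parameter "upgrades" as (benefit, noise cost).
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], budget=50))  # -> 240.0
```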
Experimental results show that our method improves bootstrapping runtime by approximately 17% and reduces key size by about 45%.
Rutchathon Chairattana-Apirom, Stefano Tessaro, Nirvan Tyagi
Privacy-preserving advertisement attribution allows websites selling goods to learn statistics on which advertisement campaigns can be attributed to converting sales. Existing proposals rely on users to locally store advertisement history on their browser and report attribution measurements to an aggregation service (instantiated with multiparty computation over non-colluding servers). The service computes and reveals the aggregate statistic. The service hides individual user contributions, but it does not guarantee integrity against misbehaving users that may submit fraudulent measurements.
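A minimal sketch of this kind of secret-shared aggregation is given below: two non-colluding servers, a toy modulus, and no attestation layer, which is precisely the integrity gap this paper addresses. All values are hypothetical.

```python
# Toy illustration of additive-sharing aggregation across two non-colluding
# servers (no attestation layer); all values are hypothetical.
import secrets

P = 2**61 - 1   # toy prime modulus

def share(measurement: int):
    """Split one user's measurement into two additive shares mod P."""
    s0 = secrets.randbelow(P)
    return s0, (measurement - s0) % P

# Each user secret-shares a 0/1 conversion measurement between the two servers.
measurements = [1, 0, 1, 1]
shares = [share(m) for m in measurements]

# Each server sums the shares it received; neither learns any individual value.
server0 = sum(s0 for s0, _ in shares) % P
server1 = sum(s1 for _, s1 in shares) % P

# Only the combination of both partial sums reveals the aggregate statistic.
print((server0 + server1) % P)   # -> 3
```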
Our work proposes a new cryptographic primitive, "secret share attestation", in which secret shares input into a multiparty computation protocol are accompanied by an attestation of integrity by a third party: advertisers include signature attestations when serving ads that are later included in contributed measurements. We propose two constructions based on the standards-track BBS signatures and efficient signatures over equivalence classes, respectively. We implement and evaluate our protocols in the context of the advertising application to demonstrate their practicality.
11 October 2025
Jung Hee Cheon, Daehyun Jang
Verifiable Homomorphic Encryption (VHE) is a cryptographic technique that integrates Homomorphic Encryption (HE) with Verifiable Computation (VC). It serves as a crucial technology for ensuring both privacy and integrity in outsourced computation, where a client sends input ciphertexts $\mathsf{ct}$ and a function $f$ to a server and verifies the correctness of the evaluation upon receiving the evaluation result $f(\mathsf{ct})$ from the server.
Chatel et al. (CCS'24) introduced two VHE schemes: Replication Encoding (REP) and Polynomial Encoding (PE). A similar approach to REP was used by Albrecht et al. (EUROCRYPT'24) to develop a Verifiable Oblivious PRF scheme (vADDG).
A key approach in these schemes is to embed specific secret information within ciphertexts and to use it to verify homomorphic evaluations.
This paper presents efficient forgery attacks against the verifiability guarantees of these VHE schemes. We introduce two attack strategies. The first targets REP and vADDG, extracting secret information in encrypted form from input ciphertexts and leveraging it to forge output ciphertexts without being detected by the verification algorithm. The second targets PE, exploiting its secret embedding structure to forge output ciphertexts that remain valid on input values for verification, yet violate the verifiability property.
Our forgery attack on vADDG demonstrates that the proposed 80-bit security parameters provide at most 10 bits of concrete security. Our attack on REP and PE succeeds with probability 1 and runs in linear time when fully homomorphic encryption is used.
Prabhanjan Ananth, John Bostanci, Aditya Gulati, Yao-Ting Lin
The gluing theorem for random unitaries [Schuster, Haferkamp, Huang, QIP 2025] has found numerous applications, including designing low-depth random unitaries [Schuster, Haferkamp, Huang, QIP 2025], random unitaries in $\mathsf{QAC0}$ [Foxman, Parham, Vasconcelos, Yuen'25], and generically shortening the key length of pseudorandom unitaries [Ananth, Bostanci, Gulati, Lin EUROCRYPT'25]. We present an alternative to the gluing lemma of [Schuster, Haferkamp, Huang, QIP 2025] for combining Haar random unitaries that is secure against adversaries with inverse query access to the joined unitary. As a consequence, we show for the first time that strong pseudorandom unitaries can generically have their length extended, and can be constructed using only $O(n^{1/c})$ bits of randomness, for any constant $c$, if any family of strong pseudorandom unitaries exists.
Frank Denis
Format-preserving encryption (FPE) enables encryption while maintaining syntactic properties such as character sets. The current NIST standard FF1 uses multi-round Feistel networks that sacrifice performance for flexibility, while FF3-1 was withdrawn in 2025 following successful cryptanalytic attacks. FAST, proposed as a faster alternative, has not been widely implemented due to its complexity, leaving limited practical alternatives.
We present HCTR2-FP and HCTR3-FP, format-preserving adaptations of the HCTR2 and HCTR3 wide-block tweakable ciphers.
These variants preserve the single-pass Hash-Encrypt-Hash structure while operating on arbitrary radix domains through base-radix encoding and modular arithmetic. The constructions are simple to implement and analyze, and benchmarks demonstrate significant speedup over FF1.
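As a rough illustration of the base-radix encoding idea (not the HCTR2-FP/HCTR3-FP construction itself, and with an intentionally insecure placeholder in place of the keyed wide-block cipher), a string over an arbitrary alphabet can be mapped to an integer modulo radix^length, transformed, and mapped back without leaving the format:

```python
# Minimal sketch of the base-radix encoding step (not the full HCTR2-FP/HCTR3-FP
# construction): a string over an arbitrary alphabet is mapped to an integer,
# processed, and mapped back into the same format.
ALPHABET = "0123456789"          # e.g. decimal digits; any radix works
RADIX = len(ALPHABET)

def encode(s: str) -> int:
    """Interpret the string as a base-RADIX integer."""
    n = 0
    for ch in s:
        n = n * RADIX + ALPHABET.index(ch)
    return n

def decode(n: int, length: int) -> str:
    """Map an integer back to a fixed-length string over the alphabet."""
    out = []
    for _ in range(length):
        n, d = divmod(n, RADIX)
        out.append(ALPHABET[d])
    return "".join(reversed(out))

pan = "4111111111111111"
x = encode(pan)
# A format-preserving cipher would apply its keyed wide-block permutation to x
# modulo RADIX**len(pan); here a toy, *insecure* placeholder permutation is used.
y = (x + 123456789) % RADIX**len(pan)
ct = decode(y, len(pan))
assert encode(ct) == y and len(ct) == len(pan) and set(ct) <= set(ALPHABET)
print(ct)
```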
Gregory D. Kahanamoku-Meyer, Seyoon Ragavan, Katherine Van Kirk
"Pebble games," an abstraction from classical reversible computing, have found use in the design of quantum circuits for inherently sequential tasks. Gidney showed that allowing Hadamard basis measurements during pebble games can dramatically improve costs---an extension termed "spooky pebble games" because the measurements leave temporary phase errors called ghosts. In this work, we define and study parallel spooky pebble games. Previous work by Blocki, Holman, and Lee (TCC 2022) and Gidney studied the benefits offered by either parallelism or spookiness individually; here we show that these resources can yield impressive gains when used together. First, we show by construction that a line graph of length $\ell$ can be pebbled in depth $2\ell$ (which is exactly optimal) using space $\leq 2.47\log \ell$. Then, to explore pebbling schemes using even less space, we use a highly optimized $A^*$ search implemented in Julia to find the lowest-depth parallel spooky pebbling possible for a range of concrete line graph lengths $\ell$ given a constant number of pebbles $s$.
We show that these techniques can be applied to Regev's factoring algorithm (Journal of the ACM 2025) to significantly reduce the cost of its arithmetic. For example, we find that 4096-bit integers $N$ can be factored in multiplication depth 193, which outperforms the 680 required by previous variants of Regev's algorithm and the 444 reported by Ekerå and Gärtner for Shor's algorithm (IACR Communications in Cryptology 2025). While space-optimized implementations of Shor's algorithm likely remain the best candidates for the first quantum factorization of large integers, our results show that Regev's algorithm may have practical importance in the future, especially given the possibility of further optimization. Finally, we believe our pebbling techniques will find applications in quantum cryptanalysis beyond integer factorization.
Michael Klooß, Russell W. F. Lai, Michael Reichle
Blind signatures are an important tool for privacy-preserving applications with a long history dating back to Chaum's seminal work in Crypto'82. In this work, we focus on the Fiat-Shamir paradigm, i.e., blind signatures based on $\Sigma$-protocols compiled via Fiat-Shamir, in the random oracle model. We resolve the following open problems:
- We give the first lattice-based blind signature that is concurrently secure based on the Fiat-Shamir paradigm.
- We give the first pairing-free blind signature that is concurrently secure under the discrete logarithm assumption (without the algebraic group model).
On a technical level, our work is inspired by the recent proofs of inequality technique (Klooß and Reichle, Crypto'25). This technique relies on statistical puncturing of the verification key. We explore the technique in the computational regime and develop new proof and design techniques to tackle the challenges encountered along the way.
Sönke Jendral, Elena Dubrova, Qian Guo, Thomas Johansson
Recognising the need for PQC signature schemes with different size and performance trade-offs than the ML-DSA and SLH-DSA standards, in 2023 NIST launched a competition for additional signature algorithms. Among the current candidates in this competition is CROSS, a code-based scheme derived from the syndrome-decoding problem and suitable for memory-constrained devices.
This paper presents a fault attack on CROSS that recovers the secret key by flipping one or more bits in the scheme’s public parity-check matrix. Unlike previous PQC fault attacks that typically rely on precisely controlled fault injections, which is often an unrealistic assumption, our approach exploits bit flips with unknown position and value, resembling the Rowhammer fault model. The attack builds upon the correction-based methodology introduced for Dilithium (Euro S&P’22; CHES’24) and exploits structural properties of CROSS to substantially relax attacker requirements. We demonstrate the attack on an ARM Cortex-M4 processor using voltage fault injection. We further show that prior work on partial key exposure attacks (CRYPTO'22) can be extended to CROSS under non-trivial erasure rates, reducing the attack complexity. The attack remains effective in the presence of memory-integrity protection mechanisms such as error-correcting codes. Finally, we propose countermeasures for hardening CROSS implementations against physical attacks.
Sonia Belaïd, Gaëtan Cassiers
Cryptographic implementations are inherently vulnerable to side-channel attacks, which exploit physical leakages such as power consumption. Masking has become the most widely adopted countermeasure to mitigate these threats, as it randomizes intermediate values and makes the leakage less exploitable. Yet, a central challenge remains: how to rigorously assess the concrete security level of masked implementations.
To tackle this issue, the random probing model has emerged as a powerful abstraction. It formalizes leakage as random probes in the circuit; importantly, security in the noisy leakage model, which closely reflects the behavior of real embedded devices, reduces to security in the random probing model. Hence, proving security in the random probing model provides sound guarantees of practical resistance against side-channel attacks.
Yet, the current state of the art on random probing compilers and verifiers suffers from a clear limitation: scalable approaches yield prohibitively large and inefficient circuits, while tighter methods do not scale to practical circuit sizes. In this work, we bridge this gap by introducing a new methodology that directly estimates the random probing security of large circuits through Monte Carlo sampling, combined with a pruning strategy that drastically reduces the sampling space.
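As a toy illustration of Monte Carlo estimation in the random probing model (this is not PERSEUS, and it handles only the trivial "circuit" consisting of a single additive sharing; all parameters are hypothetical):

```python
# Toy Monte Carlo estimate in the random probing model (not PERSEUS itself):
# a secret is split into n additive shares, each wire leaks independently with
# probability p, and the "failure" event is that all n shares leak, in which
# case the adversary reconstructs the secret. Parameters are illustrative.
import random

def estimate_failure(n_shares: int, p: float, trials: int = 200_000) -> float:
    failures = 0
    for _ in range(trials):
        leaked = [random.random() < p for _ in range(n_shares)]
        if all(leaked):               # every share leaked -> secret recoverable
            failures += 1
    return failures / trials

p, n = 0.1, 4
print(estimate_failure(n, p))         # Monte Carlo estimate, close to p**n = 1e-4
```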
We implement our approach in a new tool, PERSEUS, which supports both gate and wire leakage models. Our experiments demonstrate that PERSEUS can efficiently evaluate masked implementations of AES-128 with $n=8$ shares, achieving security levels beyond 32 bits, thereby significantly advancing the state of the art in practical verification of side-channel countermeasures.
Valerie Gilchrist, Laurane Marco, Christophe Petit, Gang Tang
The Linear Code Equivalence (LCE) problem and the Matrix Code Equivalence (MCE) problem are two examples of code-based hard problems that have gained attention as candidates for use in post-quantum cryptography.
They are straightforward to implement, can be viewed as group actions, and offer a good trade-off between compactness and performance in the realm of post-quantum group actions.
With the community gaining confidence in the security of these problems, new variants have been introduced to achieve particular functionalities in advanced protocols or to improve efficiency. A natural question is then whether these problem variants are as secure as the original ones.
In this work, we consider three problem variants of LCE or MCE. We first consider a variant based on LCE and reduce it to the original LCE assumption. This increases confidence in the security of a blind signature scheme proposed by Duong, Khuc, Qiao, Susilo and Zhang that relies on it. Second, we analyse an MCE variant, MIMCE, proposed in the context of another blind signature scheme by Kutcha, Legrow and Persichetti, and show that the proposed parameters are not sufficient to reach the claimed bit security. Finally, we consider a multi-sample version of MIMCE which we solve in polynomial time.
Gaëtan Cassiers
Hardware Private Circuits (HPC) provide a compositional framework for designing masked hardware circuits, ensuring security against side-channel attacks in the robust probing model with glitches and transitions. There is, however, a gap between the simplified circuit models and real-world implementations. In particular, some parts of real circuits are ignored in the simplified models. Further, designing implementations such that they have a simple static correspondence to the theory can be challenging (e.g., around the requirement of splitting the computation into gadgets), or even infeasible (e.g., when hardware elements like memory are used for different purposes during execution). In this work, we close this gap by introducing a new model for hardware circuits that includes the control and randomness distribution logic. It also allows some computation on the shares to be performed outside the delimited gadgets. We introduce a new open-source compositional masking verification tool, MATCHI. Thanks to the formalization of a composition framework for the new circuit model, we prove that any circuit successfully verified by MATCHI is secure in the threshold probing model with glitches and transitions.
Noah Greene, Britta Hale
The development of quantum computing technology poses a serious and credible threat to modern public-key cryptography, which is a pillar of secure communications. Post-quantum cryptographic (PQC) algorithms can protect against this threat, but key establishment protocols supporting PQC algorithms pose non-trivial overhead costs. Prior proposals have pointed to the use of strategic traditional/PQC protocol combinations as a means of balancing performance and security, namely as an amortization of PQC overhead. This work provides the first benchmarking of this method within the context of the Messaging Layer Security (MLS) protocol. Comparative metrics include group size, CPU cycles, bytes, and runtime. The results show substantial overhead savings across the board when compared to simple post-quantum cipher suite use, and a marginal increase over traditional cipher suite performance when amortized. At small group sizes such as 1-to-1 channels, the method performs comparably to MLS using a traditional cipher suite. This work shows the viability of deploying PQC solutions in constrained settings and achieving PQC security efficiently.
Prabhanjan Ananth, Amit Behera, Zikuan Huang, Fuyuki Kitagawa, Takashi Yamakawa
Quantum copy-protection is a functionality-preserving compiler that transforms a classical program into an unclonable quantum program. This primitive has emerged as a foundational topic in quantum cryptography, with significant recent developments. However, characterizing the functionalities that can be copy-protected is still an active and ongoing research direction.
Assuming the existence of indistinguishability obfuscation and learning with errors, we show the existence of copy-protection for a variety of classes of functionalities, including puncturable cryptographic functionalities and subclasses of evasive functionalities. This strictly improves upon prior works, which were either based on the existence of heuristic assumptions [Ananth and Behera CRYPTO'24] or were based on the classical oracle model [Coladangelo and Gunn STOC'24]. Moreover, our constructions satisfy a new and much stronger security definition compared to the ones studied in the prior works. To design copy-protection, we follow the blueprint of constructing copy-protection from unclonable puncturable obfuscation (UPO) [Ananth and Behera CRYPTO'24] and present a new construction of UPO by leveraging the recently introduced techniques from [Kitagawa and Yamakawa TCC'25].
Thomas Debris-Alazard, Philippe Gaborit, Romaric Neveu, Olivier Ruatta
Introduced in 2003 and 2005, the schemes of Alekhnovich and Regev were the first public-key encryption schemes whose security is based solely on the average-case hardness of decoding random linear codes and of LWE, respectively, without other security assumptions. Such security guarantees made them very popular, and they are at the origin of the now-standardized HQC and Kyber.
We present an adaptation of the Alekhnovich and Regev encryption schemes whose security is based solely on the hardness of a slight variation of MinRank, the so-called stationary-MinRank problem. We achieve this strong security guarantee by showing that stationary-MinRank admits a search-to-decision reduction. Our scheme therefore gives a partial answer to the long-standing open question of building an encryption scheme whose security relies solely on the hardness of MinRank.
Finally, after a thorough security analysis, we show that our scheme is practical and competitive with other encryption schemes admitting such strong security guarantees. Our scheme is slightly less efficient than FrodoKEM, but much more efficient than the original Alekhnovich and Regev schemes, with possible improvements from adding more structure, in the same way as HQC and Kyber.