IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
15 November 2021
SoK: Password-Authenticated Key Exchange -- Theory, Practice, Standardization and Real-World Lessons
Feng Hao, Paul C. van Oorschot
Password-authenticated key exchange (PAKE) is a major area of cryptographic protocol research and practice. Many PAKE proposals have emerged in the 30 years following the original 1992 Encrypted Key Exchange (EKE), some accompanied by new theoretical models to support rigorous analysis. To reduce confusion and encourage practical development, major standards bodies including IEEE, ISO/IEC and the IETF have worked towards standardizing PAKE schemes, with mixed results. Challenges have included contrasts between heuristic protocols and schemes with security proofs, and subtleties in the assumptions of such proofs rendering some schemes unsuitable for practice. Despite initial difficulty identifying suitable use cases, the past decade has seen PAKE adoption in numerous large-scale applications such as Wi-Fi, Apple's iCloud, browser synchronization, e-passports, and the Thread network protocol for Internet of Things devices. Given this backdrop, we consolidate three decades of knowledge on PAKE protocols, integrating theory, practice, standardization and real-world experience. We provide a thorough and systematic review of the field, a summary of the state-of-the-art, a taxonomy to categorize existing protocols, and a comparative analysis of protocol performance using representative schemes from each taxonomy category. We also review real-world applications, summarize lessons learned, and highlight open research problems related to PAKE protocols.
Luca Notarnicola, Gabor Wiese
We consider the problem of revealing a small hidden lattice from the knowledge of a low-rank sublattice modulo a given sufficiently large integer – the Hidden Lattice Problem. A central motivation for studying this problem is the Hidden Subset Sum Problem, whose hardness is essentially determined by that of the hidden lattice problem. We describe and compare two algorithms for the hidden lattice problem: we first adapt the algorithm by Nguyen and Stern for the hidden subset sum problem, based on orthogonal lattices, and then propose a new variant, which we show to be related to it by duality in lattice theory. Following heuristic, rigorous and practical analyses, we find that our new algorithm brings some advantages and offers a competitive alternative to existing algorithms for problems of cryptographic interest, such as Approximate Common Divisor Problems and the Hidden Subset Sum Problem. Finally, we study variations of the problem and highlight its relevance to cryptanalysis.
Erik Anderson, Melissa Chase, F. Betul Durak, Esha Ghosh, Kim Laine, Chenkai Weng
We introduce a secure histogram aggregation method suitable for many applications such as ad conversion measurement. Our solution relies on three-party computation with linear complexity and guarantees differentially private histogram outputs. We formally analyse the security and privacy of our method and compare it with existing proposals. Finally, we conclude our report with a performance analysis.
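For illustration, here is a minimal sketch (not the paper's protocol; all names and parameters are assumptions) of the two ingredients the abstract combines: additively secret-sharing per-user histogram contributions across three servers, and perturbing the revealed aggregate with Laplace noise for differential privacy. A real protocol would generate the noise inside the computation rather than after the reveal.

```python
# Toy sketch only: 3-party additive sharing of a histogram plus DP noise.
import math, random

MOD = 2**32      # ring for additive secret sharing (assumed)
SCALE = 1.0      # Laplace scale = sensitivity / epsilon, assuming epsilon = 1
BINS = 4

def share(v, n=3):
    """Split an integer into n additive shares modulo MOD."""
    s = [random.randrange(MOD) for _ in range(n - 1)]
    return s + [(v - sum(s)) % MOD]

def laplace():
    """Sample integer-rounded Laplace noise via inverse-CDF."""
    u = random.random() - 0.5
    return int(round(-SCALE * math.copysign(1, u) * math.log(1 - 2 * abs(u))))

# Each client one-hot encodes its bin and sends one share to each server.
clients = [0, 1, 1, 3, 1, 2]
server_sums = [[0] * BINS for _ in range(3)]
for b in clients:
    for i in range(BINS):
        for p, sh in enumerate(share(1 if i == b else 0)):
            server_sums[p][i] = (server_sums[p][i] + sh) % MOD

# Reveal: recombine the shares, then perturb each bin for DP.
hist = [(sum(server_sums[p][i] for p in range(3)) % MOD) + laplace()
        for i in range(BINS)]
print(hist)  # noisy counts around [1, 3, 1, 1]
```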
Kotaro Abe, Makoto Ikeda
Lattice attacks are a threat to (EC)DSA and have been widely used in cryptanalysis. In a lattice attack, a leak of a few nonce bits across multiple signatures is sufficient to recover the secret key. Currently, the BKZ algorithm is frequently used as the lattice reduction algorithm in such attacks, and there are many reports on the conditions for a successful attack. However, experimental attacks using the BKZ algorithm have only reported results for specific key lengths, and it is not clear how the conditions change as the key length changes. In this study, we conducted experiments simulating lattice attacks on P256, P384, and P521 and confirmed that attacks on P256 with a 3-bit nonce leak, P384 with a 4-bit nonce leak, and P521 with a 5-bit nonce leak are feasible. The result for P521 is a new record. We also investigated in detail the reasons the attacks fail and propose a model to estimate the feasibility of lattice attacks using the BKZ algorithm. We believe this model can be used to estimate the effectiveness of lattice attacks as the key length changes.
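As background, the sketch below sets up the textbook hidden-number-problem lattice for biased nonces, assuming the top l bits of every nonce are zero, and reduces it with BKZ via fpylll. This is the standard construction from the literature, not necessarily the exact lattice of the paper, and the function names are illustrative.

```python
# From s = k^{-1}(h + r*d) mod n we get k = u + t*d mod n with
# t = r*s^{-1}, u = h*s^{-1}; leaked zero top bits make each k small.
from fpylll import IntegerMatrix, BKZ

def hnp_lattice(n, sigs, hashes, l):
    """sigs: list of (r, s); hashes: list of h; l: zero top bits of each nonce."""
    m = len(sigs)
    K = n >> l                                  # bound on the nonces
    t = [(r * pow(s, -1, n)) % n for r, s in sigs]
    u = [(h * pow(s, -1, n)) % n for (_, s), h in zip(sigs, hashes)]
    rows = []
    for i in range(m):                          # rows n^2 * e_i (everything is
        row = [0] * (m + 2)                     # scaled by n to stay integral)
        row[i] = n * n
        rows.append(row)
    rows.append([n * ti for ti in t] + [K, 0])  # carries the secret key d
    rows.append([n * ui for ui in u] + [0, n * K])
    return IntegerMatrix.from_matrix(rows), K

def recover_key(n, sigs, hashes, l, block_size=25):
    A, K = hnp_lattice(n, sigs, hashes, l)
    BKZ.reduction(A, BKZ.Param(block_size))
    for i in range(A.nrows):                    # heuristic: look for the
        v = [A[i, j] for j in range(A.ncols)]   # target vector by its last entry
        if abs(v[-1]) == n * K:
            return (abs(v[-2]) // K) % n        # candidate d (or n - d)
    return None
```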
Maria Corte-Real Santos, Craig Costello, Jia Shi
We give a new algorithm for finding an isogeny from a given supersingular elliptic curve $E/\mathbb{F}_{p^2}$ to a subfield elliptic curve $E'/\mathbb{F}_p$, which is the bottleneck step of the Delfs-Galbraith algorithm for the general supersingular isogeny problem. Our core ingredient is a novel method of rapidly determining whether a polynomial $f \in L[X]$ has any roots in a subfield $K \subset L$, while crucially avoiding expensive root-finding algorithms. In the special case when $f=\Phi_{\ell,p}(X,j) \in \mathbb{F}_{p^2}[X]$, i.e. when $f$ is the $\ell$-th modular polynomial evaluated at a supersingular $j$-invariant, this provides a means of efficiently determining whether there is an $\ell$-isogeny connecting the corresponding elliptic curve to a subfield curve. Together with the traditional Delfs-Galbraith walk, inspecting many $\ell$-isogenous neighbours in this way allows us to search through a larger proportion of the supersingular set per unit of time. Though the asymptotic $\tilde{O}(p^{1/2})$ complexity of our improved algorithm remains unchanged from that of the original Delfs-Galbraith algorithm, our theoretical analysis and practical implementation both show a significant reduction in the runtime of the subfield search. This sheds new light on the concrete hardness of the general supersingular isogeny problem, the foundational problem underlying isogeny-based cryptography.
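For context, the classical criterion that such subfield-root tests build on: $f$ has a root in $\mathbb{F}_p$ exactly when $\gcd(f, X^p - X)$ is non-constant, since $X^p - X$ is the product of all monic linear polynomials over $\mathbb{F}_p$. The sketch below implements this test in pure Python over $\mathbb{F}_p$ for simplicity; the paper's setting is $f \in \mathbb{F}_{p^2}[X]$, and its contribution is a much faster test than this textbook one.

```python
# Decide whether f in F_p[X] has a root in F_p without finding the roots.
# Polynomials are coefficient lists, low degree first.

def polydivmod(a, b, p):
    """Quotient and remainder of a by b over F_p."""
    a = a[:]
    binv = pow(b[-1], -1, p)
    q = [0] * max(1, len(a) - len(b) + 1)
    for i in range(len(a) - len(b), -1, -1):
        c = (a[i + len(b) - 1] * binv) % p
        q[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % p
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return q, a

def polymulmod(a, b, f, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return polydivmod(res, f, p)[1]

def xp_mod_f(f, p):
    """Compute X^p mod f by square-and-multiply."""
    result, base, e = [1], [0, 1], p
    while e:
        if e & 1:
            result = polymulmod(result, base, f, p)
        base = polymulmod(base, base, f, p)
        e >>= 1
    return result

def has_root_in_Fp(f, p):
    g = xp_mod_f(f, p)                 # gcd(f, X^p - X) via Euclid
    g = g + [0] * (2 - len(g))
    g[1] = (g[1] - 1) % p              # subtract X
    while len(g) > 1 and g[-1] == 0:
        g.pop()
    a, b = f, g
    while b != [0]:
        a, b = b, polydivmod(a, b, p)[1]
    return len(a) > 1                  # non-constant gcd => root in F_p

p = 10007                              # p = 3 mod 4, so X^2 + 1 is irreducible
print(has_root_in_Fp([p - 3, 1, p - 3, 1], p))  # (X-3)(X^2+1): True
print(has_root_in_Fp([1, 0, 1], p))             # X^2+1: False
```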
Ghada Arfaoui, Pierre-Alain Fouque, Thibaut Jacques, Pascal Lafourcade, Adina Nedelcu, Cristina Onete, Léo Robert
Deep attestation is a particular case of remote attestation, i.e., verifying the integrity of a platform with a remote verification server. We focus on the remote attestation of hypervisors and their hosted virtual machines (VMs), for which two solutions are currently supported by ETSI. The first is single-channel attestation, which requires, for each VM, an attestation of that VM and of the underlying hypervisor through the physical TPM. The second, multi-channel attestation, allows VMs to be attested via virtual TPMs and separately from the hypervisor; this is faster and requires fewer attestations overall, but the server cannot verify the link between VM and hypervisor attestations, which comes for free in single-channel attestation.
We design a new approach to linked remote attestation which achieves the best of both worlds: we benefit from the efficiency of multi-channel attestation while simultaneously allowing attestations to be linked. Moreover, we formalize a security model for deep attestation and prove the security of our approach. Our contribution is agnostic of the precise underlying secure component (which could be instantiated as a TPM or something equivalent) and may be of independent interest. Finally, we implement our proposal using TPM 2.0 and vTPM (KVM/QEMU), and show that it is practical and efficient.
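To make the linking idea concrete, here is a toy sketch (our own illustration, not the paper's construction): the hypervisor's physical-TPM quote additionally covers a digest of the vTPM quotes, so the verifier can check the binding between VM and hypervisor attestations. HMAC stands in for real TPM quote signatures.

```python
# Toy "linked attestation": HMAC is a stand-in for asymmetric TPM quotes.
import hashlib, hmac, os

def quote(key, pcrs, extra=b""):
    """Stand-in for a TPM quote: MAC over a PCR digest plus extra data."""
    payload = hashlib.sha256(pcrs + extra).digest()
    return payload, hmac.new(key, payload, hashlib.sha256).digest()

vtpm_keys = [os.urandom(32) for _ in range(3)]    # one vTPM per VM
tpm_key = os.urandom(32)                           # physical TPM

# Multi-channel step: each VM attests via its own vTPM.
vm_quotes = [quote(k, b"vm-pcrs-%d" % i) for i, k in enumerate(vtpm_keys)]

# Linking step: the hypervisor quote covers a digest of the vTPM quotes.
link = hashlib.sha256(b"".join(p + s for p, s in vm_quotes)).digest()
hv_quote = quote(tpm_key, b"hypervisor-pcrs", extra=link)

# Verifier: check every quote, then recompute the link digest.
def verify(key, payload, sig):
    return hmac.compare_digest(hmac.new(key, payload, hashlib.sha256).digest(), sig)

assert all(verify(k, p, s) for k, (p, s) in zip(vtpm_keys, vm_quotes))
expected = hashlib.sha256(b"".join(p + s for p, s in vm_quotes)).digest()
assert hv_quote[0] == hashlib.sha256(b"hypervisor-pcrs" + expected).digest()
assert verify(tpm_key, *hv_quote)
```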
Thomas Espitau, Pierre-Alain Fouque, François Gérard, Mélissa Rossi, Akira Takahashi, Mehdi Tibouchi, Alexandre Wallet, Yang Yu
This work describes the Mitaka signature scheme: a new hash-and-sign signature scheme over NTRU lattices which can be seen as a variant of NIST finalist Falcon. It achieves comparable efficiency but is considerably simpler, online/offline, and easier to parallelize and protect against side-channels, thus offering significant advantages from an implementation standpoint. It is also much more versatile in terms of parameter selection.
We obtain this signature scheme by replacing the FFO lattice Gaussian sampler in Falcon by the "hybrid" sampler of Ducas and Prest, for which we carry out a detailed and corrected security analysis. In principle, such a change can result in a substantial security loss, but we show that this loss can be largely mitigated using new techniques in key generation that allow us to construct much higher quality lattice trapdoors for the hybrid sampler relatively cheaply. This new approach can also be instantiated on a wide variety of base fields, in contrast with Falcon's restriction to power-of-two cyclotomics.
We also introduce a new lattice Gaussian sampler with the same quality and efficiency, but which is moreover compatible with the integral matrix Gram root technique of Ducas et al., allowing us to avoid floating point arithmetic. This makes it possible to realize the same signature scheme as Mitaka efficiently on platforms with poor support for floating point numbers.
Finally, we describe a provably secure masking of Mitaka. More precisely, we introduce novel gadgets that allow provable masking at any order at much lower cost than previous masking techniques for Gaussian sampling-based signature schemes, for cheap and dependable side-channel protection.
Clemens Hlauschek, Norman Lahr, Robin Leander Schröder
Well before large-scale quantum computers are available, traditional cryptosystems must be transitioned to post-quantum secure schemes. The NIST PQC competition aims to standardize suitable cryptographic schemes. Candidates are evaluated not only on their formal security strengths, but are also judged on the security of their optimized implementations, for example with regard to resistance against side-channel attacks.
HQC is a promising code-based key encapsulation scheme that was selected as an alternate candidate in the third round of the competition, which puts it on track to be standardized separately from the finalists, in a fourth round.
Although HQC has already received heavy scrutiny with regard to side-channel attacks, in this paper we show a novel timing vulnerability in the optimized implementations of HQC, leading to full secret key recovery. The attack is both practical, requiring only about 866,000 idealized decapsulation timing oracle queries in the 128-bit security setting, and structurally different from previously identified attacks on HQC: previously, exploitable side-channel leakage had been identified in the BCH decoder of an earlier submitted version, in the ciphertext check, and in the PRF of the Fujisaki-Okamoto (FO) transformation employed by several NIST PQC KEM candidates. In contrast, our attack exploits the fact that the rejection sampling routine invoked during the deterministic re-encryption of the KEM decapsulation leaks secret-dependent timing information. These timing leaks can be efficiently exploited to recover the secret key when HQC is instantiated with the (now constant-time) BCH decoder, as well as with the RMRS decoder of the current submission. Besides a detailed analysis of the new attack, we discuss possible countermeasures and their limits.
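A toy model of this class of leak (illustrative only, not HQC's actual code): fixed-weight sampling by rejection consumes a seed-dependent number of PRF outputs, and in the decapsulation the seed is derived from the decrypted message, which is what makes the running time secret-dependent. The parameter values below are only roughly HQC-128-sized.

```python
# Seed-dependent rejection sampling: the draw count is a timing signal.
import hashlib

def prf_stream(seed):
    ctr = 0
    while True:
        block = hashlib.shake_256(seed + ctr.to_bytes(4, "little")).digest(4)
        yield int.from_bytes(block, "little")
        ctr += 1

def sample_fixed_weight(seed, n, w):
    """Pick w distinct positions in [0, n) by rejection sampling."""
    positions, draws = set(), 0
    stream = prf_stream(seed)
    while len(positions) < w:
        draws += 1
        r = next(stream)
        if r < (2**32 // n) * n:        # reject to avoid modulo bias
            positions.add(r % n)         # duplicates also force extra draws
    return sorted(positions), draws      # draws ~ running time

# Different seeds lead to different rejection counts, hence timing leakage.
for seed in [b"seed-a", b"seed-b", b"seed-c"]:
    _, draws = sample_fixed_weight(seed, n=17669, w=75)
    print(seed, draws)
```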
08 November 2021
Robin M. Berger, Marcel Tiepelt
SPHINCS+ is a state-of-the-art hash-based signature scheme whose security is based on SHA-256, SHAKE-256 or the Haraka hash function. In this work, we perform an in-depth analysis of how the hash functions are embedded into SPHINCS+ and how the quantum pre-image resistance impacts the security of the signature scheme. Subsequently, we evaluate the cost of implementing Grover's quantum search algorithm to find a pre-image that admits a universal forgery.
In particular, we provide quantum implementations of the Haraka and SHAKE-256 hash functions in Q# and consider the efficiency of attacks in the context of fault-tolerant quantum computers. We restrict our findings to SPHINCS+-128 due to the limited security margin of Haraka. Nevertheless, we present an attack that performs better, to the best of our knowledge, than previously published attacks. We can forge a SPHINCS+-128-Haraka signature in about $1.5 \cdot 2^{90}$ surface code cycles using $2.03 \cdot 10^{6}$ physical qubits, translating to about $1.55 \cdot 2^{101}$ logical-qubit-cycles. For SHAKE-256, the same attack requires $8.65 \cdot 10^{6}$ qubits and $1.6 \cdot 2^{84}$ cycles, resulting in about $1.17 \cdot 2^{99}$ logical-qubit-cycles.
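A back-of-envelope reading of these figures, under our assumption that logical-qubit-cycles are simply logical qubits times surface-code cycles:

```python
# Implied logical-qubit counts from the quoted costs (an interpretation,
# assuming logical-qubit-cycles = logical qubits * surface-code cycles).
cases = {
    "SPHINCS+-128-Haraka":    (1.5 * 2**90, 1.55 * 2**101),
    "SPHINCS+-128-SHAKE-256": (1.6 * 2**84, 1.17 * 2**99),
}
for name, (cycles, lqc) in cases.items():
    print(name, "implied logical qubits ~", round(lqc / cycles))
# ~2116 and ~23962 logical qubits respectively; the quoted 2.03e6 and
# 8.65e6 are physical qubits, the gap being the surface-code overhead.
```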
Nan Li, Yingjiu Li, Atsuko Miyaji, Yangguang Tian, Tsz Hon Yuen
A ring signature allows a signer to generate a signature on behalf of a set of public keys, while a verifier can verify the signature without identifying who the actual signer is. In Crypto 2021, Yuen et al. proposed a new type of ring signature scheme called DualRing. However, it lacks forward security: the security of DualRing cannot be guaranteed if the signer's secret key is compromised. In this work, we introduce forward-secure DualRing. The signer can periodically update his secret key using our proposed "split-and-combine" method to mitigate the security risks caused by the leakage of secret keys. We present a practical scheme based on the discrete logarithm assumption. We show a detailed evaluation to validate its practicality.
Meghal Gupta, Rachel Yun Zhang
In interactive coding, Alice and Bob wish to compute some function $f$ of their individual private inputs $x$ and $y$. They do this by engaging in a non-adaptive (fixed order, fixed length) interactive protocol to jointly compute $f(x,y)$. The goal is to do this in an error-resilient way, such that even given some fraction of adversarial corruptions to the protocol, both parties still learn $f(x,y)$.
In this work, we study the optimal error resilience of such a protocol in the face of adversarial bit flips or erasures. While the optimal error resilience of such a protocol over a large alphabet is well understood, the situation over the binary alphabet has remained open. In this work, we resolve this problem of determining the optimal error resilience over binary channels. In particular, we construct protocols achieving $\frac16$ error resilience over the binary bit flip channel and $\frac12$ error resilience over the binary erasure channel, for both of which matching upper bounds are known. We remark that the communication complexity of our binary bit flip protocol is polynomial in the size of the inputs, and the communication complexity of our binary erasure protocol is linear in the size of the minimal noiseless protocol computing $f$.
Meghal Gupta, Yael Tauman Kalai, Rachel Zhang
An error correcting code ($\mathsf{ECC}$) allows a sender to send a message to a receiver such that even if a constant fraction of the communicated bits are corrupted, the receiver can still learn the message correctly. Due to their importance and fundamental nature, $\mathsf{ECC}$s have been extensively studied, one of the main goals being to maximize the fraction of errors that the $\mathsf{ECC}$ is resilient to.
For adversarial erasure errors (over a binary channel) the maximal error resilience of an $\mathsf{ECC}$ is $\frac12$ of the communicated bits. In this work, we break this $\frac12$ barrier by introducing the notion of an interactive error correcting code ($\mathsf{iECC}$) and constructing an $\mathsf{iECC}$ that is resilient to adversarial erasure of $\frac35$ of the total communicated bits. We emphasize that the adversary can corrupt both the sending party and the receiving party, and that both parties' rounds contribute to the adversary's budget.
We also prove an impossibility (upper) bound of $\frac23$ on the maximal resilience of any binary $\mathsf{iECC}$ to adversarial erasures. In the bit flip setting, we prove an impossibility bound of $\frac27$.
Eldon Chung, Maciej Obremski, Divesh Aggarwal
The known constructions of negligible-error (non-malleable) two-source extractors can be broadly classified into three categories:
(1) Constructions where one source has min-entropy rate about $1/2$, the other source can have small min-entropy rate, but the extractor doesn't guarantee non-malleability.
(2) Constructions where one source is uniform, and the other can have small min-entropy rate, and the extractor guarantees non-malleability when the uniform source is tampered with.
(3) Constructions where both sources have entropy rate very close to $1$ and the extractor guarantees non-malleability against the tampering of both sources.
We introduce a new notion of collision-resistant extractors and, using it, we obtain a strong two-source non-malleable extractor that requires the first source to have entropy rate $0.8$ while the other source may have min-entropy polylogarithmic in the length of the source.
We show how the above extractor can be applied to obtain a non-malleable extractor with output rate $\frac 1 2$, which is optimal. We also show how, by using our extractor and extending the known protocol, one can obtain a privacy amplification protocol secure against memory tampering in which the size of the secret output is almost optimal.
Amirhossein Ebrahimi, Francesco Regazzoni, Paolo Palmieri
In a differential cryptanalysis attack, the attacker tries to observe a block cipher's behavior under an input difference: if the resulting output differences show any non-random behavior, a differential distinguisher is obtained. While differential cryptanalysis has been known for several decades, Gohr was the first to propose, in 2019, the use of machine learning (ML) to build a distinguisher.
In this paper, we present the first Partial Differential (PD) ML-distinguisher, and demonstrate its effectiveness on the lightweight cipher SPECK32/64. As a PD-ML-distinguisher is based on a selection of bits rather than all bits in a block, we also study whether different selections of bits have a different impact on the accuracy of the distinguisher, and we find that to be the case. More importantly, we also establish that certain bits have reliably higher effectiveness than others, through a series of independent experiments on different datasets, and we propose an algorithm for assigning an effectiveness score to each bit in the block. By selecting the highest-scoring bits, we are able to train a partial ML-distinguisher over 8 bits that is almost as accurate as an equivalent ML-distinguisher over the entire 32 bits (68.8% against 72%), for six rounds of SPECK32/64. The reduced input size implies a significant reduction in the complexity of achieving a distinguisher, and also leads to a reduction in the number of bits of possible subkeys to be guessed in a potential subsequent key recovery attack. These results may therefore open the way to the application of (partial) ML-based distinguishers to ciphers whose block size has so far been considered too large.
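To make the setup concrete, the sketch below generates a Gohr-style training set for SPECK32/64 (real ciphertext pairs from the input difference (0x0040, 0) versus random pairs) and restricts each sample to a subset of bit positions. The subset shown is a hypothetical stand-in for the paper's highest-scoring bits; the scoring algorithm itself is the paper's contribution and is not reproduced here.

```python
# SPECK32/64 plus a (real, random)-pair dataset restricted to chosen bits.
import numpy as np

MASK, ALPHA, BETA = 0xFFFF, 7, 2                # SPECK32/64 word parameters

def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK

def expand_key(k, rounds):                      # k: four 16-bit words, MSB first
    ks, l = [k[3]], list(reversed(k[:3]))
    for i in range(rounds - 1):
        l[i % 3] = ((ks[i] + ror(l[i % 3], ALPHA)) & MASK) ^ i
        ks.append(rol(ks[i], BETA) ^ l[i % 3])
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x = ((ror(x, ALPHA) + y) & MASK) ^ k
        y = rol(y, BETA) ^ x
    return x, y

rng = np.random.default_rng(0)
ROUNDS, N, DELTA = 6, 10**4, (0x0040, 0)        # Gohr's input difference

def make_dataset():
    X, Y = [], []
    for _ in range(N):
        ks = expand_key([int(w) for w in rng.integers(0, 1 << 16, 4)], ROUNDS)
        p0 = (int(rng.integers(0, 1 << 16)), int(rng.integers(0, 1 << 16)))
        if rng.random() < 0.5:                   # real pair with the difference
            p1, label = (p0[0] ^ DELTA[0], p0[1] ^ DELTA[1]), 1
        else:                                    # random (unrelated) pair
            p1, label = (int(rng.integers(0, 1 << 16)),
                         int(rng.integers(0, 1 << 16))), 0
        c0, c1 = encrypt(*p0, ks), encrypt(*p1, ks)
        bits = (c0[0] << 48) | (c0[1] << 32) | (c1[0] << 16) | c1[1]
        X.append([(bits >> b) & 1 for b in range(64)])
        Y.append(label)
    return np.array(X), np.array(Y)

X, Y = make_dataset()
chosen = [0, 1, 2, 8, 9, 16, 17, 24]    # hypothetical 8 highest-scoring bits
X_partial = X[:, chosen]                # input to a small neural classifier
```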
sowle, koe
This article explores a Proof-of-Stake mining algorithm in an environment where amounts are hidden with homomorphic commitments, in particular, using confidential transactions. Our goal is to avoid revealing amounts and other sensitive information (such as which output was used to stake a given block) to blockchain observers when staking.
Our contribution is a Proof-of-Stake mining scheme that does not reveal amounts and is compatible with ring confidential transactions.
06 November 2021
Ruslan Skuratovskii, Alexandr Kalenyk
Improving the reliability of account protection in the blockchain is one of the most important goals of the entire cryptographic arsenal used in blockchains and cryptocurrency exchanges. We propose a new threshold multisignature scheme with a double boundary condition: access to funds stored in a multisig wallet is possible only when two or more signatures are provided at the same time.
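The abstract does not specify the construction; as background only, here is the standard building block for requiring "two or more signatures at the same time": a minimal 2-of-n Shamir secret sharing over a prime field, with all names and parameters our own.

```python
# Toy 2-of-n Shamir sharing: any two shares reconstruct, one share reveals nothing.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy field

def share_secret(secret, n):
    """Degree-1 polynomial f(x) = secret + a*x; share i is (i, f(i))."""
    a = random.randrange(P)
    return [(i, (secret + a * i) % P) for i in range(1, n + 1)]

def reconstruct(s1, s2):
    """Lagrange interpolation at x = 0 from any two shares."""
    (x1, y1), (x2, y2) = s1, s2
    l1 = (-x2) * pow(x1 - x2, -1, P) % P
    l2 = (-x1) * pow(x2 - x1, -1, P) % P
    return (y1 * l1 + y2 * l2) % P

shares = share_secret(123456789, 5)
assert reconstruct(shares[0], shares[3]) == 123456789
assert reconstruct(shares[2], shares[4]) == 123456789
```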
Emile Hautefeuille
This paper presents a new public key cryptography scheme using multivariate polynomials in a finite field. Each multivariate polynomial from the public key is obtained by secretly and repeatedly composing quadratic polynomials (of a single variable) with linear combinations of the input variables. Decoding a message involves recursively computing the roots of these quadratic polynomials, and finally solving some linear systems. The main drawback of this scheme is the length of the public key.
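A toy sketch of the decoding idea follows. It is a simplification on our part: the real scheme composes with linear combinations of several input variables and finishes with linear-system solving, whereas this univariate toy only shows the recursive quadratic-root step, choosing $p \equiv 3 \pmod 4$ so that square roots cost a single exponentiation.

```python
# Encrypt by composing two secret univariate quadratics; decrypt by
# peeling them off with modular square roots (toy parameters).
import random

p = 2**61 - 1                       # prime with p % 4 == 3

def sqrt_mod(a):
    """Square root mod p for p = 3 (mod 4); None if a is a non-residue."""
    r = pow(a, (p + 1) // 4, p)
    return r if r * r % p == a % p else None

# Secret quadratics q_i(t) = t^2 + b_i*t + c_i (monic for simplicity).
q = [(random.randrange(p), random.randrange(p)) for _ in range(2)]

def apply_q(i, t):
    b, c = q[i]
    return (t * t + b * t + c) % p

def encrypt(m):                     # the public map is q2(q1(m))
    return apply_q(1, apply_q(0, m))

def solve_quadratic(i, y):
    """Solve t^2 + b*t + c = y via the quadratic formula mod p."""
    b, c = q[i]
    s = sqrt_mod((b * b - 4 * (c - y)) % p)
    if s is None:
        return []
    inv2 = pow(2, -1, p)
    return [((-b + s) * inv2) % p, ((-b - s) * inv2) % p]

def decrypt(y):                     # peel layers; candidates may branch
    return [m for t in solve_quadratic(1, y) for m in solve_quadratic(0, t)]

m = random.randrange(p)
assert m in decrypt(encrypt(m))
```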
Leonie Reichert, Marcel Pazelt, Björn Scheuermann
Many solutions have been proposed to improve manual contact tracing for infectious diseases through automation. Privacy is crucial for the deployment of such a system, as it greatly influences adoption. Approaches for digital contact tracing like Google Apple Exposure Notification (GAEN) protect the privacy of users by decentralizing risk scoring. But GAEN leaks information about diagnosed users, as ephemeral pseudonyms are broadcast to everyone. To combat deanonymisation based on the time of encounter while providing extensive risk scoring functionality, we propose to use a private set intersection (PSI) protocol based on garbled circuits. Using PSI based on oblivious programmable pseudorandom functions (OPPRF-PSI), we implement our solution CERTAIN, which leaks no information to querying users other than one risk score for each of the last 14 days, representing their risk of infection. We implement payload inclusion for OPPRF-PSI and evaluate the efficiency and performance of different risk scoring mechanisms on an Android device.
Hao Chung, Elaine Shi
In blockchains such as Bitcoin and Ethereum, users compete in a transaction fee auction to get their transactions confirmed in the next block. A line of recent works set forth the desiderata for a “dream” transaction fee mechanism (TFM), and explored whether such a mechanism existed. A dream TFM should satisfy 1) user incentive compatibility (UIC), i.e., truthful bidding should be a user’s dominant strategy; 2) miner incentive compatibility (MIC), i.e., the miner’s dominant strategy is to faithfully implement the prescribed mechanism; and 3) miner-user side contract proofness (SCP), i.e., no coalition of the miner and one or more user(s) can increase their joint utility by deviating from the honest behavior. The weakest form of SCP is called 1-SCP, where we only aim to provide resilience against the collusion of the miner and a single user. Sadly, despite the various attempts, to the best of our knowledge, no existing mechanism can satisfy all three properties in all situations.
Since the TFM departs from classical mechanism design in modeling and assumptions, our understanding of the design space is to date relatively limited. In this paper, we further unravel the mathematical structure of transaction fee mechanism design by proving the following results:
- Can we have a dream TFM? We prove a new impossibility result: assuming finite block size, no single-parameter, non-trivial, possibly randomized TFM can simultaneously satisfy UIC and 1-SCP. Consequently, no non-trivial TFM can satisfy all three desired properties simultaneously. This answers an important open question raised by Roughgarden in his recent work.
- Rethinking the incentive compatibility notions. We observe that the prevalently adopted incentive compatibility notions may be too draconian and somewhat flawed. We rectify the existing modeling techniques and suggest a relaxed incentive compatibility notion that captures additional hidden costs of strategic deviation. We construct a new mechanism called the “burning second-price auction” and show that it indeed satisfies the new incentive compatibility notions. We additionally prove that the use of randomness is necessary under the new incentive compatibility notions for “useful” mechanisms that resist coalitions of the miner and at least 2 users.
- Do the new design elements make a difference? Unlike classical mechanisms, TFMs may employ a couple of new design elements that are idiosyncratic to blockchains. For example, a burn rule (employed by Ethereum’s EIP-1559) allows part to all of the payment from the users to be burnt rather than paid to the miner. Some mechanisms also allow unconfirmed transactions to be included in the block, to set the price for others. Our work unveils how these new design elements actually make a difference in TFM design, allowing us to achieve incentive compatible properties that would otherwise be impossible.
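For intuition about burn rules, here is a toy under our own assumptions (the paper's burning second-price auction is more subtle and is not specified in this abstract): confirm the top k bids, charge everyone the (k+1)-st bid, and burn the payments, so the miner gains nothing from stuffing the block with fake bids.

```python
# Toy burn rule in the EIP-1559 spirit: second-price payments are burnt.
def burning_second_price(bids, k):
    """bids: list of (user, bid). Returns confirmed users and burn total."""
    ranked = sorted(bids, key=lambda ub: -ub[1])
    clearing = ranked[k][1] if len(ranked) > k else 0
    confirmed = [u for u, b in ranked[:k] if b >= clearing]
    return confirmed, clearing * len(confirmed)   # nothing goes to the miner

users, burnt = burning_second_price(
    [("a", 9), ("b", 7), ("c", 5), ("d", 2)], k=2)
print(users, burnt)   # ['a', 'b'], burn 2 * 5 = 10
```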
Akihiro Mizutani, Yuki Takeuchi, Ryo Hiromasa, Yusuke Aikawa, Seiichiro Tani
In a seminal paper [Metger and Vidick, Quantum '21], a computational self-testing protocol for Bell states in a single quantum device was proposed. The protocol relies on the fact that the target states are stabilizer states, and hence it is highly non-trivial to determine whether the other class of quantum states, non-stabilizer states, can be self-tested within that framework. Among non-stabilizer states, magic states are indispensable resources for universal quantum computation. In this letter, we show that a magic state for the CCZ gate can be self-tested while one for the T gate cannot. Our result is applicable to a proof of quantumness, where we can classically verify whether a quantum device generates a quantum state having non-zero magic.