IACR News
Here you can see all recent updates to the IACR webpage.
07 March 2022
Zvika Brakerski, Pedro Branco, Nico Döttling, Sihang Pu
ePrint Report
We show that it is possible to perform $n$ independent copies of $1$-out-of-$2$ oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is $n(1+o(1))$ for sufficiently large $n$. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-$1$ fully homomorphic encryption (Rate-$1$ FHE, Brakerski et al., TCC 2019).
To achieve rate-$1$ both on the receiver's and sender's end, we use the LPN assumption, with slightly sub-constant noise rate $1/m^{\epsilon}$ for any $\epsilon>0$ together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive ``bootstrapping'' operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE).
For our DDH-based solution we develop a new technique that may be of independent interest. We show that it is possible to ``emulate'' the binary group $\mathbb{Z}_2$ (or any other small-order group) inside a prime-order group $\mathbb{Z}_p$ in a function-private manner. That is, $\mathbb{Z}_2$ operations are mapped to $\mathbb{Z}_p$ operations such that the outcome of the latter do not reveal additional information beyond the $\mathbb{Z}_2$ outcome. Our encoding technique uses the discrete Gaussian distribution, which to our knowledge was not done before in the context of DDH.
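To illustrate the flavor of this emulation, here is a toy Python sketch (our own illustration, not the paper's actual encoding): a bit $b$ is encoded as $b + 2e \bmod p$ with $e$ drawn from a discrete Gaussian, so that adding encodings in $\mathbb{Z}_p$ implements addition in $\mathbb{Z}_2$ (XOR), while the noise hides how each encoding was formed. All parameters below are made up and insecure.

```python
import math
import random

P = 2**61 - 1  # a large prime modulus (toy choice)

def discrete_gaussian(sigma: float) -> int:
    """Rejection-sample an integer from a (toy) discrete Gaussian of width sigma."""
    while True:
        z = random.randint(-10 * int(sigma) - 10, 10 * int(sigma) + 10)
        if random.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

def encode(bit: int, sigma: float = 3.0) -> int:
    # Encode b in Z_2 as b + 2e mod p; the Gaussian noise e masks everything
    # about how the encoding was produced except b itself.
    return (bit + 2 * discrete_gaussian(sigma)) % P

def decode(x: int) -> int:
    # Lift to the centered representative in (-p/2, p/2] before reducing mod 2.
    centered = x if x <= P // 2 else x - P
    return centered % 2

a, b = 1, 1
# Adding encodings in Z_p implements XOR in Z_2 (while the noise stays small).
assert decode((encode(a) + encode(b)) % P) == (a ^ b)
```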
Cyprien Delpech de Saint Guilhem, Emmanuela Orsini, Titouan Tanguy, Michiel Verbauwhede
ePrint Report
We present the first constant-round and concretely efficient zero-knowledge proof protocol for RAM programs using any stateless zero-knowledge proof system for Boolean or arithmetic circuits. Both the communication complexity and the prover and verifier run times asymptotically scale linearly in the size of the memory and the run time of the RAM program; we demonstrate concrete efficiency with performance results of our C++ implementation. At the core of this work is an arithmetic circuit which verifies the consistency of a list of memory access tuples in zero-knowledge. Using this circuit, we construct a protocol to realize an ideal array functionality using a generic stateless and public-coin zero-knowledge functionality. We prove its UC security and show how it can then be expanded to provide proofs of entire RAM programs. We also extend the Limbo zero-knowledge protocol (ACM CCS 2021) to UC-realize the zero-knowledge functionality that our protocol requires.
The C++ implementation of our array access protocol extends that of Limbo and provides interactive proofs with 40 bits of statistical security at an amortized cost of 0.7ms of prover time and 6KB of communication per memory access, independently of the size of the memory; with multi-threading, this cost is reduced to 0.2ms and 4.1KB respectively. The performance of our public-coin protocol thus approaches that of the private-coin protocol BubbleRAM (ACM CCS 2020; 0.15ms and 1.5KB per access).
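The memory-consistency invariant such a circuit enforces can be illustrated with a toy Python sketch (our own illustration of the classic permute-and-sort memory-checking technique; the paper's actual circuit may differ in its details):

```python
def check_memory_consistency(accesses, initial=0):
    """Check a trace of (time, addr, op, value) tuples, op in {'read', 'write'}.

    Sort by (addr, time); within each address, every read must return the
    value of the most recent write to it (or `initial` if there is none).
    A ZK circuit can verify this same invariant on a sorted permutation of
    the access list without revealing the trace itself.
    """
    last = {}
    for time, addr, op, value in sorted(accesses, key=lambda t: (t[1], t[0])):
        expected = last.get(addr, initial)
        if op == 'read' and value != expected:
            return False
        if op == 'write':
            last[addr] = value
    return True

trace = [(0, 5, 'write', 7), (1, 5, 'read', 7), (2, 9, 'read', 0)]
assert check_memory_consistency(trace)
assert not check_memory_consistency([(0, 5, 'read', 1)])
```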
Shahar P. Cohen, Moni Naor
ePrint Report
We study communication complexity in computational settings where bad inputs may exist, but they should be hard to find for any computationally bounded adversary.
We define a model where there is a source of public randomness but the inputs are chosen by a computationally bounded adversarial participant after seeing the public randomness. We show that breaking the known communication lower bounds of the private-coins model in this setting is closely connected to known cryptographic assumptions. We consider the simultaneous messages model and the interactive communication model and show that for any non-trivial predicate (one with no redundant rows, such as equality):
1. Breaking the $ \Omega(\sqrt n) $ bound in the simultaneous message case or the $ \Omega(\log n) $ bound in the interactive communication case, implies the existence of distributional collision-resistant hash functions (dCRH). This is shown using techniques from Babai and Kimmel (CCC '97). Note that with a CRH the lower bounds can be broken.
2. There are no protocols of constant communication in this preset-randomness setting (unlike the plain public randomness model).
The other model we study is that of a stateful ``free talk", where participants can communicate freely before the inputs are chosen and may maintain a state, and the communication complexity is measured only afterwards. We show that efficient protocols for equality in this model imply secret key-agreement protocols in a constructive manner. On the other hand, secret key-agreement protocols imply optimal (in terms of error) protocols for equality.
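For context, the classic public-coin simultaneous-messages protocol for equality can be sketched in Python: with shared public randomness, each party sends $k$ random parities of its input, and unequal inputs collide with probability $2^{-k}$. In the preset-randomness model the adversary picks inputs after seeing the randomness, which is exactly why this protocol no longer works there. The parameters below are illustrative.

```python
import random

def sketch(x_bits, rand_matrix):
    # Each party sends k parity bits <x, r_i> mod 2 computed from the shared
    # public randomness; the referee compares the two k-bit sketches.
    return [sum(xb & rb for xb, rb in zip(x_bits, row)) % 2 for row in rand_matrix]

n, k = 64, 40
R = [[random.getrandbits(1) for _ in range(n)] for _ in range(k)]  # public randomness
x = [random.getrandbits(1) for _ in range(n)]
y = list(x)
assert sketch(x, R) == sketch(y, R)   # equal inputs always agree
```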
Peihan Miao, Sikhar Patranabis, Gaven Watson
ePrint Report
Updatable Encryption (UE) and Proxy Re-encryption (PRE) allow re-encrypting a ciphertext from one key to another in the symmetric-key and public-key settings, respectively, without decryption. A longstanding open question has been the following: do unidirectional UE and PRE schemes (where ciphertext re-encryption is permitted in only one direction) necessarily require stronger/more structured assumptions as compared to their bidirectional counterparts? Known constructions of UE and PRE seem to exemplify this “gap” – while bidirectional schemes can be realized as relatively simple extensions of public-key encryption from standard assumptions such as DDH or LWE, unidirectional schemes typically rely on stronger assumptions such as FHE or indistinguishability obfuscation (iO), or highly structured cryptographic tools such as bilinear maps or lattice trapdoors.
In this paper, we bridge this gap by showing the first feasibility results for realizing unidirectional UE and PRE from a new generic primitive that we call Key and Plaintext Homomorphic Encryption (KPHE) – a public-key encryption scheme that supports additive homomorphisms on its plaintext and key spaces simultaneously. We show that KPHE can be instantiated from DDH or LWE. This yields, in particular, the first constructions of unidirectional UE and PRE from DDH, as well as the first constructions of unidirectional UE and PRE from LWE that do not resort to FHE or lattice trapdoors.
Our constructions achieve the strongest notions of post-compromise security in the standard model. Our UE schemes also achieve “backwards-leak directionality” of key updates (a notion we discuss is equivalent, from a security perspective, to that of unidirectionality with no-key updates). Our results establish (somewhat surprisingly) that unidirectional UE and PRE schemes satisfying such strong security notions do not, in fact, require stronger/more structured cryptographic assumptions as compared to bidirectional schemes.
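To illustrate key homomorphism, one of the two homomorphisms KPHE combines, here is a toy Python sketch of ElGamal, whose key space is additively homomorphic: a ciphertext under public key $g^{sk}$ can be shifted to one under $g^{sk+\delta}$ without decryption. KPHE additionally requires a homomorphism on the plaintext space. This is our own illustration with insecure toy parameters, not the paper's scheme.

```python
import random

p = 2**127 - 1   # a Mersenne prime; toy, insecure parameters
g = 3            # fixed base (toy choice)

def keygen():
    sk = random.randrange(2, p - 1)
    return sk, pow(g, sk, p)

def enc(pk, m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, -sk, p)) % p        # c2 / c1^sk

def shift_key(ct, delta):
    # Key homomorphism: Enc_{g^sk}(m) -> Enc_{g^(sk+delta)}(m), no decryption:
    # (g^r, m*pk^r) becomes (g^r, m*pk^r*(g^r)^delta) = (g^r, m*(pk*g^delta)^r).
    c1, c2 = ct
    return c1, (c2 * pow(c1, delta, p)) % p

sk, pk = keygen()
ct = enc(pk, 42)
delta = random.randrange(2, p - 1)
assert dec(sk + delta, shift_key(ct, delta)) == 42
```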
Muhammad ElSheikh, Amr M. Youssef
ePrint Report
The Open Vote Network is a self-tallying decentralized e-voting protocol suitable for boardroom elections. Currently, it has two Ethereum-based implementations: the first, by McCorry et al., has a scalability issue since all the computations are performed on-chain. The second implementation, by Seifelnasr et al., solves this issue partially by assigning a part of the heavy computations to an off-chain untrusted administrator in a verifiable manner. As a side effect, this second implementation is no longer dispute-free; it needs a tally dispute phase in which an observer interrupts the protocol when the administrator cheats, i.e., announces a wrong tally result. In this work, we propose a new smart contract design to tackle the problems in the previous implementations by (i) performing all the heavy computations off-chain, hence achieving higher scalability, and (ii) utilizing zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs) to verify the correctness of the off-chain computations, hence maintaining the dispute-free property. To demonstrate the effectiveness of our design, we develop prototype implementations on Ethereum and conduct multiple experiments for different implementation options that show a trade-off between the zk-SNARK proof generation time and the smart contract gas cost, including an implementation in which the smart contract consumes a constant amount of gas independent of the number of voters.
Ashrujit Ghoshal, Ilan Komargodski
ePrint Report
We study the power of preprocessing adversaries in finding bounded-length collisions in the widely used Merkle-Damgård (MD) hashing in the random oracle model. Specifically, we consider adversaries that obtain arbitrary $S$-bit advice about the random oracle and can make at most $T$ queries to it. Our goal is to characterize the advantage of such adversaries in finding a $B$-block collision in an MD hash function constructed using the random oracle with range size $N$ as the compression function (given a random salt).
The answer to this question is completely understood for very large values of $B$ (essentially $\Omega(T)$) as well as for $B=1,2$. For $B\approx T$, Coretti et al.~(EUROCRYPT '18) gave matching upper and lower bounds of $\tilde\Theta(ST^2/N)$. Akshima et al.~(CRYPTO '20) observed that the attack of Coretti et al.\ could be adapted to work for any value of $B>1$, giving an attack with advantage $\tilde\Omega(STB/N + T^2/N)$. Unfortunately, they could only prove that this attack is optimal for $B=2$. Their proof involves a compression argument with exhaustive case analysis and, as they claim, a naive attempt to generalize their bound to larger values of B (even for $B=3$) would lead to an explosion in the number of cases needed to be analyzed, making it unmanageable. With the lack of a more general upper bound, they formulated the STB conjecture, stating that the best-possible advantage is $\tilde O(STB/N + T^2/N)$ for any $B>1$.
In this work, we confirm the STB conjecture in many new parameter settings. For instance, in one result, we show that the conjecture holds for all constant values of $B$, significantly extending the result of Akshima et al. Further, using combinatorial properties of graphs, we are able to confirm the conjecture even for super constant values of $B$, as long as some restriction is made on $S$. For instance, we confirm the conjecture for all $B \le T^{1/4}$ as long as $S \le T^{1/8}$. Technically, we develop structural characterizations for bounded-length collisions in MD hashing that allow us to give a compression argument in which the number of cases needed to be handled does not explode.
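For readers unfamiliar with the object under study, here is a minimal Python sketch of salted Merkle-Damgård hashing, with SHA-256 standing in for the random-oracle compression function (all sizes are illustrative). A $B$-block collision is a pair of distinct messages of at most $B$ blocks each that reach the same final state.

```python
import hashlib

def compression(salt: bytes, state: bytes, block: bytes) -> bytes:
    # Stand-in for the random-oracle compression function h(salt, state, block),
    # truncated to a small range N = 2^64 for illustration.
    return hashlib.sha256(salt + state + block).digest()[:8]

def md_hash(salt: bytes, message: bytes, block_size: int = 8) -> bytes:
    """Plain Merkle-Damgård: iterate the compression function block by block."""
    state = b"\x00" * 8                                   # fixed IV
    padded = message + b"\x00" * (-len(message) % block_size)
    for i in range(0, len(padded), block_size):
        state = compression(salt, state, padded[i:i + block_size])
    return state

assert md_hash(b"salt", b"hello world") == md_hash(b"salt", b"hello world")
```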
Ittai Abraham, Danny Dolev, Ittay Eyal, Joseph Y. Halpern
ePrint Report
Proof-of-work blockchain protocols rely on incentive compatibility. System participants, called miners, generate blocks that form a directed acyclic graph (blockdag). The protocols aim to compensate miners based on their mining power, that is, the fraction of computational resources they control. The protocol designates rewards, striving to make following the prescribed protocol the miners' best response. Nakamoto's Bitcoin protocol achieves this for miners controlling up to almost 1/4 of the total mining power, and the Ethereum protocol does about the same. The state of the art in increasing this bound is Fruitchain, which works with a bound of 1/2. Fruitchain guarantees that miners can increase their revenue by only a negligible amount if they deviate. It is thus an epsilon-Nash equilibrium, for a small epsilon. However, Fruitchain's mechanism allows a rational miner to deviate without penalty; we show that a simple practical deviation guarantees a miner a small increase in expected utility without any risk. This deviation results in a violation of the protocol desiderata. We conclude that, in our context, epsilon-Nash equilibrium is a rather fragile solution concept.
We propose a more robust approach that we call epsilon-sure Nash equilibrium, in which each miner's behavior is almost always a strict best response, and present Colordag, the first blockchain protocol that is an epsilon-sure Nash equilibrium for miners with less than 1/2 of the mining power. To achieve this, Colordag utilizes three techniques. First, it assigns blocks colors; rewards are assigned based on each color separately. By choosing sufficiently many colors, we can make sensitivity to network latency negligible. Second, Colordag penalizes forking---intentional bifurcation of the blockdag. Third, Colordag prevents miners from retroactively changing rewards. All this results in an epsilon-sure Nash equilibrium: even when playing against an extremely strong adversary with perfect knowledge of the future (specifically, when agents will generate blocks and when messages will arrive), correct behavior is a strict best response with high probability.
Olivier Blazy, Sayantan Mukherjee, Huyen Nguyen, Duong Hieu Phan, Damien Stehle
ePrint Report
Broadcast Encryption is a fundamental cryptographic primitive that gives the ability to send a secure message to any chosen target set among registered users.
In this work, we investigate broadcast encryption with anonymous revocation, in which ciphertexts do not reveal any information on which users have been revoked.
We provide a scheme whose ciphertext size grows linearly with the number of revoked users. Moreover, our system also achieves traceability in the black-box confirmation model.
Technically, our contribution is threefold. First, we develop a generic transformation of linear functional encryption toward trace-and-revoke systems for a 1-bit message space. It is inspired by the transformation of Agrawal et al. (CCS'17), with the novelty of achieving anonymity.
Our second contribution is to instantiate the underlying linear functional encryptions from standard assumptions. We propose a $\mathsf{DDH}$-based construction which no longer requires discrete logarithm evaluation during decryption and thus significantly improves the performance compared to the $\mathsf{DDH}$-based construction of Agrawal et al.
In the LWE-based setting, we tried to instantiate our construction by relying on the scheme of Wang et al. (PKC'19), only to find an attack on that scheme.
Our third contribution is to extend the 1-bit encryption from the generic transformation to $n$-bit encryption. By introducing matrix multiplication functional encryption, which essentially performs a fixed number of parallel calls on functional encryptions with the same randomness, we can prove the security of the final scheme with a tight reduction that does not depend on $n$, in contrast to employing the hybrid argument.
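To illustrate the discrete-logarithm bottleneck that the DDH-based construction above avoids, here is a toy Python sketch of textbook DDH-style inner-product (linear) functional encryption, in which decryption recovers $g^{\langle x,y\rangle}$ and must then solve a small discrete logarithm by brute force. The parameters are made up and insecure, and this is not the paper's construction.

```python
import random

p = 2**127 - 1   # toy prime modulus (insecure parameters)
g = 3

n = 4
s = [random.randrange(1, p - 1) for _ in range(n)]      # master secret key
h = [pow(g, si, p) for si in s]                          # public key h_i = g^{s_i}

def enc(x):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), [(pow(g, xi, p) * pow(hi, r, p)) % p
                          for xi, hi in zip(x, h)]

def keygen(y):
    return sum(si * yi for si, yi in zip(s, y))          # sk_y = <s, y>

def dec(ct, y, sk_y, bound=1000):
    ct0, cts = ct
    acc = 1
    for ci, yi in zip(cts, y):
        acc = (acc * pow(ci, yi, p)) % p
    acc = (acc * pow(ct0, -sk_y, p)) % p                 # acc = g^{<x, y>}
    for v in range(bound):                               # brute-force DL step
        if pow(g, v, p) == acc:
            return v
    raise ValueError("inner product out of range")

x, y = [1, 2, 3, 4], [5, 6, 7, 8]
assert dec(enc(x), y, keygen(y)) == 70                   # <x, y> = 70
```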
Marina Krček, Thomas Ordas, Daniele Fronte, Stjepan Picek
ePrint Report
We consider finding as many faults as possible on the target device in laser fault injection security evaluation. Since the search space is large, we require efficient search methods. Recently, an evolutionary approach using a memetic algorithm was proposed and shown to find more interesting parameter combinations than the commonly used random search. Unfortunately, once a variation on the bench or target is introduced, the process must be repeated to find suitable parameter combinations anew.
To negate the effect of variation, we propose a novel method combining a memetic algorithm with a machine learning approach called a decision tree. Our approach improves the memetic algorithm by using prior knowledge of the target, introduced in the initial phase of the memetic algorithm. In our experiments, the decision tree rules enhance the performance of the memetic algorithm by finding more interesting faults on different samples of the same target. Our approach shows more than two orders of magnitude better performance than random search and up to 60% better performance than previous state-of-the-art results with a memetic algorithm. Another advantage of our approach is human-readable rules, allowing the first insights into the explainability of target characterization for laser fault injection.
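The seeding idea can be sketched in Python. Everything below is hypothetical (a made-up four-dimensional parameter space and made-up rule thresholds); the actual method extracts its rules with a decision tree trained on prior measurements of the target.

```python
import random

# Hypothetical laser-FI parameter space: (x, y, delay, intensity), each in [0, 100).
def random_candidate():
    return tuple(random.uniform(0, 100) for _ in range(4))

# A rule as a decision tree might extract it from a previous sample of the same
# target, e.g. "faults cluster at x in [20, 40] with intensity > 60"
# (entirely made-up thresholds for illustration).
def satisfies_rules(c):
    x, y, delay, intensity = c
    return 20 <= x <= 40 and intensity > 60

def seeded_population(size):
    """Initial memetic-algorithm population biased toward the learned rules."""
    pop = []
    while len(pop) < size:
        c = random_candidate()
        # Mostly keep rule-satisfying candidates, plus a few others for diversity.
        if satisfies_rules(c) or random.random() < 0.1:
            pop.append(c)
    return pop

pop = seeded_population(50)
assert len(pop) == 50
```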
Ben Smyth, Michael R. Clarkson
ePrint Report
We explore definitions of verifiability by Juels et al. (2010), Cortier et al. (2014), and Kiayias et al. (2015). We discover that voting systems vulnerable to attacks can be proven to satisfy each of those definitions, and we conclude that the definitions are unsuitable for the analysis of voting systems. Our results motivate the search for a new definition.
Yu Long Chen, Avijit Dutta, Mridul Nandi
ePrint Report
At CRYPTO 2019, Chen et al. showed a beyond-the-birthday-bound secure $n$-bit to $n$-bit PRF based on public random permutations. Following this work, Dutta and Nandi proposed a beyond-the-birthday-bound secure nonce-based MAC $\textsf{nEHtM}_p$ based on a public random permutation. In particular, the authors showed that $\textsf{nEHtM}_p$ achieves tight $2n/3$-bit security ({\em with respect to the state size of the permutation}) in the single-user setting, and their proven bound degrades gracefully with the repetition of nonces. However, we point out that their security proof is not complete (albeit this does not invalidate their security claim). In this paper, we propose a minor variant of the $\textsf{nEHtM}_p$ construction, called $\textsf{nEHtM}^*_p$, and show that it achieves tight $2n/3$-bit security in the multi-user setting. Moreover, the security bound of our construction also degrades gracefully with the repetition of nonces. Finally, we instantiate our construction with the PolyHash function to realize a concrete beyond-the-birthday-bound secure public-permutation-based MAC, $\textsf{nEHtM}_p^+$, in the multi-user setting.
Nick Frymann, Daniel Gardham, Mark Manulis
ePrint Report
The W3C's WebAuthn standard employs digital signatures to offer phishing protection and unlinkability on the web using authenticators which manage keys on behalf of users. This introduces challenges when the account owner wants to delegate certain rights to a proxy user, such as to access their accounts or perform actions on their behalf, as delegation must not undermine the decentralisation, unlinkability, and attestation properties provided by WebAuthn.
We present two approaches, called remote and direct delegation of WebAuthn credentials, maintaining the standard's properties. Both approaches are compatible with Yubico's recent Asynchronous Remote Key Generation (ARKG) primitive proposed for backing up credentials. For remote delegation, the account owner stores delegation credentials at the relying party on behalf of proxies, whereas the direct variant uses a delegation-by-warrant approach, through which the proxy receives delegation credentials from the account owner and presents them later to the relying party. To realise direct delegation we introduce Proxy Signature with Unlinkable Warrants (PSUW), a new proxy signature scheme that extends WebAuthn's unlinkability property to proxy users and can be constructed generically from ARKG.
We discuss an implementation of both delegation approaches, designed to be compatible with WebAuthn, including extensions required for CTAP, and provide a software-based prototype demonstrating overall feasibility. On the performance side, we observe only a minor increase of a few milliseconds in the signing and verification times for delegated WebAuthn credentials based on ARKG and PSUW primitives. We also discuss additional functionality, such as revocation and permissions management, and mention usability considerations.
Sílvia Casacuberta, Julia Hesse, Anja Lehmann
ePrint Report
In recent years, oblivious pseudorandom functions (OPRFs) have become a ubiquitous primitive used in cryptographic protocols and privacy-preserving technologies. The growing interest in OPRFs, both theoretical and applied, has produced a vast number of different constructions and functionality variations. In this paper, we provide a systematic overview of how to build and use OPRFs. We first categorize existing OPRFs into essentially four families based on their underlying PRF (Naor-Reingold, Dodis-Yampolskiy, Hashed Diffie-Hellman, and generic constructions). This categorization allows us to give a unified presentation of all oblivious evaluation methods in the literature, and to understand which properties OPRFs can (or cannot) have. We further demonstrate the theoretical and practical power of OPRFs by visualizing them in the landscape of cryptographic primitives, and by providing a comprehensive overview of how OPRFs are leveraged for improving the privacy of internet users.
Our work systematizes 15 years of research on OPRFs and provides inspiration for new OPRF constructions and applications thereof.
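As a concrete reference point for the Hashed Diffie-Hellman family, the well-known 2HashDH OPRF can be sketched in Python with toy, insecure parameters: the client blinds $H_1(x)$ with a random exponent $r$, the server raises the result to its key $k$, and the client unblinds with $r^{-1}$ to obtain $H_1(x)^k$, from which the PRF output $H_2(x, H_1(x)^k)$ is derived. The server never sees $x$; the client never sees $k$.

```python
import hashlib
import random

p, q = 1019, 509          # toy safe prime p = 2q + 1 (insecure demo parameters)

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)    # squaring lands in the order-q subgroup

def oprf_eval(k: int, x: bytes) -> bytes:
    # The value the client should end up with: H2(x, H1(x)^k).
    return hashlib.sha256(x + str(pow(hash_to_group(x), k, p)).encode()).digest()

# --- oblivious evaluation (2HashDH) ---
k = random.randrange(1, q)             # server's PRF key
x = b"password"
r = random.randrange(1, q)             # client's blinding factor
blinded = pow(hash_to_group(x), r, p)  # client -> server: H1(x)^r
answer = pow(blinded, k, p)            # server -> client: H1(x)^(r*k)
unblinded = pow(answer, pow(r, -1, q), p)   # client removes r (inverse mod q)
assert hashlib.sha256(x + str(unblinded).encode()).digest() == oprf_eval(k, x)
```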
Jakub Breier, Xiaolu Hou
ePrint Report
Fault injection attacks (FIA) are a class of active physical attacks, mostly used for malicious purposes such as extraction of cryptographic keys, privilege escalation, and attacks on neural network implementations. There are many techniques that can be used to cause faults in integrated circuits, many of them coming from the area of failure analysis.
In this paper we tackle the topic of the practicality of FIA. We analyze the most commonly used techniques found in the literature, such as voltage/clock glitching, electromagnetic pulses, lasers, and Rowhammer attacks. To summarize, FIA can be mounted on the most commonly used architectures from ARM, Intel, and AMD by utilizing injection devices that are often below the thousand-dollar mark. Therefore, we believe these attacks can be considered practical in many scenarios, especially when the attacker can physically access the target device.
Irem Keskinkurt Paksoy, Murat Cenk
ePrint Report
The Number Theoretic Transform (NTT), Toom-Cook, and Karatsuba are the most commonly used algorithms for implementing the lattice-based finalists of the NIST PQC competition. In this paper, we propose Toeplitz matrix-vector product (TMVP) based algorithms for multiplication for all parameter sets of NTRU. We implement the proposed algorithms on ARM Cortex-M4. The results show that TMVP-based multiplication algorithms using the four-way TMVP formula are more efficient for NTRU. Our algorithms outperform the Toom-Cook method by up to 25.3%, and the NTT method by up to 19.8%. Moreover, our algorithms require less stack space than the others in most cases. We also observe the impact of these improvements on the overall performance of NTRU. We speed up encryption, decryption, encapsulation, and decapsulation by up to 13.7%, 17.5%, 3.5%, and 14.1%, respectively, compared to the state-of-the-art implementation.
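The core identity behind TMVP-based multiplication is that multiplication in a ring like $\mathbb{Z}_q[x]/(x^n-1)$ is a matrix-vector product with a Toeplitz (here circulant) matrix built from one operand. A toy Python check with illustrative sizes (NTRU uses much larger parameters, and the paper's four-way TMVP formula further splits this product into smaller ones):

```python
import random

def cyclic_convolution(a, b, n, mod):
    """Schoolbook multiplication in Z_mod[x]/(x^n - 1)."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % mod
    return c

def toeplitz_mvp(a, b, n, mod):
    # The same product written as a matrix-vector product: row i of the
    # (circulant, hence Toeplitz) matrix is a rotated copy of a.
    return [sum(a[(i - j) % n] * b[j] for j in range(n)) % mod
            for i in range(n)]

n, mod = 7, 2048            # toy sizes for illustration only
a = [random.randrange(mod) for _ in range(n)]
b = [random.randrange(mod) for _ in range(n)]
assert cyclic_convolution(a, b, n, mod) == toeplitz_mvp(a, b, n, mod)
```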
Yanhong Fan, Muzhou Li, Chao Niu, Zhenyu Lu, Meiqin Wang
ePrint Report
SKINNY-AEAD is one of the second-round candidates of the Lightweight Cryptography Standardization project held by NIST. SKINNY-AEAD M1 is the primary member of the six SKINNY-AEAD schemes, while SKINNY-AEAD M3 is another member with a smaller tag. In the design document, only security analyses of their underlying primitive, SKINNY-128-384, are provided. Besides, to our knowledge there are no valid third-party analyses of SKINNY-AEAD M1/M3. Therefore, this paper focuses on constructing the first third-party security analyses of them under a nonce-respecting scenario. By taking the encryption mode of SKINNY-AEAD into consideration and exploiting several properties of SKINNY, we can deduce some necessary constraints on the input and tweakey differences of related-tweakey impossible differential distinguishers. Under these constraints, we can find distinguishers suitable for mounting powerful tweakey recovery attacks. With the help of automatic searching algorithms based on STP, we find some 14-round distinguishers. Based on one of these distinguishers, we mount a 20-round and an 18-round tweakey recovery attack on SKINNY-AEAD M1/M3. To the best of our knowledge, all these attacks are the best ones so far.
Nir Bitansky, Zvika Brakerski, Yael Tauman Kalai
ePrint Report
Is it possible to convert classical reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a non-constructive post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage.
We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted.
Along the way, we make several additional contributions:
1. We put forth a framework for reductions (or general interaction) with stateful solvers for a computational problem, that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting. 2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically ``restored'' post-measurement. This shows that the novel rewinding technique of Chiesa et al. (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
Yi Deng, Shunli Ma, Xinxuan Zhang, Hailong Wang, Xuyang Song, Xiang Xie
ePrint Report
Threshold signatures allow $n$ parties to share the ability to issue digital signatures so that any coalition of size at least $t+1$ can sign, whereas groups of $t$ or fewer players cannot. The currently known class-group-based threshold ECDSA constructions are either inefficient (requiring parallel repetition of the underlying zero-knowledge proof with small challenge space) or rely on a rather non-standard low-order assumption. In this paper, we present efficient threshold ECDSA protocols from encryption schemes based on class groups that neither assume the low-order assumption nor require parallel repetition of the underlying zero-knowledge proof, yielding a significant efficiency improvement in key generation over previous constructions.
Along the way, we introduce a new notion of promise $\Sigma$-protocol that satisfies only a weaker soundness called promise extractability. An accepting promise $\Sigma$-proof for statements related to class-group-based encryptions does not establish the truth of the statement, but provides security guarantees (promise extractability) that are sufficient for our applications. We also show how to simulate homomorphic operations on a (possibly invalid) class-group-based encryption whose correctness has been proven via our promise $\Sigma$-protocol. We believe that these techniques are of independent interest and applicable to other scenarios where efficient zero-knowledge proofs for statements related to class groups are required.
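The $t$-out-of-$n$ threshold property described above can be illustrated with a minimal Shamir secret-sharing sketch. This is only a toy demonstration of why $t+1$ shares reconstruct a secret while $t$ shares do not; it is not the paper's class-group-based threshold ECDSA protocol, and all names and parameters (`share`, `reconstruct`, the toy prime) are illustrative.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; a toy field modulus for illustration

def share(secret, t, n):
    """Split `secret` into n shares of a random degree-t polynomial.

    Any t+1 shares determine the polynomial (and hence the secret);
    t or fewer shares reveal nothing about it.
    """
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(1234, t=2, n=5)
assert reconstruct(shares[:3]) == 1234   # t+1 = 3 shares suffice
```

In an actual threshold signature scheme the secret key is never reconstructed in one place; the parties instead use their shares inside an interactive signing protocol, which is where the zero-knowledge machinery discussed above comes in.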
Vasyl Ustimenko
ePrint Report
We describe new explicit constructions of infinite families of finite small-world graphs of large girth with well-defined projective limits that are infinite trees, and briefly survey applications of these objects to the construction of LDPC codes and cryptographic algorithms.
We define families of homogeneous algebraic graphs of large girth over a commutative ring K.
For each commutative integral domain K with |K| > 2, we introduce a family of bipartite homogeneous algebraic graphs of large girth over K
formed by graphs with sets of points and lines isomorphic to K^n, n > 1, and cycle indicator ≥ 2n+2, such that their projective limit is well defined and isomorphic to an infinite forest.
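The girth of a graph is the length of its shortest cycle. As a small self-contained illustration of the notion (not of the paper's algebraic constructions), the standard BFS-based algorithm computes the girth of an undirected graph by running a breadth-first search from every vertex and checking each non-tree edge; the graph and function names here are illustrative.

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in a simple undirected graph
    (float('inf') if the graph is acyclic).

    BFS from every vertex s; a non-tree edge (u, v) found at depths
    dist[u], dist[v] witnesses a cycle of length dist[u] + dist[v] + 1.
    Minimizing over all roots gives the exact girth.
    """
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:          # tree edge
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v:       # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# The Petersen graph: 3-regular on 10 vertices, girth 5.
petersen = {
    0: [1, 4, 5], 1: [0, 2, 6], 2: [1, 3, 7], 3: [2, 4, 8], 4: [0, 3, 9],
    5: [0, 7, 8], 6: [1, 8, 9], 7: [2, 5, 9], 8: [3, 5, 6], 9: [4, 6, 7],
}
assert girth(petersen) == 5
```

Families of "large girth" as in the abstract are sequences of graphs whose girth grows without bound relative to their size, which is exactly what makes them attractive as Tanner graphs for LDPC codes.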
Alexander Poremba
ePrint Report
Quantum information has the property that measurement is an inherently destructive process. This feature is most apparent in the principle of complementarity, which states that mutually incompatible observables cannot be measured at the same time. Recent work by Broadbent and Islam (TCC 2020) builds on this aspect of quantum mechanics to realize a cryptographic notion called certified deletion. While this remarkable notion enables a classical verifier to be convinced that a (private-key) quantum ciphertext has been deleted by an untrusted party, it offers no additional layer of functionality.
In this work, we augment the proof-of-deletion paradigm with fully homomorphic encryption (FHE). This results in a new and powerful cryptographic notion called fully homomorphic encryption with certified deletion -- an interactive protocol which enables an untrusted quantum server to compute on encrypted data and, if requested, to simultaneously prove data deletion to a client. Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors (LWE) distribution in the form of a quantum state was deleted. We introduce an encoding based on Gaussian coset states which is highly generic and suggests that essentially any LWE-based cryptographic primitive admits a classically-verifiable quantum proof of deletion.
As an application of our protocol, we construct a Dual-Regev public-key encryption scheme with certified deletion, which we then extend towards a (leveled) FHE scheme of the same type. In terms of security, we distinguish between two types of attack scenarios: a semi-honest adversary that follows the protocol exactly, and a fully malicious adversary that is allowed to deviate arbitrarily from the protocol. In the former case, we achieve indistinguishable ciphertexts, even if the secret key is later revealed after deletion has taken place. In the latter case, we provide entropic uncertainty relations for Gaussian cosets which limit the adversary's ability to guess the delegated ciphertext once deletion has taken place. Our results enable a form of everlasting cryptography and give rise to new privacy-preserving quantum cloud applications, such as private machine learning on encrypted data with certified data deletion.
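For readers unfamiliar with the Learning with Errors distribution mentioned above, a classical sample has the form $(a, b = \langle a, s\rangle + e \bmod q)$ for a uniform vector $a$, secret $s$, and small Gaussian error $e$. The sketch below generates such samples; it is only a classical illustration, not the paper's quantum encoding, and the rounded continuous Gaussian stands in for a true discrete Gaussian sampler. Parameter names (`q`, `n`, `alpha`) are illustrative.

```python
import random

def lwe_sample(s, q, alpha=0.005):
    """One classical LWE sample (a, b) with b = <a, s> + e mod q.

    The error e is drawn by rounding a continuous Gaussian of standard
    deviation alpha*q -- a stand-in for a discrete Gaussian sampler.
    """
    a = [random.randrange(q) for _ in range(len(s))]
    e = round(random.gauss(0, alpha * q))
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

q, n = 3329, 16
s = [random.randrange(q) for _ in range(n)]
a, b = lwe_sample(s, q)

# Knowing s, the (small, centered) error can be recovered:
e = (b - sum(ai * si for ai, si in zip(a, s))) % q
e = e - q if e > q // 2 else e
assert abs(e) < q // 4  # the noise is far smaller than the modulus
```

The hardness of LWE is that, without $s$, such samples are indistinguishable from uniform pairs; the paper's proof of deletion asks a prover to destroy a quantum state encoding exactly this kind of sample.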