## IACR News


Here you can see all recent updates to the IACR webpage.

#### 24 October 2021

###### Daniel J. Bernstein, Tanja Lange
ePrint Report
Spherical models of lattices are standard tools in the study of lattice-based cryptography, except for variations in terminology and minor details. Spherical models are used to predict the lengths of short vectors in lattices and the effectiveness of reduction modulo those short vectors. These predictions are consistent with an asymptotic theorem by Gauss, theorems on short vectors in almost all lattices from the invariant distribution, and a variety of experiments in the literature.

$S$-unit attacks are a rapidly developing line of attacks against structured lattice problems. These include the quantum polynomial-time attacks that broke the cyclotomic case of Gentry's original STOC 2009 FHE system under minor assumptions, and newer attacks that have broken through various barriers previously claimed for this line of work.

$S$-unit attacks take advantage of auxiliary lattices, standard number-theoretic lattices called $S$-unit lattices. Spherical models have recently been applied to these auxiliary lattices to deduce core limits on the power of $S$-unit attacks.

This paper shows that these models underestimate the power of $S$-unit attacks: $S$-unit lattices, like the lattice $Z^d$, have much shorter vectors and reduce much more effectively than predicted by these models. The attacker can freely choose $S$ to make the gap as large as desired, breaking through the core limits previously asserted for $S$-unit attacks.
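As a rough illustration of the kind of prediction at stake, the basic spherical-model estimate (the Gaussian heuristic) predicts the length of the shortest nonzero vector in a "random" lattice from its dimension and volume alone. The sketch below (plain Python, illustrative only, not taken from the paper) computes that prediction and shows the gap for $Z^d$, which contains vectors of length 1 regardless of dimension:

```python
import math

def gaussian_heuristic(d, volume=1.0):
    """Spherical-model prediction for the length of the shortest
    nonzero vector in a d-dimensional lattice of the given volume:
    gh(L) = vol(L)^(1/d) * sqrt(d / (2*pi*e))  (asymptotic form)."""
    return (volume ** (1.0 / d)) * math.sqrt(d / (2 * math.pi * math.e))

# For Z^d (volume 1) the model predicts a shortest vector of length
# ~sqrt(d / (2*pi*e)), yet Z^d actually contains vectors of length 1.
print(gaussian_heuristic(100))  # ~2.42, versus the true length 1
```

This is the sense in which a lattice can have "much shorter vectors than predicted" by a spherical model, as the abstract states for $S$-unit lattices.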
###### Omri Shmueli
ePrint Report
Quantum money is a central primitive in quantum cryptography that enables a bank to distribute unclonable quantum banknotes to parties in the network, called wallets, which serve as a medium of exchange between wallets. While quantum money suggests a theoretical solution to some of the fundamental problems in currency systems, it requires a strong model to be implemented: quantum computation and a quantum communication infrastructure. A central open question in this context is whether we can have a quantum money scheme that uses "minimal quantumness", namely, local quantum computation and only classical communication.

Public-key semi-quantum money (Radian and Sattath, AFT 2019) is a quantum money scheme where the algorithm of the bank is completely classical, and quantum banknotes are publicly verifiable on any quantum computer. In particular, such a scheme relies only on local quantum computation and classical communication. The only known construction of public-key semi-quantum money is based on quantum lightning (Zhandry, EUROCRYPT 2019), which in turn rests on a computational assumption that is now known to be broken.

In this work, we construct public-key semi-quantum money based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning With Errors problem. The technical centerpiece of our construction is a new 3-message protocol in which a classical computer can delegate to a quantum computer the generation of a quantum state that is both unclonable and publicly verifiable.
###### Théodore Conrad-Frenkiel, Rémi Géraud-Stewart, David Naccache
ePrint Report
This paper utilizes the techniques used by Regev \cite{DBLP:journals/jacm/Regev09} and Lyubashevsky, Peikert \& Regev in the security reduction of LWE and its algebraic variants \cite{DBLP:conf/eurocrypt/LyubashevskyPR13} to exhibit a quantum reduction from the decryption of NTRU to leaking information about the secret key. Since this reduction requires decryption with the same key one wishes to attack, it renders NTRU vulnerable to the same type of attacks that affect the Rabin--Williams scheme \cite{DBLP:conf/eurocrypt/Bernstein08} -- albeit requiring a quantum decryption query.

A common practice thwarting such attacks consists in applying the Fujisaki-Okamoto (FO, \cite{DBLP:conf/pkc/FujisakiO99}) transformation before encrypting. However, not all NTRU protocols enforce this protection. In particular, the DPKE version of NTRU \cite{DBLP:conf/eurocrypt/SaitoXY18} is susceptible to such an attack.
###### Andrea Caforio, Daniel Collins, Ognjen Glamocanin, Subhadeep Banik
ePrint Report
Threshold Implementations have become a popular generic technique for constructing circuits resilient against power analysis attacks. In this paper, we devise efficient threshold circuits for the lightweight block cipher family SKINNY. The only threshold circuits previously available for this family are those proposed by its designers, who decomposed the 8-bit S-box into four quadratic S-boxes and constructed a 3-share byte-serial threshold circuit that executes the substitution layer over four cycles. We revisit the algebraic structure of the S-box and prove that it is possible to decompose it into (a) three quadratic S-boxes and (b) two cubic S-boxes. These decompositions allow us to construct threshold circuits that require three shares and execute each round function in three cycles instead of four, and similarly circuits that use four shares and require two cycles per round. Our constructions significantly reduce latency and energy consumption per encryption operation. To validate our designs, we synthesize our circuits on standard CMOS cell libraries to evaluate performance, and we conduct leakage detection via statistical tests on power traces on FPGA platforms to assess security.
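The 3-share idea can be illustrated on a single AND gate, the basic quadratic operation. This toy Python sketch (illustrative only, not the SKINNY S-box decomposition itself) shows the non-completeness property that underpins threshold implementations, where each output share is computed without touching one of the input shares:

```python
import secrets

def share3(x):
    """Split bit x into three Boolean shares with x = x1 ^ x2 ^ x3."""
    x1, x2 = secrets.randbits(1), secrets.randbits(1)
    return x1, x2, x ^ x1 ^ x2

def ti_and(xs, ys):
    """Classic first-order threshold implementation of a 2-input AND.
    Non-completeness: output share i never uses input share i, so no
    single share's computation reveals the unshared inputs."""
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)  # independent of share 1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)  # independent of share 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)  # independent of share 3
    return z1, z2, z3

# Correctness: the recombined output equals the plain AND for all inputs.
for x in (0, 1):
    for y in (0, 1):
        z1, z2, z3 = ti_and(share3(x), share3(y))
        assert z1 ^ z2 ^ z3 == (x & y)
```

Decomposing a higher-degree S-box into quadratic (or cubic) pieces, as the abstract describes, is what makes such low-share constructions possible, since the number of shares needed grows with the algebraic degree of each stage.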
###### Yang Wang, Yanmin Zhao, Mingqiang Wang
ePrint Report
Proxy re-encryption (PRE) schemes, which nicely solve the problem of delegating decryption rights, enable a semi-trusted proxy to transform a ciphertext encrypted under one key into a ciphertext of the same message under another, arbitrary key. For a long time, the semantic security of PREs was defined quite similarly to that of public key encryption (PKE) schemes. Cohen first pointed out the insufficiency of security under chosen-plaintext attacks (CPA) for PREs in PKC 2019, and proposed a {\it{strictly stronger}} security notion, named security under honest re-encryption attacks (HRA). Surprisingly, only a few PREs satisfy the stronger HRA security, and almost all of them to date are pairing-based. To the best of our knowledge, we present in this paper the first detailed construction of HRA-secure PREs based on standard LWE problems with {\it{comparably small and polynomially-bounded}} parameters. Combining known reductions, the HRA security of our PREs can also be guaranteed by worst-case basic lattice problems (e.g. SIVP$_{\gamma}$). Our single-hop PRE schemes can easily be extended to the multi-hop case (as long as the maximum hop count $L=O(1)$). Meanwhile, our single-hop PRE schemes are also key-private, which means that the implicit identities of a re-encryption key will not be revealed even if the proxy colludes with some corrupted users. We also discuss the key-privacy of multi-hop PREs, showing that several constructions of multi-hop PREs do not satisfy their own key-privacy definitions.
###### Matteo Campanelli, Bernardo David, Hamidreza Khoshakhlagh, Anders Konring, Jesper Buus Nielsen
ePrint Report
A number of recent works have constructed cryptographic protocols with flavors of adaptive security by having a randomly-chosen anonymous committee run each round. Since most of these protocols are stateful, transferring secret states from past committees to future, but still unknown, committees is a crucial challenge. Previous works have tackled this problem with approaches tailor-made for their specific setting, which mostly rely on using a blockchain to orchestrate auxiliary committees that aid in the state hand-over process. In this work, we consider this challenge as an important problem in its own right and initiate the study of Encryption to the Future (EtF) as a cryptographic primitive. First, we define a notion of a non-interactive EtF scheme where time is determined with respect to an underlying blockchain and a lottery selects parties to receive a secret message at some point in the future. While this notion seems overly restrictive, we establish two important facts: 1. if used to encrypt towards parties selected in the "far future", EtF implies witness encryption for NP over a blockchain; 2. if used to encrypt only towards parties selected in the "near future", EtF is not only sufficient for transferring state among committees as required by previous works, but also captures previous tailor-made solutions. Inspired by these results, we provide a novel construction of EtF based on witness encryption over commitments (cWE), which we instantiate from a number of standard assumptions via a construction based on generic cryptographic primitives. Finally, we show how to lift "near future" EtF to "far future" EtF with a protocol based on an auxiliary committee whose communication complexity is independent of the length of the plaintext messages being sent to the future.
###### Jan-Pieter D'Anvers, Daniel Heinz, Peter Pessl, Michiel van Beirendonck, Ingrid Verbauwhede
ePrint Report
Checking the equality of two arrays is a crucial building block of the Fujisaki-Okamoto transformation, and as such it is used in several post-quantum key encapsulation mechanisms, including Kyber and Saber. While this comparison operation is easy to perform in a black-box setting, it is hard to protect efficiently against side-channel attacks. For instance, the hash-based method by Oder et al. is limited to first-order masking, a higher-order method by Bache et al. was shown to be flawed, and a very recent higher-order technique by Bos et al. suffers from high runtime. In this paper, we first demonstrate that the hash-based approach, and likely many similar first-order techniques, succumb to a relatively simple side-channel collision attack: we can successfully recover a Kyber512 key using just 6000 traces. While this does not break the security claims, it does show the need for efficient higher-order methods. We then present a new higher-order masked comparison algorithm based on the (insecure) higher-order method of Bache et al. Our new method is 4.2x, resp. 7.5x, faster than the method of Bos et al. for a 2nd-, resp. 3rd-, order masking on the ARM Cortex-M4, and unlike the method of Bache et al., the new technique takes ciphertext compression into account. We prove correctness, security, and masking security in detail and provide performance numbers for 2nd- and 3rd-order implementations. Finally, we verify the side-channel security of our implementation using the test vector leakage assessment (TVLA) methodology.
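For context, the operation being protected is, at its core, a constant-time equality check of two byte arrays: the re-encrypted ciphertext is compared against the received one without early exit, so timing does not reveal where a mismatch occurs. A minimal unmasked Python sketch (illustrative only; the paper's subject is performing this comparison on masked shares at higher orders) might look like:

```python
def ct_equal(a, b):
    """Constant-time equality check of two equal-length byte arrays.
    Instead of returning at the first mismatch, accumulate all XOR
    differences so the running time is independent of the data."""
    assert len(a) == len(b)
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y          # any mismatching byte makes diff nonzero
    return diff == 0

print(ct_equal(b"ciphertext", b"ciphertext"))  # True
print(ct_equal(b"ciphertext", b"ciphertexT"))  # False
```

Even this data-independent control flow leaks through power side channels once the compared values are secret-dependent, which is why the masked variants discussed in the abstract are needed.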
###### Guilherme Perin, Lichao Wu, Stjepan Picek
ePrint Report
One of the main advantages promoted for deep learning in profiling side-channel analysis is the possibility of skipping the feature engineering process. Despite that, most recent publications still rely on feature selection, as the attacked interval of the side-channel measurements is pre-selected. This is similar to the worst-case security assumptions in security evaluations where the random secret shares (e.g., mask shares) are known during the profiling phase: an evaluator can identify point-of-interest locations and efficiently trim the trace interval. To broadly understand how feature selection impacts the performance of deep learning-based profiling attacks, this paper investigates four different feature selection scenarios that could realistically be used in practical security evaluations. The scenarios range from the minimum possible number of features to all available trace samples.

Our results emphasize that deep neural networks as profiling models achieve successful key recovery, independently of the explored feature selection scenario, against first-order masked software implementations of AES-128. Concerning the number of features, we make three main observations: 1) scenarios with less carefully selected points of interest and larger attacked trace intervals yield better attack performance in terms of the number of traces required during the attack phase; 2) optimizing and reducing the number of features does not necessarily improve the chances of finding good models in the hyperparameter search; and 3) in all explored feature selection scenarios, the random hyperparameter search always indicates a successful model with a single hidden layer for MLPs and two hidden layers for CNNs, which calls into question the need for complex models on the considered datasets. Our results demonstrate key recovery with a single attack trace for all datasets in at least one of the feature selection scenarios.
###### Caspar Schwarz-Schilling, Joachim Neu, Barnabé Monnot, Aditya Asgaonkar, Ertem Nusret Tas, David Tse
ePrint Report
Recently, two attacks were presented against Proof-of-Stake (PoS) Ethereum: one where short-range reorganizations of the underlying consensus chain are used to increase individual validators' profits and delay consensus decisions, and one where adversarial network delay is leveraged to stall consensus decisions indefinitely. We provide refined variants of these attacks, considerably relaxing the requirements on adversarial stake and network timing, and thus rendering the attacks more severe. Combining techniques from both refined attacks, we obtain a third attack which allows an adversary with vanishingly small fraction of stake and no control over network message propagation (assuming instead probabilistic message propagation) to cause even long-range consensus chain reorganizations. Honest-but-rational or ideologically motivated validators could use this attack to increase their profits or stall the protocol, threatening incentive alignment and security of PoS Ethereum. The attack can also lead to destabilization of consensus from congestion in vote processing.
###### Hyesun Kwak, Dongwon Lee, Yongsoo Song, Sameer Wagh
ePrint Report
Homomorphic Encryption (HE), first demonstrated in 2009, is a class of encryption schemes that enables computation over encrypted data. Recent advances in protocol design have led to the development of two different lines of HE schemes -- Multi-Party Homomorphic Encryption (MPHE) and Multi-Key Homomorphic Encryption (MKHE). These primitives cater to different applications, as each approach has its own pros and cons. At a high level, MPHE schemes tend to be much more efficient but require the set of computing parties to be fixed throughout the entire operation, frequently a limiting assumption. On the other hand, MKHE schemes tend to scale poorly (quadratically) with the number of parties but allow new parties to be added to the joint computation at any time, since they support computation between ciphertexts under different keys.

In this work, we formalize a new variant of HE called Multi-Group Homomorphic Encryption (MGHE). Stated informally, an MGHE scheme provides a seamless integration between MPHE and MKHE, thereby enjoying the best of both worlds. In this framework, a group of parties generates a public key jointly which results in the compactness of ciphertexts and the efficiency of homomorphic operations similar to MPHE. However, unlike MPHE, it also supports computations on encrypted data under different keys similar to MKHE.

We provide the first construction of such an MGHE scheme from BFV and demonstrate experimental results. More importantly, the joint public key generation procedure of our scheme is fully non-interactive so that the set of computing parties does not have to be determined and no information about other parties is needed in advance of individual key generation. At the heart of our construction is a novel re-factoring of the relinearization key.
###### Long Meng, Liqun Chen
ePrint Report
Time-stamping services produce time-stamp tokens as evidence to prove that digital data existed at given points in time. Time-stamp tokens contain verifiable cryptographic bindings between data and time, which are produced using cryptographic algorithms. In the ANSI, ISO/IEC and IETF standards for time-stamping services, cryptographic algorithms are addressed in two aspects: (i) client-side hash functions used to hash data into digests for non-disclosure, and (ii) server-side algorithms used to bind the time and the digests of data. These algorithms have limited lifespans due to their operational life cycles and the increasing computational power of attackers. Once an algorithm is compromised, time-stamp tokens using it are no longer trusted. The ANSI and ISO/IEC standards provide renewal mechanisms for time-stamp tokens, but the renewal mechanisms for client-side hash functions are specified ambiguously, which may lead to flawed implementations. Moreover, the security analyses of long-term time-stamping schemes in existing papers cover only server-side renewal; client-side renewal is missing. In this paper, we analyse the necessity of client-side renewal and propose a comprehensive long-term time-stamping scheme that addresses both client-side and server-side renewal mechanisms. We then formally analyse and evaluate the client-side security of our proposed scheme.
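The two-sided structure described above can be sketched in a few lines of Python. This is illustrative only: a real time-stamping authority signs with a public-key scheme (e.g. RSA or ECDSA per RFC 3161), whereas here an HMAC under a hypothetical `SERVER_KEY` stands in for the server's binding algorithm. The point is the separation of the client-side hashing step from the server-side binding step:

```python
import hashlib
import hmac
import time

# Hypothetical symmetric key standing in for the TSA's signing key.
SERVER_KEY = b"demo-only-key"

def client_digest(data, hash_name="sha256"):
    """Client-side step: hash the data so the document itself is never
    disclosed to the time-stamping server (only the digest is sent)."""
    return hashlib.new(hash_name, data).digest()

def issue_token(digest, timestamp=None):
    """Server-side step: bind the digest to a time value.
    HMAC stands in for the server's signature (illustrative only)."""
    if timestamp is None:
        timestamp = int(time.time())
    msg = digest + timestamp.to_bytes(8, "big")
    binding = hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()
    return {"digest": digest, "time": timestamp, "binding": binding}

def verify_token(data, token, hash_name="sha256"):
    """Re-hash the data and check the server's binding."""
    msg = client_digest(data, hash_name) + token["time"].to_bytes(8, "big")
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token["binding"])
```

In this picture, client-side renewal (the ambiguously specified part the paper targets) amounts to re-hashing the data under a fresh hash function and obtaining a new token before the old hash function is compromised.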