International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

27 December 2022

Shaza Elsharief, Lilas Alrahis, Johann Knechtel, Ozgur Sinanoglu
ePrint Report
Logic locking/obfuscation secures hardware designs from untrusted entities throughout the globalized semiconductor supply chain. Machine learning (ML) recently challenged the security of locking: such attacks successfully captured the locking-induced, structural design modifications to decipher obfuscated gates. Although routing obfuscation eliminates this threat, more recent attacks exposed new vulnerabilities, like link formation, breaking such schemes. Thus, there is still a need for advanced, truly learning-resilient locking solutions.

Here we propose IsoLock, a provably-secure locking scheme that utilizes isomorphic structures which ML models and other structural methods cannot discriminate. Unlike prior work, IsoLock’s security promise relies neither on re-synthesis nor on dedicated sub-circuits. Instead, IsoLock introduces isomorphic key-gate structures within the design via systematic routing obfuscation. We theoretically prove the security of IsoLock against modeling attacks. Further, we lock ISCAS-85 and ITC-99 benchmarks and launch state-of-the-art ML attacks, SCOPE and MuxLink, as well as the Redundancy and SAAM attacks, which decipher only 0–6% of the key on average, confirming the resilience of IsoLock. All in all, IsoLock is proposed to break the “cat and mouse” cycle of locking and attack studies, through a provably-secure locking approach against structural ML attacks.
Maxime Bombar, Alain Couvreur, Thomas Debris-Alazard
ePrint Report
Code-based cryptography relies, among other things, on the hardness of the Decoding Problem. While the search version is well understood, both from practical and theoretical standpoints, the decision version has been less studied in the literature. Until the work of Brakerski et al. (Eurocrypt 2019), no worst-case to average-case reduction was even known, and most reductions focus on the plain unstructured Decoding Problem. In the world of Euclidean lattices though, the situation is rather different and many reductions exist, both for unstructured and structured versions of the underlying problems. For the latter versions, a powerful tool called the O(H)CP framework has been introduced by Peikert et al. (STOC 2017) and has proved itself very useful as a black box inside reductions. In this work, we adapt this technique to the coding-theoretic setting, and give a new worst-case hardness result for the decision Decoding Problem, as well as a new (average-case) search-to-decision reduction. We then turn to the structured version and explain why this is not as straightforward as for Euclidean lattices. While we fall short of giving a search-to-decision reduction in this case, we believe that our work opens the way towards new reductions for structured codes, given that the O(H)CP framework proved itself to be so powerful in lattice-based cryptography. Furthermore, we also believe that this technique could be extended to codes endowed with other metrics, such as the rank metric, for which no search-to-decision reduction is known.
Kevin Carrier, Yixin Shen, Jean-Pierre Tillich
ePrint Report
We present a faster dual lattice attack on the Learning with Errors (LWE) problem, based on ideas from coding theory. Basically, it consists of revisiting the most recent dual attack of Matzov (2022) and replacing modulus switching by a decoding algorithm. This replacement achieves a reduction from small LWE to plain LWE with a very significant reduction of the secret dimension. We also replace the enumeration part of this attack by betting that the secret is zero on the part where we want to enumerate it and iterating this bet over other choices of the enumeration part. We estimate the complexity of this attack by making the optimistic, but realistic, guess that we can use polar codes for this decoding task. We show that under this assumption the best attacks on Kyber and Saber can be improved by 1 and 6 bits, respectively.
Paolo Santini, Marco Baldi, Franco Chiaraluce
ePrint Report
The Permuted Kernel Problem (PKP) asks to find a permutation which maps an input matrix into the kernel of some given matrix. The literature exhibits several works studying its hardness in the case of the input matrix being mono-dimensional (i.e., a vector), while the multi-dimensional case has received much less attention and, de facto, only the case of a binary ambient finite field has been studied. The Subcode Equivalence Problem (SEP), instead, asks to find a permutation so that a given linear code becomes a subcode of another given code. To the best of our knowledge, no algorithm to solve the SEP has ever been proposed. In this paper we study the computational hardness of solving these problems. We first show that, despite going by different names, PKP and SEP are exactly the same problem. Then we consider the state-of-the-art solver for the mono-dimensional PKP (namely, the KMP algorithm), generalize it to the multi-dimensional case, and analyze both the finite and the asymptotic regimes. We further propose a new algorithm, which can be thought of as a refinement of KMP. In the asymptotic regime our algorithm becomes slower than existing solutions but, in the finite regime (and for parameters of practical interest), it achieves competitive performance. As evidence, we show that it is the fastest algorithm to attack several recommended instances of cryptosystems based on PKP. As a side effect, given the already mentioned equivalence between PKP and SEP, all the algorithms we analyze in this paper can be used to solve instances of the latter problem.
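For readers less familiar with these problems, here is a compact statement of the multi-dimensional PKP in our own illustrative notation, not necessarily the authors' exact formulation:

```latex
% Multi-dimensional Permuted Kernel Problem (illustrative notation).
% Given A in F_p^{m x n} and V in F_p^{l x n}, find a permutation pi of
% {1,...,n} such that every row of the column-permuted V lies in ker(A):
\[
  \mathbf{A}\,\mathbf{V}_\pi^{\top} = \mathbf{0},
  \qquad\text{where}\qquad
  (\mathbf{V}_\pi)_{i,j} = \mathbf{V}_{i,\pi(j)} .
\]
% With l = 1 this is the classical (mono-dimensional) PKP. Viewing ker(A)
% as a code and the row span of V as another code, the same condition says
% the permuted code is a subcode of ker(A), which reflects the claimed
% equivalence with the Subcode Equivalence Problem.
```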

25 December 2022

Pascal Lafourcade, Gael Marcadet, Léo Robert
ePrint Report
The verification of computations performed by an untrusted server is a cornerstone for delegated computations, especially in the multi-client setting where inputs are provided by different parties. Assuming a common secret between clients, a garbled circuit offers the attractive property of ensuring the correctness of a result computed by the untrusted server while keeping the input and the function private. Yet, this verification can be guaranteed only once. Based on the notion of multi-key homomorphic encryption (MKHE), we propose RMC-PVC, a multi-client verifiable computation protocol able to verify the correctness of computations performed by an untrusted server for inputs (encoded for a garbled circuit) provided by multiple clients. Thanks to MKHE, the garbled circuit is reusable an arbitrary number of times. In addition, each client can verify the computation on its own. Compared to a single-key FHE scheme, the use of MKHE in RMC-PVC reduces the workload of the server and thus the response delay for the client. It also enforces the privacy of the inputs, which are provided by different clients.
Adithya Vadapalli, Ryan Henry, Ian Goldberg
ePrint Report
We design, analyze, and implement Duoram, a fast and bandwidth-efficient distributed ORAM protocol suitable for secure 2- and 3-party computation settings. Following Doerner and shelat's Floram construction (CCS 2017), Duoram leverages (2,2)-distributed point functions (DPFs) to represent PIR and PIR-writing queries compactly—but with a host of innovations that yield massive asymptotic reductions in communication cost and notable speedups in practice, even for modestly sized instances. Specifically, Duoram introduces a novel method for evaluating dot products of certain secret-shared vectors using communication that is only logarithmic in the vector length. As a result, for memories with $n$ addressable locations, Duoram can perform a sequence of $m$ arbitrarily interleaved reads and writes using just $O(m\lg n)$ words of communication, compared with Floram's $O(m \sqrt{n})$ words. Moreover, most of this work can occur during a data-independent preprocessing phase, leaving just $O(m)$ words of online communication cost for the sequence—i.e., a constant online communication cost per memory access.
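To get a feel for the asymptotic gap, here is a back-of-the-envelope comparison of the two communication costs; constants are dropped, so the numbers are only indicative, not measurements from either system.

```python
import math

def words_floram(m: int, n: int) -> float:
    """Floram-style cost: O(m * sqrt(n)) words (constants dropped)."""
    return m * math.sqrt(n)

def words_duoram(m: int, n: int) -> float:
    """Duoram-style cost: O(m * lg n) words (constants dropped)."""
    return m * math.log2(n)

if __name__ == "__main__":
    m = 1_000                              # number of interleaved reads/writes
    for n in (2**16, 2**20, 2**30):        # addressable memory sizes
        print(f"n = 2^{int(math.log2(n)):2d}: "
              f"O(m*sqrt(n)) ~ {words_floram(m, n):,.0f} words, "
              f"O(m*lg n) ~ {words_duoram(m, n):,.0f} words")
```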
Francisco Blas Izquierdo Riera, Magnus Almgren, Pablo Picazo-Sanchez, Christian Rohner
ePrint Report
Password security relies heavily on the choice of password by the user but also on the one-way hash functions used to protect stored passwords. To compensate for the increased computing power of attackers, modern password hash functions like Argon2 have been made more complex in terms of computational power and memory requirements. Nowadays, the computation of such hash functions is usually performed by the server (or authenticator) instead of the client. Therefore, constrained Internet of Things devices cannot use such functions when authenticating users. Additionally, the load of computing such functions may expose servers to denial-of-service attacks. In this work, we discuss client-side hashing as an alternative. We propose Clipaha, a client-side hashing scheme that allows using high-security password hashing even on highly constrained server devices. Clipaha is robust to a broader range of attacks compared to previous work and covers important and complex usage scenarios. Our evaluation discusses critical aspects involved in client-side hashing. We also provide an implementation of Clipaha in the form of a web library and benchmark the library on different systems to understand the limitations of its mixed JavaScript and WebAssembly approach. Benchmarks show that our library is 50% faster than similar libraries and can run on some devices where previous work fails.
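As a rough illustration of the client-side idea (not Clipaha itself, nor its web library): the client derives a salt from public values and runs the expensive memory-hard hash locally, so the server only handles the derived value. The sketch below uses the Python standard library's scrypt as a stand-in for Argon2, and the salt-derivation scheme is our own simplification.

```python
import hashlib

def client_side_hash(username: str, service: str, password: str) -> bytes:
    """Client does the expensive memory-hard hashing; the server never sees
    the plaintext password, only this derived value (illustrative only)."""
    # Deterministic, public salt so any client can recompute it.
    salt = hashlib.blake2b(f"{service}:{username}".encode(),
                           digest_size=16).digest()
    # Memory-hard KDF run on the *client*; scrypt stands in for Argon2 here.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def server_store(client_hash: bytes) -> bytes:
    """Server treats the received value as the 'password' and applies one
    cheap extra hash before storage, affordable even on constrained devices."""
    return hashlib.sha256(client_hash).digest()

if __name__ == "__main__":
    h = client_side_hash("alice", "example.org", "correct horse battery staple")
    print(server_store(h).hex())
```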
Aggelos Kiayias, Feng-Hao Liu, Yiannis Tselekounis
ePrint Report
$\ell$-more extractable hash functions were introduced by Kiayias et al. (CCS '16) as a strengthening of extractable hash functions by Goldwasser et al. (Eprint '11) and Bitansky et al. (ITCS '12, Eprint '14). In this work, we define and study an even stronger notion of leakage-resilient $\ell$-more extractable hash functions, and instantiate the notion under the same assumptions used by Kiayias et al. and Bitansky et al. In addition, we prove that any hash function that can be modeled as a Random Oracle (RO) is leakage-resilient $\ell$-more extractable, while it is, however, not extractable according to the definitions of Goldwasser et al. and Bitansky et al., showing a separation between the notions.

We show that this tool has many interesting applications to non-malleable cryptography. Particularly, we can derive efficient, continuously non-malleable, leakage-resilient codes against split-state attackers (TCC '14), both in the CRS and the RO model. Additionally, we can obtain succinct non-interactive non-malleable commitments both in the CRS and the RO model, satisfying a stronger definition than the prior ones by Crescenzo et al. (STOC '98), and Pass and Rosen (STOC '05), in the sense that the simulator does not require access to the original message, while the attacker's auxiliary input is allowed to depend on it.
Thomas Debris-Alazard, Nicolas Resch
ePrint Report
In this work, we consider the worst- and average-case hardness of the decoding problems that are the basis for code-based cryptography. By a decoding problem, we consider inputs of the form $(\mathbf{G}, \mathbf{m} \mathbf{G} + \mathbf{t})$ for a matrix $\mathbf{G}$ that generates a code and a noise vector $\mathbf{t}$, and the algorithm's goal is to recover $\mathbf{m}$. We consider a natural strategy for creating a reduction to an average-case problem: from our input we simulate a Learning Parity with Noise (LPN) oracle, where we recall that LPN is essentially an average-case decoding problem where there is no a priori lower bound on the rate of the code. More formally, the oracle $\mathcal{O}_{\mathbf{x}}$ outputs independent samples of the form $\langle \mathbf{x}, \mathbf{a} \rangle + e$, where $\mathbf{a}$ is a uniformly random vector and $e$ is a noise bit. Such an approach is implicit in the previous worst-case to average-case reductions for coding problems (Brakerski et al., Eurocrypt 2019; Yu and Zhang, CRYPTO 2021).

To analyze the effectiveness of this reduction, we use a smoothing bound derived recently by Debris-Alazard et al. (IACR ePrint 2022), which quantifies the simulation error of this reduction. It is worth noting that this latter work crucially uses a bound, known as the second linear programming bound, on the weight distribution of the code generated here by $\mathbf{G}$. Our approach, which is Fourier-analytic in nature, applies to any smoothing distribution (so long as it is radial); for our purposes, the best choice appears to be Bernoulli (although for the analysis it is most effective to study the uniform distribution over a sphere, and subsequently translate the bound back to the Bernoulli distribution by applying a truncation trick). Our approach works naturally when reducing from a worst-case instance, as well as from an average-case instance.

While we are unable to improve the parameters of the worst-case to average-case reductions of Brakerski et al. or Yu and Zhang, we think that our work highlights two important points. Firstly, in analyzing the average-case to average-case reduction we run into inherent limitations of this reduction template. Essentially, it appears hopeless to reduce to an LPN instance for which the noise rate is more than inverse-polynomially biased away from uniform. We furthermore uncover a surprising weakness in the second linear programming bound: we observe that it is essentially useless for the regime of parameters where the rate of the code is inverse polynomial in the block length. By highlighting these shortcomings, we hope to stimulate the development of new techniques for reductions between cryptographic decoding problems.
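The LPN oracle $\mathcal{O}_{\mathbf{x}}$ described above is easy to state concretely; the toy sketch below implements only the oracle itself (with arbitrary illustrative parameters), not the paper's reduction or its smoothing analysis.

```python
import random

def lpn_oracle(x, tau):
    """One sample from the LPN oracle O_x: a uniformly random vector a
    together with <x, a> + e over GF(2), where e is 1 with probability tau."""
    n = len(x)
    a = [random.getrandbits(1) for _ in range(n)]
    e = 1 if random.random() < tau else 0
    b = (sum(ai & xi for ai, xi in zip(a, x)) + e) % 2
    return a, b

if __name__ == "__main__":
    n, tau = 16, 0.125                     # toy secret length and noise rate
    secret = [random.getrandbits(1) for _ in range(n)]
    for a, b in (lpn_oracle(secret, tau) for _ in range(5)):
        print(a, b)
```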
Dario Fiore, Lydia Garms, Dimitris Kolonelos, Claudio Soriente, Ida Tucker
ePrint Report
Anonymous authentication primitives, e.g., group or ring signatures, allow one to realize privacy-preserving data collection applications, as they strike a balance between authenticity of data being collected and privacy of data providers. At PKC 2021, Diaz and Lehmann defined group signatures with User-Controlled Linkability (UCL) and provided an instantiation based on BBS+ signatures. In a nutshell, a signer of a UCL group signature scheme can link any of her signatures: linking evidence can be produced at signature time, or after signatures have been output, by providing an explicit linking proof. In this paper, we introduce Ring Signatures with User-Controlled Linkability (RS-UCL). Compared to group signatures with user-controlled linkability, RS-UCL require no group manager and can be instantiated in a completely decentralized manner. We also introduce a variation, User Controlled and Autonomous Linkability (RS-UCAL), which gives the user full control of the linkability of their signatures. We provide a formal model for both RS-UCL and RS-UCAL and introduce a compiler that can upgrade any ring signature scheme to RS-UCAL. The compiler leverages a new primitive we call Anonymous Key Randomizable Signatures (AKRS) — a signature scheme where the verification key can be randomized — that can be of independent interest. We also provide different instantiations of AKRS based on Schnorr signatures and on lattices. Finally, we show that an AKRS scheme can additionally be used to construct an RS-UCL scheme.
Lih-Chung Wang, Po-En Tseng, Yen-Liang Kuan, Chun-Yen Chou
ePrint Report
In this paper, we propose a simple noncommutative-ring-based UOV signature scheme with key-randomness alignment: Simple NOVA, which can be viewed as a simplified version of NOVA [48]. We simplify the design of NOVA by skipping the perturbation trick used in NOVA, thus shortening the key generation process and accelerating signing and verification. Together with a few corresponding modifications, this alternative version of NOVA is also secure and may be more suitable for practical use. We also implement the scheme in Magma and give a detailed security analysis against known major attacks.
Bhuvnesh Chaturvedi, Anirban Chakraborty, Ayantika Chatterjee, Debdeep Mukhopadhyay
ePrint Report
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without the need for decryption. Therefore, in the world of cloud computing, FHE provides an essential means for users to garner different computational services from potentially untrusted servers while keeping sensitive data private. In such a context, the security and privacy guarantees of well-known FHE schemes become paramount. In a research article, we (Chaturvedi et al., ePrint 2022/1563) have shown that popular FHE schemes like TFHE and FHEW are vulnerable to CVO (Ciphertext Verification Oracle) attacks, which belong to the family of “reaction attacks” [6]. We show, for the first time, that feedback from the client (user) can be craftily used by the server to extract the error (noise) associated with each computed ciphertext. Once the errors for some m ciphertexts (m > n, where n is the key size) are retrieved, the original secret key can be trivially leaked using the standard Gaussian Elimination method. The results in the paper (Chaturvedi et al., ePrint 2022/1563) show that FHE schemes should be subjected to further security evaluations, specifically in the context of system-wide implementation, such that CVO-based attacks can be eliminated. Quite recently, Michael Walter published a document (ePrint 2022/1722) claiming that the timing channels we used in our work (Chaturvedi et al., ePrint 2022/1563) “are false”. In this document, we debunk this claim and explain how we use the timing channel to improve the CVO attack. We explain that the CVO-based attack technique we proposed in the paper (Chaturvedi et al., ePrint 2022/1563) results from a careful selection of perturbation values and is the first work in the literature to show that reaction-based attacks are possible against present FHE schemes in a realistic cloud setting. We further argue that, for an attacker, any additional information that can aid a particular attack should be considered as leakage and must be treated with due importance to stymie the attack.
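The final key-recovery step mentioned above, solving for the secret once m > n noise-free linear relations are known, is plain Gaussian elimination over GF(2). The sketch below is a generic solver of our own, not tied to TFHE/FHEW or to the attack code.

```python
def solve_gf2(rows, rhs):
    """Solve A*s = b over GF(2) by Gaussian elimination.
    rows: m binary lists of length n (m >= n); rhs: m bits.
    Returns a solution s, assuming the system is consistent and full rank."""
    m, n = len(rows), len(rows[0])
    A = [row[:] + [b] for row, b in zip(rows, rhs)]   # augmented matrix
    pivot_row, pivots = 0, []
    for col in range(n):
        # Find a row with a 1 in this column at or below pivot_row.
        sel = next((r for r in range(pivot_row, m) if A[r][col]), None)
        if sel is None:
            continue
        A[pivot_row], A[sel] = A[sel], A[pivot_row]
        # Eliminate this column from every other row (XOR of rows).
        for r in range(m):
            if r != pivot_row and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[pivot_row])]
        pivots.append(col)
        pivot_row += 1
    s = [0] * n
    for r, col in enumerate(pivots):
        s[col] = A[r][n]
    return s

if __name__ == "__main__":
    import random
    n = 8
    secret = [random.getrandbits(1) for _ in range(n)]
    rows = [[random.getrandbits(1) for _ in range(n)] for _ in range(2 * n)]
    rhs = [sum(a & s for a, s in zip(row, secret)) % 2 for row in rows]
    print(solve_gf2(rows, rhs) == secret)
```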

19 December 2022

Markus Krausz, Georg Land, Jan Richter-Brockmann, Tim Güneysu
ePrint Report
The sampling of polynomials with fixed weight is a procedure required by all remaining round-4 Key Encapsulation Mechanisms (KEMs) for Post-Quantum Cryptography (PQC) standardization (BIKE, HQC, McEliece) as well as NTRU, Streamlined NTRU Prime, and NTRU LPRime. Recent attacks have shown that side-channel leakage of sampling methods can be practically exploited for key recoveries. While countermeasures against such timing attacks have already been presented, there is still no comprehensive work covering solutions that are also secure against power side-channels. Aiming to close this important gap, the contribution of our work is threefold: First, we analyze requirements for the different use cases of fixed-weight sampling. Second, we demonstrate how all known sampling methods can be implemented securely against timing and power/EM side-channels and propose performance-enhancing modifications. Furthermore, we propose a new, comparison-based methodology that outperforms existing methods in the masked setting for the three round-4 KEMs BIKE, HQC, and McEliece. Third, we present bitsliced and arbitrary-order masked software implementations and benchmark them for all relevant cryptographic schemes to be able to infer recommendations for each use case. Additionally, we provide a hardware implementation of our new method as a case study and analyze the feasibility of implementing the other approaches in hardware.
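For orientation, fixed-weight sampling here means drawing a length-$n$ coefficient vector with exactly $w$ non-zero entries. The naive rejection-based sketch below is purely illustrative: its secret-dependent running time is exactly the behaviour that the timing- and power-protected samplers in this work are designed to avoid.

```python
import secrets

def fixed_weight_vector(n, w):
    """Return a length-n 0/1 vector of Hamming weight exactly w.
    Naive reference version: the rejection loop has data-dependent timing,
    which constant-time/masked samplers must avoid."""
    assert 0 <= w <= n
    v = [0] * n
    chosen = set()
    while len(chosen) < w:
        i = secrets.randbelow(n)      # may repeat: secret-dependent iterations
        if i not in chosen:
            chosen.add(i)
            v[i] = 1
    return v

if __name__ == "__main__":
    poly = fixed_weight_vector(n=64, w=11)
    print(poly, sum(poly))
```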
Alexandra Babueva, Liliya Akhmetzyanova, Evgeny Alekseev, Oleg Taraskin
ePrint Report
Blind signature schemes are an essential element of many complex information systems such as e-cash and e-voting systems. They should provide two security properties: unforgeability and blindness. The former is standard for all signature schemes and ensures that a valid signature can be generated only during interaction with the secret signing key holder. The latter is more specific to this class of signature schemes and means that there is no way to link a (message, signature) pair to a certain execution of the signing protocol. In the current paper we discuss the blindness property and various security notions formalizing it. We analyze several ElGamal-type blind signature schemes with regard to blindness. We present effective attacks violating blindness on three schemes. All the presented attacks may be performed by any external observer and do not require knowledge of the signing key. One of the schemes was presumably broken due to an incorrect understanding of the blindness property.
Julien Béguinot, Wei Cheng, Sylvain Guilley, Yi Liu, Loïc Masure, Olivier Rioul, François-Xavier Standaert
ePrint Report
At Eurocrypt 2015, Duc et al. conjectured that the success rate of a side-channel attack targeting an intermediate computation encoded in a linear secret-sharing, a.k.a masking with \(d+1\) shares, could be inferred by measuring the mutual information between the leakage and each share separately. This way, security bounds can be derived without having to mount the complete attack. So far, the best proven bounds for masked encodings were nearly tight with the conjecture, up to a constant factor overhead equal to the field size, which may still give loose security guarantees compared to actual attacks. In this paper, we improve upon the state-of-the-art bounds by removing the field size loss, in the cases of Boolean masking and arithmetic masking modulo a power of two. As an example, when masking in the AES field, our new bound outperforms the former ones by a factor \(256\). Moreover, we provide theoretical hints that similar results could hold for masking in other fields as well.
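For readers less familiar with the setting, Boolean masking with \(d+1\) shares simply splits a sensitive value into shares whose XOR equals it, so that any \(d\) of them are uniformly distributed. A minimal, purely illustrative sketch (not a side-channel-protected implementation):

```python
import secrets

def mask_boolean(x, d):
    """Split byte x into d+1 shares with x = share_0 XOR ... XOR share_d."""
    shares = [secrets.randbelow(256) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unmask_boolean(shares):
    """Recombine the shares by XOR to recover the masked value."""
    out = 0
    for s in shares:
        out ^= s
    return out

if __name__ == "__main__":
    x, d = 0xA7, 3                       # one AES-field byte, 3rd-order masking
    shares = mask_boolean(x, d)
    print(shares, hex(unmask_boolean(shares)))
```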
Azade Rezaeezade, Lejla Batina
ePrint Report
Despite considerable achievements of deep learning-based side-channel analysis, overfitting represents a significant obstacle in finding optimized neural network models. This issue is not unique to the side-channel domain. Regularization techniques are popular solutions to overfitting and have long been used in various domains. At the same time, the works in the side-channel domain show sporadic utilization of regularization techniques. What is more, no systematic study investigates these techniques' effectiveness. In this paper, we aim to investigate the regularization effectiveness by applying four powerful and easy-to-use regularization techniques to six combinations of datasets, leakage models, and deep-learning topologies. The investigated techniques are $L_1$, $L_2$, dropout, and early stopping. Our results show that while all these techniques can improve performance in many cases, $L_1$ and $L_2$ are the most effective. Finally, if training time matters, early stopping is the best technique to choose.
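To make the four techniques concrete, the sketch below indicates where each one plugs into a small PyTorch training loop; the toy model, data, and hyper-parameters are placeholders of ours, not the configurations evaluated in the paper.

```python
import torch
from torch import nn

# Toy stand-ins for a side-channel dataset: 1000 traces of 100 samples,
# labels in 0..255 (e.g. an identity leakage model on one key byte).
X = torch.randn(1000, 100)
y = torch.randint(0, 256, (1000,))

model = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    nn.Dropout(p=0.3),                      # (1) dropout regularization
    nn.Linear(64, 256),
)

# (2) L2 regularization via weight decay inside the optimizer.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()
l1_lambda = 1e-5
best_loss, patience, bad_epochs = float("inf"), 5, 0

for epoch in range(100):
    opt.zero_grad()
    logits = model(X)
    # (3) L1 regularization added explicitly to the loss.
    l1_term = sum(p.abs().sum() for p in model.parameters())
    loss = loss_fn(logits, y) + l1_lambda * l1_term
    loss.backward()
    opt.step()
    # (4) Early stopping: stop when the loss stops improving; in practice
    # this would be monitored on a held-out validation set.
    if loss.item() < best_loss - 1e-4:
        best_loss, bad_epochs = loss.item(), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```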
Maria Corte-Real Santos, Craig Costello, Sam Frengley
ePrint Report
We develop an efficient algorithm to detect whether a superspecial genus 2 Jacobian is optimally $(N, N)$-split for each integer $N \leq 11$. Incorporating this algorithm into the best-known attack against the superspecial isogeny problem in dimension 2 gives rise to significant cryptanalytic improvements. Our implementation shows that when the underlying prime $p$ is 100 bits, the attack is sped up by a factor $25{\tt x}$; when the underlying prime is 200 bits, the attack is sped up by a factor $42{\tt x}$; and, when the underlying prime is 1000 bits, the attack is sped up by a factor $160{\tt x}$.
Xianrui Qin, Shimin Pan, Arash Mirzaei, Zhimei Sui, Oğuzhan Ersoy, Amin Sakzad, Muhammed F. Esgin, Joseph K. Liu, Jiangshan Yu, Tsz Hon Yuen
ePrint Report
Payment Channel Hub (PCH) is a promising solution to the scalability issue of first-generation blockchains or cryptocurrencies such as Bitcoin. It supports off-chain payments between a sender and a receiver through an intermediary (called the tumbler). Relationship anonymity and value privacy are desirable features of privacy-preserving PCHs, which prevent the tumbler from identifying the sender and receiver pairs as well as the payment amounts. To our knowledge, all existing Bitcoin-compatible PCH constructions that guarantee relationship anonymity allow only a (predefined) fixed payment amount. Thus, to achieve payments with different amounts, they would require either multiple PCH systems or running one PCH system multiple times. Neither of these solutions would be deemed practical.

In this paper, we propose the first Bitcoin-compatible PCH that achieves relationship anonymity and supports variable amounts for payment. To achieve this, we have several layers of technical constructions, each of which could be of independent interest to the community. First, we propose $\textit{BlindChannel}$, a novel bi-directional payment channel protocol for privacy-preserving payments, where one of the channel parties is unable to see the channel balances. Then, we further propose $\textit{BlindHub}$, a three-party (sender, tumbler, receiver) protocol for private conditional payments, where the tumbler pays to the receiver only if the sender pays to the tumbler. The appealing additional feature of BlindHub is that the tumbler cannot link the sender and the receiver while supporting a variable payment amount. To construct BlindHub, we also introduce two new cryptographic primitives as building blocks, namely $\textit{Blind Adaptor Signature}$ (BAS), and $\textit{Flexible Blind Conditional Signature}$. BAS is an adaptor signature protocol built on top of a blind signature scheme. Flexible Blind Conditional Signature is a new cryptographic notion enabling us to provide an atomic and privacy-preserving PCH. Lastly, we instantiate both BlindChannel and BlindHub protocols and present implementation results to show their practicality.
Thomas Peyrin, Quan Quan Tan
ePrint Report
Cryptanalysts have been looking for differential characteristics in ciphers for decades, and it remains unclear how exactly the subkey values, and more generally the Markov assumption, impact their probability estimation. There have been theoretical efforts considering some simple linear relationships between differential characteristics and subkey values, but the community has not yet explored the many possible nonlinear dependencies one can find in differential characteristics. Meanwhile, the overwhelming majority of cryptanalysis works still assume complete independence between the cipher rounds.

We give here a practical framework and a corresponding tool to investigate all such linear or nonlinear effects and we show that they can have an important impact on the security analysis of many ciphers. Surprisingly, this invalidates many differential characteristics that appeared in the literature in the past years: we have checked differential characteristics from 8 articles (4 each for SKINNY and GIFT) and most of these published paths are impossible or work only for a very small proportion of the key space. We applied our method to SKINNY and GIFT, but we expect more impossibilities for other ciphers.

To showcase our advances in the dependency analysis, in the case of SKINNY we are able to obtain a more accurate probability distribution of a differential characteristic with respect to the keys (with practical verification when it is computationally feasible). Our work indicates that newly proposed differential characteristics should now come with an analysis of how the key values and the Markov assumption might or might not affect/invalidate them. In this direction, more constructively, we include a proof of concept of how one can incorporate additional constraints into Constraint Programming so that the search for differential characteristics can avoid (to a large extent) differential characteristics that are actually impossible due to the dependency issues our tool detects.
Benoît Libert, Alain Passelègue, Mahshid Riahinia
ePrint Report
Non-committing encryption (NCE) is an advanced form of public-key encryption which guarantees the security of a Multi-Party Computation (MPC) protocol in the presence of an adaptive adversary. Brakerski et al. (TCC 2020) recently proposed an intermediate notion, termed Packed Encryption with Partial Equivocality (PEPE), which implies NCE and preserves the ciphertext rate (up to a constant factor). In this work, we propose three new constructions of rate-1 PEPE based on standard assumptions. In particular, we obtain the first constant ciphertext-rate NCE construction from the LWE assumption with polynomial modulus, and from the Subgroup Decision assumption. We also propose an alternative DDH-based construction with guaranteed polynomial running time.