IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
23 September 2022
Lorenzo Grassi
In this paper, we re-investigate the Lai-Massey scheme, originally proposed in the cipher IDEA. Due to its similarity to the Feistel schemes, and due to the existence of invariant subspace attacks as originally pointed out by Vaudenay at FSE 1999, the Lai-Massey scheme has received little attention from the community. As a first contribution, we propose new generalizations of this scheme that are not (affine) equivalent to any generalized Feistel scheme proposed in the literature so far. Then, inspired by the recent Horst construction, we propose the Amaryllises construction as a generalization of the Lai-Massey scheme, in which the linear combination in the Lai-Massey scheme is replaced by a non-linear one. Besides proposing concrete examples of the Amaryllises construction, we discuss its (possible) advantages and disadvantages with respect to other existing schemes/constructions published in the literature, with particular attention to the Lai-Massey and Horst constructions.
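For readers unfamiliar with the scheme, the following is a minimal Python sketch of one classical Lai-Massey round under assumed toy parameters; the modulus, the round function F, and the orthomorphism are illustrative placeholders, not the paper's Amaryllises construction.

```python
# Minimal toy of one Lai-Massey round over Z_p (illustrative only; not the
# paper's Amaryllises construction). sigma(x) = 2x mod p is an orthomorphism
# for odd p: both sigma and sigma - identity are permutations. Without sigma,
# x - y would be invariant across rounds (Vaudenay's FSE 1999 observation).
p = 2**16 + 1  # toy Fermat-prime modulus, IDEA-style

def F(x, k):
    return (x * x + k) % p  # placeholder keyed round function

def sigma(x):
    return (2 * x) % p

def round_enc(x, y, k):
    t = F((x - y) % p, k)            # both branches receive the same offset
    return sigma((x + t) % p), (y + t) % p

def round_dec(x, y, k):
    x = (x * pow(2, -1, p)) % p      # undo sigma
    t = F((x - y) % p, k)            # x - y is unchanged by the offset
    return (x - t) % p, (y - t) % p

assert round_dec(*round_enc(12345, 54321, k=7), k=7) == (12345, 54321)
```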
19 September 2022
Yu Long Chen
Constructions based on two public permutation calls are very common in today’s cryptographic community. However, each time a new construction is introduced, a dedicated proof must be carried out to study its security. In this work, we propose a new tool to analyze the security of these constructions in a modular way. This tool is built on the idea of the classical mirror theory for block-cipher-based constructions, such that it can be used for security proofs in the ideal permutation model. We present different variants of this public permutation mirror theory so that it is suitable for different security notions.
We also present a framework to use the new techniques, which provides the bad events that need to be excluded in order to apply the public permutation mirror theory. Furthermore, we showcase the new technique on three examples: the Tweakable Even-Mansour cipher by Cogliati et al. (CRYPTO ’15), the two permutation variant of the pEDM PRF by Dutta et al. (ToSC ’21(2)), and the two permutation variant of the nEHtM\(_p\) MAC algorithm by Dutta and Nandi (AFRICACRYPT ’20). With this new tool we prove the multi-user security of these constructions in a considerably simplified way.
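As context for the first showcased example, here is a hedged Python toy of the Tweakable Even-Mansour construction $\widetilde{E}_k(t, x) = P(x \oplus H_k(t)) \oplus H_k(t)$; the 64-bit permutation and the hash-based stand-in for the universal hash $H_k$ are assumptions made to keep the sketch self-contained, not the construction's actual components.

```python
import hashlib

# Hedged toy of Tweakable Even-Mansour: E_k(t, x) = P(x ^ H_k(t)) ^ H_k(t).
# P is a toy *public* 64-bit permutation and H is a hash-based stand-in for
# the universal hash family the construction requires (illustrative only).
MASK = (1 << 64) - 1
C = 0x9E3779B97F4A7C15

def P(x):
    x = (x + C) & MASK
    return x ^ (x >> 31)

def P_inv(y):
    y = y ^ (y >> 31) ^ (y >> 62)  # invert the xorshift step
    return (y - C) & MASK

def H(k, t):
    return int.from_bytes(hashlib.sha256(k + t).digest()[:8], "big")

def tem_enc(k, t, x):
    m = H(k, t)
    return P(x ^ m) ^ m

def tem_dec(k, t, c):
    m = H(k, t)
    return P_inv(c ^ m) ^ m

assert tem_dec(b"k", b"t1", tem_enc(b"k", b"t1", 0xDEAD)) == 0xDEAD
```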
Hanno Becker, Matthias J. Kannwischer
This paper presents two new techniques for the fast implementation of the Keccak permutation on the A-profile of the Arm architecture: first, the elimination of explicit rotations in the Keccak permutation through barrel shifting, applicable to scalar AArch64 implementations of Keccak-f1600; second, the construction of hybrid implementations concurrently leveraging both the scalar and the Neon instruction sets of AArch64. The resulting performance improvements are demonstrated using the example of the hash-based signature scheme SPHINCS+, one of the recently announced winners of the NIST post-quantum cryptography project: we achieve up to 1.89× performance improvements compared to the state of the art. Our implementations target the Arm Cortex-{A55,A510,A78,A710,X1,X2} processors common in client devices such as mobile phones.
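To give an intuition for the rotation-elimination idea, here is a Python model (an assumed simplification, not the paper's Arm assembly): each lane carries a pending rotation that is folded into the next XOR, the way an AArch64 shifted-register operand such as EOR Xd, Xn, Xm, ROR #imm absorbs a rotation at no extra cost.

```python
# Illustrative model of "lazy" rotations: a 64-bit lane is kept as
# (value, pending_rotation), and rotations are materialized only when folded
# into a following XOR, mirroring the AArch64 barrel shifter.
MASK = (1 << 64) - 1

def rotl(x, r):
    r %= 64
    return ((x << r) | (x >> (64 - r))) & MASK if r else x

def lazy_xor(a, b):
    # XOR of lanes a = (va, ra) and b = (vb, rb), i.e. rotl(va,ra) ^ rotl(vb,rb).
    # Only b's rotation is materialized, relative to a's pending rotation;
    # the result inherits a's pending rotation.
    (va, ra), (vb, rb) = a, b
    return (va ^ rotl(vb, (rb - ra) % 64), ra)

def value(a):
    v, r = a
    return rotl(v, r)

x, y = 0x0123456789ABCDEF, 0xFEDCBA9876543210
a = (x, 13)   # lane x with a pending rotation by 13
b = (y, 7)    # lane y with a pending rotation by 7
assert value(lazy_xor(a, b)) == rotl(x, 13) ^ rotl(y, 7)
```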
Amos Treiber, Dirk Müllmann, Thomas Schneider, Indra Spiecker genannt Döhmann
Pushes for increased power of Law Enforcement (LE) for data retention and centralized storage result in legal challenges with data protection law and courts - and possible violations of the right to privacy. They are motivated by a desire for better cooperation and exchange between LE Agencies (LEAs); such cooperation is difficult under data protection regulations, and its absence has been identified as a main factor in major public security failures and is a frequent criticism of LE.
Secure Multi-Party Computation (MPC) is often seen as a technological means to solve privacy conflicts where actors want to exchange and analyze data that needs to be protected due to data protection laws. In this interdisciplinary work, we investigate the problem of private information exchange between LEAs from both a legal and a technical angle. We give a legal analysis of secret-sharing-based MPC techniques in general and, as a particular application scenario, consider the case of matching LE databases for lawful information exchange between LEAs. We propose a system for lawful information exchange between LEAs using MPC and private set intersection, and show its feasibility by giving a legal analysis for data protection and a technical analysis for workload complexity. Towards practicality, we present insights from qualitative feedback gathered in exchanges with a major European LEA.
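As background for the private-set-intersection component, here is a hedged Python toy of the classical Diffie-Hellman-style PSI idea; the modulus, the hashing into the group, and the two-round flow are illustrative assumptions, not the system proposed above.

```python
import hashlib, secrets

# Toy Diffie-Hellman-style PSI (illustrative only; not the proposed system,
# and the parameters are not a vetted group). Each side blinds hashed items
# with a secret exponent; double-blinded values can be compared directly.
q = (1 << 255) - 19

def Hg(item):
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % q

a = secrets.randbelow(q - 2) + 2   # LEA A's secret exponent
b = secrets.randbelow(q - 2) + 2   # LEA B's secret exponent
set_a = {"alice", "bob", "carol"}
set_b = {"bob", "dave", "carol"}

blinded_a = {x: pow(Hg(x), a, q) for x in set_a}            # A -> B
blinded_b = {pow(Hg(y), b, q) for y in set_b}               # B -> A
double_a = {x: pow(v, b, q) for x, v in blinded_a.items()}  # B -> A: H(x)^(ab)
double_b = {pow(v, a, q) for v in blinded_b}                # computed by A

# A learns exactly which of its items lie in the intersection.
assert {x for x, v in double_a.items() if v in double_b} == set_a & set_b
```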
George Teseleanu, Paul Cotan
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. Murru and Saettone presented in 2017 an interesting RSA-like cryptosystem that uses the key equation $ed - k (p^2+p+1)(q^2+q+1) = 1$, instead of the classical RSA key equation $ed - k (p-1)(q-1) = 1$. The authors claimed that their scheme is immune to Wiener's continued fraction attack. Unfortunately, Nitaj \emph{et al.} developed exactly such an attack. In this paper, we introduce a family of RSA-like encryption schemes that uses the key equation $ed - k [(p^n-1)(q^n-1)]/[(p-1)(q-1)] = 1$, where $n>1$ is an integer. Then, we show that regardless of the choice of $n$, there exists an attack based on continued fractions that recovers the secret exponent.
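As a concrete illustration of the generalized key equation, here is a toy Python check with deliberately insecure parameters; the choice $n=2$, where the quotient collapses to $(p+1)(q+1)$, is our own worked example, not one from the paper.

```python
from math import gcd

# Worked toy example of the generalized key equation for n = 2, where
# (p^n - 1)(q^n - 1) / ((p - 1)(q - 1)) collapses to (p + 1)(q + 1).
# The primes are far too small for security; they only illustrate that
# e*d - k*psi = 1 holds for some integer k.
p, q, n = 1009, 1013, 2
psi = ((p**n - 1) * (q**n - 1)) // ((p - 1) * (q - 1))
assert psi == (p + 1) * (q + 1)

e = 65537
assert gcd(e, psi) == 1
d = pow(e, -1, psi)          # secret exponent
k = (e * d - 1) // psi
assert e * d - k * psi == 1  # the key equation from the abstract
```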
George Teseleanu
We present two simple zero-knowledge interactive proofs that can be instantiated with many of the standard decisional or computational hardness assumptions. Compared with traditional zero-knowledge proofs, in our protocols the verifier starts first by emitting a challenge, and then the prover answers the challenge.
Jun Xu, Santanu Sarkar, Huaxiong Wang, Lei Hu
The Elliptic Curve Hidden Number Problem (EC-HNP) was first introduced by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001. To rigorously assess the bit security of the Diffie--Hellman key exchange with elliptic curves (ECDH), the Diffie--Hellman variant of EC-HNP, regarded as an elliptic curve analogue of the Hidden Number Problem (HNP), was presented at PKC 2017. This variant can also be used for practical cryptanalysis of ECDH key exchange in the presence of side-channel attacks.
In this paper, we revisit the Coppersmith method for solving the modular multivariate polynomials involved in the Diffie--Hellman variant of EC-HNP and demonstrate that, for any given positive integer $d$, a given sufficiently large prime $p$, and a fixed elliptic curve over the prime field $\mathbb{F}_p$, if there is an oracle that outputs about $\frac{1}{d+1}$ of the most (least) significant bits of the $x$-coordinate of the ECDH key, then one can give a heuristic algorithm to compute all the bits within polynomial time in $\log_2 p$. When $d>1$, the heuristic result $\frac{1}{d+1}$ significantly outperforms both the rigorous bound $\frac{5}{6}$ and the heuristic bound $\frac{1}{2}$. Due to the heuristics involved in the Coppersmith method, we do not get the ECDH bit security on a fixed curve. However, we experimentally verify the effectiveness of the heuristics on NIST curves for small-dimension lattices.
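In symbols, the abstract's main claim can be restated as follows (our paraphrase, with $x(\cdot)$ denoting the $x$-coordinate and $\mathrm{MSB}_\eta$ the top $\eta$ bits):
\[
\mathcal{O}(aG,\, bG) = \mathrm{MSB}_{\eta}\big(x(abG)\big)
\quad\text{with}\quad
\eta \approx \tfrac{1}{d+1}\log_2 p
\;\;\Longrightarrow\;\;
x(abG) \text{ recoverable in time } \mathrm{poly}(\log_2 p).
\]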
Ping Wang, Yiting Su, Fangguo Zhang
Bit commitment (BC) is one of the most important fundamental protocols in secure multi-party computation. However, it is generally believed that unconditionally secure bit commitment is impossible even with quantum resources. In this paper, we design a secure non-interactive bit commitment protocol by exploiting the no-communication theorem for quantum entangled states; its security relies on the indistinguishability of whether or not the Bell states have been measured. The proposed quantum bit commitment (QBC) is secure against classical adversaries with unlimited computing power, and the probability of a successful attack by quantum adversaries decreases exponentially as $n$ (the number of qubits in a group) increases.
Alexander Bienstock, Yevgeniy Dodis, Sanjam Garg, Garrison Grogan, Mohammad Hajiabadi, Paul Rösler
Continuous Group Key Agreement (CGKA) is the basis of modern Secure Group Messaging (SGM) protocols. At a high level, a CGKA protocol enables a group of users to continuously compute a shared (evolving) secret while members of the group add new members, remove other existing members, and perform state updates. The state updates allow CGKA to offer desirable security features such as forward secrecy and post-compromise security.
CGKA is regarded as a practical primitive in the real world. Indeed, there is an IETF Messaging Layer Security (MLS) working group devoted to developing a standard for SGM protocols, including the CGKA protocol at their core. Though known CGKA protocols seem to perform relatively well on natural sequences of group operations, there are no formal guarantees on their efficiency, other than the $O(n)$ bound which can be achieved by trivial protocols, where $n$ is the number of group members. In this context, we ask the following questions and provide negative answers.
1. Can we have CGKA protocols that are efficient in the worst case? We start by answering this basic question in the negative. First, we show that a natural primitive that we call Compact Key Exchange (CKE) is at the core of CGKA, and thus tightly captures CGKA's worst-case communication cost. Intuitively, CKE requires that first, $n$ users non-interactively generate key pairs and broadcast their public keys, and then some other special user securely communicates a shared key to these $n$ users (a toy version of the trivial $O(n)$ solution is sketched after this list). Next, we show that CKE with communication cost $o(n)$ by the special user cannot be realized in a black-box manner from public-key encryption, thus implying the same for CGKA, where $n$ is the corresponding number of group members. Surprisingly, this impossibility holds even in an offline setting, where parties have access to the sequence of group operations in advance.
2. Can we realize one CGKA protocol that works as well as possible in all cases? Here again, we present negative evidence showing that no such protocol based on black-box use of public-key encryption exists. Specifically, we show two distributions over sequences of group operations such that no CGKA protocol obtains optimal communication costs on both sequences.
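For reference, the trivial $O(n)$ CKE baseline mentioned above, as a hedged Python toy (ElGamal over a tiny illustrative group; all parameters are placeholders):

```python
import secrets

# Trivial O(n) Compact Key Exchange from public-key encryption: the special
# user encrypts one shared key to each of the n broadcast public keys, so
# communication grows linearly in n -- the baseline the lower bound speaks to.
# Toy ElGamal group: p = 2q + 1 with g of order q (insecure, for illustration).
p, q, g = 2039, 1019, 4

def keygen():
    sk = secrets.randbelow(q)
    return sk, pow(g, sk, p)

def enc(pk, m):              # ElGamal; m is encoded as a group element
    r = secrets.randbelow(q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, -sk, p)) % p

n = 5
users = [keygen() for _ in range(n)]        # step 1: broadcast public keys
shared = pow(g, secrets.randbelow(q), p)    # the key to be communicated
cts = [enc(pk, shared) for _, pk in users]  # step 2: n ciphertexts, O(n) cost
assert all(dec(sk, ct) == shared for (sk, _), ct in zip(users, cts))
```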
Lalita Devadas, Rishab Goyal, Yael Kalai, Vinod Vaikuntanathan
We present a rate-$1$ construction of a publicly verifiable non-interactive argument system for batch-$\mathsf{NP}$ (also called a BARG), under the LWE assumption. Namely, a proof corresponding to a batch of $k$ NP statements, each with an $m$-bit witness, has size $m + \mathsf{poly}(\lambda,\log k)$.
In contrast, prior work either relied on non-standard knowledge assumptions, or produced proofs of size $m \cdot \mathsf{poly}(\lambda,\log k)$ (Choudhuri, Jain, and Jin, STOC 2021, following Kalai, Paneth, and Yang 2019). We show how to use our rate-$1$ BARG scheme to obtain the following results, all under the LWE assumption:
- A multi-hop BARG scheme for $\mathsf{NP}$.
- A multi-hop aggregate signature scheme (in the standard model).
- An incrementally verifiable computation (IVC) scheme for arbitrary $T$-time deterministic computations with proof size $\mathsf{poly}(\lambda,\log T)$.
Prior to this work, multi-hop BARGs were only known under non-standard knowledge assumptions or in the random oracle model; aggregate signatures were only known under indistinguishability obfuscation (and RSA) or in the random oracle model; IVC schemes with proofs of size $\mathsf{poly}(\lambda,T^{\epsilon})$ were known under a bilinear map assumption, and with proofs of size $\mathsf{poly}(\lambda,\log T)$ under non-standard knowledge assumptions or in the random oracle model.
Tianshu Shan, Jiangxia Ge, Rui Xue
The post-quantum security of cryptographic systems assumes that the quantum adversary only receives the classical result of computations with the secret key. However, if the adversary is able to obtain a superposition state of the result, it is unknown whether such post-quantum secure schemes remain secure.
In this paper, we formalize one class of public-key encryption schemes, named oracle-masked schemes, relative to random oracles. For each oracle-masked scheme, we design a preimage extraction procedure and prove that it simulates the quantum decryption oracle with a certain loss. We also observe that the implementation of the preimage extraction procedure for some oracle-masked schemes does not need to take the secret key as input. This contributes to the IND-qCCA security proof of these schemes in the quantum random oracle model (QROM). As an application, we prove the IND-qCCA security of schemes obtained by the Fujisaki-Okamoto (FO) transformation and the REACT transformation in the QROM, respectively.
Notably, our security reduction for FO transformation is tighter than the reduction given by Zhandry (Crypto 2019).
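To recall the shape of the FO transformation discussed above, here is a minimal Python sketch; the toy PKE with pk = sk is an assumed placeholder so the sketch runs self-contained, only the derandomize-then-re-encrypt logic is the point, and real FO variants often reject implicitly rather than returning an error.

```python
import hashlib, secrets

# Minimal FO-style KEM sketch: c = Enc(pk, m; G(m)), K = H(m, c), and
# decapsulation re-encrypts to check the ciphertext before releasing the key.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def G(m):                      # derandomization oracle
    return hashlib.sha256(b"G" + m).digest()

def H(m, c):                   # key-derivation oracle
    return hashlib.sha256(b"H" + m + c).digest()

def pke_enc(pk, m, r):         # toy PKE with explicit coins r (placeholder)
    return r + xor(m, hashlib.sha256(pk + r).digest())

def pke_dec(sk, c):
    r, body = c[:32], c[32:]
    return xor(body, hashlib.sha256(sk + r).digest())

sk = secrets.token_bytes(32)
pk = sk   # placeholder: a real PKE has pk != sk; the FO logic is the point

def encaps(pk):
    m = secrets.token_bytes(32)
    c = pke_enc(pk, m, G(m))           # encrypt with coins G(m)
    return c, H(m, c)

def decaps(sk, pk, c):
    m = pke_dec(sk, c)
    if pke_enc(pk, m, G(m)) != c:      # re-encryption check
        return None                    # explicit rejection, for simplicity
    return H(m, c)

c, K = encaps(pk)
assert decaps(sk, pk, c) == K
```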
Soumya Chattopadhyay, Ashwin Jha, Mridul Nandi
OMAC --- a single-keyed variant of CBC-MAC by Iwata and Kurosawa --- is a widely used and standardized (NIST FIPS 800-38B, ISO/IEC 29167-10:2017) message authentication code (MAC) algorithm. The best security bound for OMAC is due to Nandi, who proved that OMAC's pseudorandom function (PRF) advantage is upper bounded by $ O(q^2\ell/2^n) $, where $ n $, $ q $, and $ \ell $ denote the block size of the underlying block cipher, the number of queries, and the maximum permissible query length (in terms of $ n $-bit blocks), respectively. In contrast, there is no attack with a matching lower bound. Indeed, the best known attack on OMAC is the folklore birthday attack achieving a lower bound of $ \Omega(q^2/2^n) $. In this work, we close this gap for a large range of message lengths. Specifically, we show that OMAC's PRF security is upper bounded by $ O(q^2/2^n + q\ell^2/2^n)$. In practical terms, this means that for a $ 128 $-bit block cipher, and message lengths up to $ 64 $ gigabytes, OMAC can process up to $ 2^{64} $ messages before rekeying (same as the birthday bound). In comparison, the previous bound only allows $ 2^{48} $ messages. As a side effect of our proof technique, we also derive similar tight security bounds for XCBC (by Black and Rogaway) and TMAC (by Kurosawa and Iwata). As a direct consequence of this work, we have established tight security bounds (in a wide range of $\ell$) for all the CBC-MAC variants, except for the original CBC-MAC.
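For readers who want the mode in front of them, the following Python sketch shows OMAC's structure (single key, GF$(2^{128})$ doubling for the two finishing masks, CBC chaining); the hash-based stand-in for the block cipher is an assumed placeholder, not a permutation and not secure.

```python
import hashlib

# Structural sketch of OMAC/CMAC with n = 128. Only the mode logic is the
# point: one key, doubling in GF(2^128) for the two derived masks, CBC chain.
BLOCK = 16

def E(K, X):
    return hashlib.sha256(K + X).digest()[:BLOCK]  # stand-in for AES_K

def dbl(L):
    # multiplication by x in GF(2^128), polynomial x^128 + x^7 + x^2 + x + 1
    v = int.from_bytes(L, "big") << 1
    if v >> 128:
        v ^= 0x87
    return (v & ((1 << 128) - 1)).to_bytes(BLOCK, "big")

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def omac(K, M):
    K1 = dbl(E(K, bytes(BLOCK)))   # mask for a complete final block
    K2 = dbl(K1)                   # mask for a padded (incomplete) final block
    if M and len(M) % BLOCK == 0:
        full, last = M[:-BLOCK], xor(M[-BLOCK:], K1)
    else:
        cut = len(M) - (len(M) % BLOCK)
        full = M[:cut]
        last = xor((M[cut:] + b"\x80").ljust(BLOCK, b"\x00"), K2)  # 10* pad
    T = bytes(BLOCK)
    for i in range(0, len(full), BLOCK):
        T = E(K, xor(T, full[i:i + BLOCK]))
    return E(K, xor(T, last))

print(omac(b"toy key", b"message to authenticate").hex())
```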
Yevgeniy Dodis, Daniel Jost, Harish Karthikeyan
Forward-secure encryption (FSE) allows communicating parties to refresh their keys across epochs, in a way that compromising the current secret key leaves all prior encrypted communication secure. We investigate a novel dimension in the design of FSE schemes: fast-forwarding (FF). This refers to the ability of a stale communication party, that is "stuck" in an old epoch, to efficiently "catch up" to the newest state, and frequently arises in practice. While this dimension was not explicitly considered in prior work, we observe that one can augment prior FSEs -- both in symmetric- and public-key settings -- to support fast-forwarding which is sublinear in the number of epochs. However, the resulting schemes have disadvantages: the symmetric-key scheme is slower than any conventional stream cipher by a factor of the security parameter, while the public-key scheme inherits the inefficiencies of HIBE-based forward-secure PKE.
To address these inefficiencies, we look at the common real-life situation which we call the bulletin board model, where communicating parties rely on some infrastructure -- such as an application provider -- to help them store and deliver ciphertexts to each other. We then define and construct FF-FSE in the bulletin board model, which addresses the above-mentioned disadvantages. In particular,
* Our FF-stream-cipher in the bulletin-board model has: (a) constant state size; (b) constant normal (no fast-forward) operation; and (c) logarithmic fast-forward property. This essentially matches the efficiency of non-fast-forwardable stream ciphers, at the cost of constant communication complexity with the bulletin board per update.
* Our public-key FF-FSE avoids HIBE-based techniques by instead using so-called updatable public-key encryption (UPKE), introduced in several recent works (and more efficient than public-key FSEs). Our UPKE-based scheme uses a novel type of "update graph" that we construct in this work. Our graph has constant in-degree, logarithmic diameter, and logarithmic "cut property" which is essential for the efficiency of our schemes. Combined with recent UPKE schemes, we get two FF-FSEs in the bulletin board model, under the DDH and the LWE assumptions.
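As a point of reference for the symmetric setting, here is the folklore hash-chain FSE baseline in Python (an assumed minimal example): constant state and constant per-epoch work, but a stale party needs one hash per missed epoch, i.e., no fast-forward.

```python
import hashlib

# Folklore forward-secure key schedule: epoch keys form a hash chain, so
# compromising K_i reveals nothing about earlier keys, but catching up from
# epoch 0 to epoch i costs i hashes -- the linear work that FF-FSE avoids.

def next_key(k):
    return hashlib.sha256(b"ratchet" + k).digest()

def msg_key(k):
    # derive a per-epoch message key so the chaining key is never used directly
    return hashlib.sha256(b"msg" + k).digest()

def epoch_key(k0, i):
    k = k0
    for _ in range(i):      # O(i) catch-up: no fast-forward in this baseline
        k = next_key(k)
    return k

k0 = b"\x00" * 32
k_now = epoch_key(k0, 1000)   # a stale party stuck at epoch 0 catching up
print(msg_key(k_now).hex())
```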
Julia Kastner, Julian Loss, Jiayu Xu
Partially blind signatures, an extension of ordinary blind signatures, are a primitive with wide applications in e-cash and electronic voting. One of the most efficient schemes to date is the one by Abe and Okamoto (CRYPTO 2000), whose underlying idea - the OR-proof technique - has served as the basis for several works.
We point out several subtle flaws in the original proof of security, and provide a new detailed and rigorous proof, achieving bounds similar to those of the original work. We believe our insights on the proof strategy will prove useful in the security analyses of other OR-proof-based schemes.
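To fix notation, here is a hedged Python sketch of the basic Schnorr-style OR-proof underlying the Abe-Okamoto design (toy group parameters chosen for readability; this is the generic technique, not the paper's full partially blind scheme): the prover simulates the branch it has no witness for, and splits the hash challenge across the two branches.

```python
import hashlib, secrets

# OR-proof of knowledge of x with X1 = g^x or X2 = g^x, without revealing
# which. Toy group: p = 2q + 1 with g of order q (insecure parameters).
p, q, g = 2039, 1019, 4

def Hc(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q)
X1 = pow(g, x, p)                       # witness known for branch 1
X2 = pow(g, secrets.randbelow(q), p)    # no witness for branch 2

# Simulate branch 2: pick its challenge and response first.
c2, z2 = secrets.randbelow(q), secrets.randbelow(q)
A2 = (pow(g, z2, p) * pow(X2, -c2, p)) % p
# Honest commitment for branch 1.
r = secrets.randbelow(q)
A1 = pow(g, r, p)
# The overall challenge splits as c = c1 + c2 (mod q).
c = Hc(X1, X2, A1, A2)
c1 = (c - c2) % q
z1 = (r + c1 * x) % q

# Verifier: both transcripts must verify and the challenges must add up.
assert (c1 + c2) % q == Hc(X1, X2, A1, A2)
assert pow(g, z1, p) == (A1 * pow(X1, c1, p)) % p
assert pow(g, z2, p) == (A2 * pow(X2, c2, p)) % p
```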
Gianluca Brian, Sebastian Faust, Elena Micheli, Daniele Venturi
Non-malleable codes (Dziembowski, Pietrzak and Wichs, ICS 2010 & JACM 2018) allow protecting arbitrary cryptographic primitives against related-key attacks (RKAs). Even when using codes that are guaranteed to be non-malleable against a single tampering attempt, one obtains RKA security against poly-many tampering attacks at the price of assuming perfect memory erasures. In contrast, continuously non-malleable codes (Faust, Mukherjee, Nielsen and Venturi, TCC 2014) do not suffer from this limitation, as the non-malleability guarantee holds against poly-many tampering attempts.
Unfortunately, there are only a handful of constructions of continuously non-malleable codes, while standard non-malleable codes are known for a large variety of tampering families including, e.g., NC0, decision-tree tampering, AC0, and recently even bounded polynomial-depth tampering. We change this state of affairs by providing the first constructions of continuously non-malleable codes in the following natural settings:
- Against decision-tree tampering, where, in each tampering attempt, every bit of the tampered codeword can be set arbitrarily after adaptively reading up to $d$ locations within the input codeword. Our scheme is in the plain model, can be instantiated assuming the existence of one-way functions, and tolerates tampering by decision trees of depth $d = O(n^{1/8})$, where $n$ is the length of the codeword. Notably, this class includes NC0.
- Against bounded polynomial-depth tampering, where in each tampering attempt the adversary can select any tampering function that can be computed by a circuit of bounded polynomial depth (and unbounded polynomial size). Our scheme is in the common reference string model, and can be instantiated assuming the existence of time-lock puzzles and simulation-extractable (succinct) non-interactive zero-knowledge proofs.
Julien Duman, Dominik Hartmann, Eike Kiltz, Sabrina Kunzweiler, Jonas Lehmann, Doreen Riepel
In the context of quantum-resistant cryptography, cryptographic group actions offer an abstraction of isogeny-based cryptography in the Commutative Supersingular Isogeny Diffie-Hellman (CSIDH) setting. In this work, we revisit the security of two previously proposed natural protocols: the Group Action Hashed ElGamal key encapsulation mechanism (GA-HEG KEM) and the Group Action Hashed Diffie-Hellman non-interactive key-exchange (GA-HDH NIKE) protocol. The latter protocol has already been considered for use in practical protocols such as Post-Quantum WireGuard (S&P '21) and OPTLS (CCS '20).
We prove that active security of the two protocols in the Quantum Random Oracle Model (QROM) inherently relies on very strong variants of the Group Action Strong CDH problem, where the adversary is given arbitrary quantum access to a DDH oracle. That is, quantum accessible Strong CDH assumptions are not only sufficient but also necessary to prove active security of the GA-HEG KEM and the GA-HDH NIKE protocols.
Furthermore, we propose variants of the protocols with QROM security from the classical Strong CDH assumption, i.e., CDH with classical access to the DDH oracle. Our first variant uses key confirmation and can therefore only be applied in the KEM setting. Our second but considerably less efficient variant is based on the twinning technique by Cash et al. (EUROCRYPT '08) and in particular yields the first actively secure isogeny-based NIKE with QROM security from the standard CDH assumption.
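For intuition about the syntax of GA-HEG, here is a deliberately insecure Python toy in which $\mathbb{Z}_N$ acting on itself by addition stands in for the CSIDH class-group action; this is an assumed didactic model capturing only correctness, none of the security.

```python
import hashlib, secrets

# Toy model of GA-HEG with a free, transitive, abelian group action:
# pk = a * x0, ciphertext = b * x0, key = H(b * pk) = H((a + b) * x0).
# Addition mod N stands in for the class-group action (illustrative only).
N = 2**61 - 1
x0 = 17                                # public base set element

def act(g, x):                         # the action g * x
    return (x + g) % N

def H(x):
    return hashlib.sha256(str(x).encode()).digest()

a = secrets.randbelow(N); pk = act(a, x0)   # key generation
b = secrets.randbelow(N); ct = act(b, x0)   # encapsulation
K_enc = H(act(b, pk))                       # = H((a + b) * x0)
K_dec = H(act(a, ct))                       # decapsulation
assert K_enc == K_dec
```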
16 September 2022
Benoît Libert, Ky Nguyen, Alain Passelègue
Chakraborty, Prabhakaran, and Wichs (PKC'20) recently introduced a new tag-based variant of lossy trapdoor functions, termed cumulatively all-lossy-but-one trapdoor functions (CALBO-TDFs). Informally, CALBO-TDFs allow defining a public tag-based function with a (computationally hidden) special tag, such that the function is lossy for all tags except when the special secret tag is used. In the latter case, the function becomes injective and efficiently invertible using a secret trapdoor. This notion has been used to obtain advanced constructions of signatures with strong guarantees against leakage and tampering, and also by Dodis, Vaikuntanathan, and Wichs (EUROCRYPT'20) to obtain constructions of randomness extractors with extractor-dependent sources. While these applications are motivated by practical considerations, the only known instantiation of CALBO-TDFs so far relies on the existence of indistinguishability obfuscation.
In this paper, we propose the first two instantiations of CALBO-TDFs based on standard assumptions. Our constructions are based on the LWE assumption with a sub-exponential approximation factor and on the DCR assumption, respectively, and circumvent the use of indistinguishability obfuscation by relying on lossy modes and trapdoor mechanisms enabled by these assumptions.
Federico Canale, Tim Güneysu, Gregor Leander, Jan Thoma, Yosuke Todo, Rei Ueno
Randomized cache architectures have proven to significantly increase the complexity of contention-based cache side channel attacks and therefore present an important building block for side channel secure microarchitectures. By randomizing the address-to-cache-index mapping, attackers can no longer trivially construct minimal eviction sets which are fundamental for contention-based cache attacks. At the same time, randomized caches maintain the flexibility of traditional caches, making them broadly applicable across various CPU types. This is a major advantage over cache partitioning approaches.
A large variety of randomized cache architectures has been proposed. However, the actual randomization function received little attention and is often neglected in these proposals. Since the randomization operates directly on the critical path of the cache lookup, the function needs to have extremely low latency. At the same time, attackers must not be able to bypass the randomization, which would nullify the security benefit of the randomized mapping. In this paper we propose SCARF (Secure CAche Randomization Function), the first dedicated cache randomization cipher which achieves low latency and is cryptographically secure in the cache attacker model. The design methodology for this dedicated cache cipher enters new territory in the field of block ciphers with a small 10-bit block length and heavy key-dependency in few rounds.
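To illustrate why the randomization function matters, here is a hedged Python toy of a keyed address-to-set-index mapping; the hash stand-in has the right interface but far too much latency for a real cache lookup, which is exactly the gap a dedicated low-latency cipher targets.

```python
import hashlib

# Toy keyed set-index mapping: with a secret key, an attacker can no longer
# predict which addresses contend, breaking trivial eviction-set construction.
NUM_SETS = 1 << 10   # 10-bit set index, matching the cipher's block length

def set_index(key: bytes, line_addr: int) -> int:
    digest = hashlib.sha256(key + line_addr.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:2], "big") % NUM_SETS

key = b"per-boot secret"
# Two addresses that collide under the trivial modulo mapping...
a1, a2 = 0x1000, 0x1000 + NUM_SETS
assert a1 % NUM_SETS == a2 % NUM_SETS
# ...need not collide under the keyed mapping.
print(set_index(key, a1), set_index(key, a2))
```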
George Lu, Brent Waters
The random oracle methodology is central to the design of many practical cryptosystems. A common challenge faced in several systems is the need to have a random oracle that outputs from a structured distribution $\mathcal{D}$, even though most heuristic implementations such as SHA-3 are best suited for outputting bitstrings.
Our work explores the problem of sampling from discrete Gaussian (and related) distributions in a manner that allows them to be programmed into random oracles. We make the following contributions:
- We provide a definitional framework for our results. We say that a sampling algorithm $\mathsf{Sample}$ for a distribution is explainable if there exists an algorithm $\mathsf{Explain}$ where, for an $x$ in the domain, we have that $\mathsf{Explain}(x) \rightarrow r \in \{0,1\}^n$ such that $\mathsf{Sample}(r)=x$. Moreover, if $x$ is sampled from $\mathcal{D}$, the explained distribution is statistically close to choosing $r$ uniformly at random. We consider a variant of this definition that allows the statistical closeness to be a "precision parameter" given to the $\mathsf{Explain}$ algorithm. We show that sampling algorithms which satisfy our explainability property can be programmed as a random oracle.
- We provide a simple algorithm for explaining \emph{any} sampling algorithm that works over distributions with polynomial-sized ranges. This includes discrete Gaussians with small standard deviations.
- We show how to transform a (not necessarily explainable) sampling algorithm $\mathsf{Sample}$ for a distribution into a new $\mathsf{Sample}'$ that is explainable. The requirements for doing this are that (1) the probability density function is efficiently computable, and (2) it is possible to efficiently uniformly sample from all elements that have a probability density above a given threshold $p$. This shows the equivalence of random oracles to these distributions and random oracles to uniform bitstrings, and covers a large class of distributions, including all discrete Gaussians.
- A potential drawback of the previous approach is that the transformation requires an additional computation of the density function. We provide a more customized approach that shows the Micciancio-Walter discrete Gaussian sampler is explainable as is. This suggests that other discrete Gaussian samplers in a similar vein might also be explainable as is.
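The first contribution's strategy is easy to make concrete; the following Python sketch (our assumed instantiation with a small inverse-CDF discrete Gaussian sampler, not code from the paper) shows $\mathsf{Sample}$ mapping a uniform coin to an output and $\mathsf{Explain}$ re-sampling a coin uniformly from that output's preimage interval.

```python
import math, random

# Explainable sampling over a polynomial-sized range: Sample maps a uniform
# coin r in [0,1) to an output via the CDF; Explain(x) returns a fresh coin
# uniform over the interval of coins that yield exactly x.
SIGMA, TAIL = 3.0, 30
SUPPORT = range(-TAIL, TAIL + 1)
weights = [math.exp(-x * x / (2 * SIGMA * SIGMA)) for x in SUPPORT]
total = sum(weights)
cdf, acc = [], 0.0
for w in weights:
    acc += w / total
    cdf.append(acc)

def sample(r: float) -> int:
    for x, c in zip(SUPPORT, cdf):
        if r < c:
            return x
    return TAIL

def explain(x: int) -> float:
    i = x + TAIL
    lo = cdf[i - 1] if i > 0 else 0.0
    return lo + (cdf[i] - lo) * random.random()  # uniform over the preimage

x = sample(random.random())
r = explain(x)
assert sample(r) == x   # the explained coin reproduces the sample
```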
Hao Guo, Jintai Ding
We give algebraic relations among the equations of three algebraic modelings for the MinRank problem: the support-minors modeling, the Kipnis–Shamir modeling, and the minors modeling.
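For reference, the three modelings can be recalled in their standard form (our summary of well-known definitions, not the paper's new relations). The MinRank problem asks:
\[
\text{given } M_1,\dots,M_K \in \mathbb{F}_q^{m \times n} \text{ and } r, \text{ find } x \in \mathbb{F}_q^{K}\setminus\{0\} \text{ with } \operatorname{rank}\Big(\sum_{i=1}^{K} x_i M_i\Big) \le r.
\]
The minors modeling sets all $(r+1) \times (r+1)$ minors of $\sum_i x_i M_i$ to zero; the Kipnis–Shamir modeling postulates a structured right kernel,
\[
\Big(\sum_{i=1}^{K} x_i M_i\Big) \begin{pmatrix} I_{n-r} \\ K' \end{pmatrix} = 0, \qquad K' \in \mathbb{F}_q^{r \times (n-r)};
\]
and the support-minors modeling constrains every row of $\sum_i x_i M_i$ to lie in the row space of an unknown $r \times n$ matrix $C$, by setting to zero all maximal minors of that row stacked on top of $C$.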