International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

22 February 2023

Usman Ali, Hamza Omar, Chujiao Ma, Vaibhav Garg, Omar Khan
ePrint Report
Hardware-based Root of Trust (HRT) is considered the gold standard for bootstrapping trust in secure computing. This paper analyzes HRT implementations across state-of-the-art TEEs and differentiates them along two dimensions: 1) security properties & threats and 2) hardware capabilities. It then analyzes and compares the HRTs of 1) Intel SGX, 2) ARM TrustZone, 3) NXP Trust Architecture, 4) AMD SEV, 5) Microsoft Pluton, and 6) Apple T2 in terms of threats, security properties, and capabilities.
Dan Boneh, Jiaxin Guan, Mark Zhandry
ePrint Report
We give the first black box lower bound for signature protocols that can be described as group actions, which include many based on isogenies. We show that, for a large class of signature schemes making black box use of a (potentially non-abelian) group action, the signature length must be $\Omega(\lambda^2/\log\lambda)$. Our class of signatures generalizes all known signatures that derive security exclusively from the group action, and our lower bound matches the state of the art, showing that the signature length cannot be improved without deviating from the group action framework.
Fabio Banfi, Konstantin Gegier, Martin Hirt, Ueli Maurer
ePrint Report
Anamorphic Encryption, recently introduced by Persiano, Phan, and Yung (EUROCRYPT 2022), is a new cryptographic paradigm challenging the conventional notion of an adversary. In particular, they consider the receiver-anamorphic setting, where a dictator is able to obtain the receiver's secret key of a well-established public-key encryption (PKE) scheme, and they ask whether the sender can still embed covert messages, to which the dictator remains completely oblivious, provided that sender and receiver share an anamorphic key.

In this work, we identify two definitional limitations of Persiano et al.'s original model. First, they require anamorphic keys and key-pairs to be generated together, so a first modification we propose is to decouple the two processes. We allow for the extension of a regular PKE scheme to an anamorphic one to be possible on the fly, even after the public key of the regular scheme is already in use. Second, in their model the receiver cannot distinguish whether or not a ciphertext contains a covert message, so we propose a natural robustness notion which states that when anamorphically decrypting a regularly encrypted message, the receiver explicitly sees that no covert message is contained. This also eliminates certain attacks possible for the original definition.

Regarding new constructions, we first propose a generic anamorphic extension that achieves robustness for any PKE scheme, but requires synchronization of sender and receiver. We then define a natural property of a PKE scheme, selective randomness recoverability, which allows for a robust anamorphic extension even for unsynchronized parties. We show that the well-established schemes of ElGamal and Cramer-Shoup satisfy this condition. Finally, we propose a generic transformation of any non-robust anamorphic extension into a robust one, and apply it to a synchronized anamorphic extension for RSA-OAEP.

21 February 2023

Paul Rösler, Daniel Slamanig, Christoph Striecks
ePrint Report
Hierarchical Identity Based Encryption (HIBE) is a well-studied, versatile tool used in many cryptographic protocols. Yet, since the performance of all known HIBE constructions is broadly considered prohibitive, some real-world applications avoid relying on HIBE at the expense of security. A prominent example is secure messaging: strongly secure messaging protocols are provably equivalent to Key-Updatable Key Encapsulation Mechanisms (KU-KEMs; Balli et al., Asiacrypt 2020); so far, all KU-KEM constructions rely on adaptive unbounded-depth HIBE (Poettering and Rösler, Jaeger and Stepanovs, both CRYPTO 2018). By weakening security requirements for better efficiency, many messaging protocols dispense with using HIBE.

In this work, we aim to gain better efficiency without sacrificing security. For this, we observe that applications like messaging only need a restricted variant of HIBE for strong security. This variant, that we call Unique-Path Identity Based Encryption (UPIBE), restricts HIBE by requiring that each secret key can delegate at most one subordinate secret key. However, in contrast to fixed secret key delegation in Forward-Secure Public Key Encryption, the delegation in UPIBE, as in HIBE, is uniquely determined by variable identity strings from an exponentially large space. We investigate this mild but surprisingly effective restriction and show that it offers substantial complexity and performance advantages.

More concretely, we generically build bounded-depth UPIBE from only bounded-collusion IBE in the standard model; and we generically build adaptive unbounded-depth UPIBE from only selective bounded-depth HIBE in the random oracle model. These results significantly extend the range of underlying assumptions and efficient instantiations. We conclude with a rigorous performance evaluation of our UPIBE design. Beyond solving challenging open problems by reducing complexity and improving efficiency of KU-KEM and strongly secure messaging protocols, we offer a new definitional perspective on the bounded-collusion setting.
Qian Guo, Thomas Johansson, Vu Nguyen
ePrint Report
The problem of decoding random codes is a fundamental problem for code-based cryptography, including recent code-based candidates in the NIST post-quantum standardization process. In this paper, we present a novel sieving-style information-set decoding (ISD) algorithm for solving the syndrome decoding problem. The essential idea is to keep a list of weight-$2p$ solution vectors to a partial syndrome decoding problem and then create new vectors by finding pairs of vectors that collide in $p$ positions. By increasing the parity-check condition by one position and then iteratively repeating this process, we find the final solution(s). We show that while being competitive in terms of performance, our novel algorithm requires significantly less memory compared to other ISD variants. Also, in the case of problems with very low relative weight, it seems to outperform all previous algorithms. In particular, for code-based candidates BIKE and HQC, the algorithm has lower bit complexity than the previous best results.
Manuel Barbosa, Gilles Barthe, Christian Doczkal, Jelle Don, Serge Fehr, Benjamin Grégoire, Yu-Hsuan Huang, Andreas Hülsing, Yi Lee, Xiaodi Wu
ePrint Report
We extend and consolidate the security justification for the Dilithium signature scheme. In particular, we identify a subtle but crucial gap that appears in several ROM and QROM security proofs for signature schemes that are based on the Fiat-Shamir with aborts paradigm, including Dilithium. The gap lies in the CMA-to-NMA reduction and was uncovered when trying to formalize a variant of the QROM security proof by Kiltz, Lyubashevsky, and Schaffner (Eurocrypt 2018). The gap was confirmed by the authors, and there seems to be no simple patch for it. We provide new, fixed proofs for the affected CMA-to-NMA reduction, both for the ROM and the QROM, and we perform a concrete security analysis for the case of Dilithium to show that the claimed security level is still valid after addressing the gap. Furthermore, we offer a fully mechanized ROM proof for the CMA-security of Dilithium in the EasyCrypt proof assistant. Our formalization includes several new tools and techniques of independent interest for future formal verification results.
Julien Devevey, Pouria Fallahpour, Alain Passelègue, Damien Stehlé
ePrint Report
Lyubashevsky's signatures are based on the Fiat-Shamir with Aborts paradigm. It transforms an interactive identification protocol with a non-negligible probability of aborting into a signature scheme by repeating executions until an iteration does not trigger an abort. Interaction is removed by replacing the verifier's challenge with the evaluation of a hash function, modeled as a random oracle in the analysis. Access to the random oracle is classical (ROM), resp. quantum (QROM), if one is interested in security against classical, resp. quantum, adversaries. Most analyses in the literature consider a setting with a bounded number of aborts (i.e., signing fails if no signature is output within a prescribed number of loop iterations), while practical instantiations (e.g., Dilithium) run until a signature is output (i.e., loop iterations are unbounded).

In this work, we emphasize that combining random oracles with loop iterations induces numerous technicalities for analyzing correctness, run-time, and security of the resulting schemes, both in the bounded and the unbounded case. As a first contribution, we shed light on errors in all existing analyses. We then provide two detailed analyses in the QROM for the bounded case, adapted from Kiltz et al. [EUROCRYPT'18] and Grilo et al. [ASIACRYPT'21]. In the process, we prove that the underlying $\Sigma$-protocol achieves a stronger zero-knowledge property than usually considered for $\Sigma$-protocols with aborts, which enables a corrected analysis. A further contribution is a detailed analysis of the case of unbounded aborts, which induces several additional subtleties.
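
To make the loop structure concrete, here is a minimal toy sketch of a Fiat-Shamir with Aborts signature in the spirit of Lyubashevsky's scheme. It is illustrative only: the parameters are far too small for security, and the challenge is a small scalar rather than a ring element as in the actual schemes; all names and bounds are chosen purely for exposition.

```python
import hashlib
import numpy as np

# Toy parameters: illustrative only, nowhere near a secure instantiation.
Q, N, M = 8380417, 16, 32            # modulus and matrix dimensions
ETA, GAMMA = 2, 1 << 14              # secret bound and masking bound
BETA = 8 * ETA                       # max norm of c*s for challenges |c| <= 8

rng = np.random.default_rng(0)

def keygen():
    A = rng.integers(0, Q, size=(N, M))
    s = rng.integers(-ETA, ETA + 1, size=M)          # small secret
    t = (A @ s) % Q
    return (A, t), s

def challenge(w, msg):
    """Hash the commitment and the message to a small scalar challenge."""
    h = hashlib.sha256(w.tobytes() + msg).digest()
    return int.from_bytes(h[:2], "big") % 16 - 8     # toy challenge in [-8, 7]

def sign(pk, s, msg, max_iters=1000):
    A, _ = pk
    for _ in range(max_iters):                       # Fiat-Shamir with aborts loop
        y = rng.integers(-GAMMA, GAMMA + 1, size=M)  # fresh masking vector
        w = (A @ y) % Q                              # commitment
        c = challenge(w, msg)
        z = y + c * s
        if np.max(np.abs(z)) <= GAMMA - BETA:        # rejection sampling
            return z, c
        # otherwise: abort this iteration and retry with fresh randomness
    raise RuntimeError("no signature produced within the iteration bound")

def verify(pk, msg, sig):
    A, t = pk
    z, c = sig
    if np.max(np.abs(z)) > GAMMA - BETA:
        return False
    w = (A @ z - c * t) % Q                          # recompute the commitment
    return challenge(w, msg) == c

pk, sk = keygen()
assert verify(pk, b"hello", sign(pk, sk, b"hello"))
```

The rejection step is exactly the source of the bounded/unbounded distinction discussed in the abstract: the signing loop either runs up to a fixed iteration bound or until it succeeds.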
Céline Chevalier, Paul Hermouet, Quoc-Huy Vu
ePrint Report
Properties of quantum mechanics have enabled the emergence of quantum cryptographic protocols achieving important goals which are proven to be impossible classically. Unfortunately, this usually comes at the cost of requiring quantum power from every party in the protocol, while arguably a more realistic scenario would be a network of classical clients, classically interacting with a quantum server. In this paper, we focus on copy-protection, a quantum primitive that allows a program to be evaluated but not copied, and which has attracted interest especially due to its links to other unclonable cryptographic primitives. Our main contribution is to show how to dequantize existing quantum copy-protection from hidden coset states, by giving a construction for classically-instructed remote state preparation for coset states. We also present the first secure copy-protection scheme for point functions in the plain model, to which our dequantizer can be applied.
Andre Esser, Rahul Girme, Arindam Mukherjee, Santanu Sarkar
ePrint Report
The LWE problem is one of the prime candidates for building the most efficient post-quantum secure public key cryptosystems. Many of those schemes, like Kyber, Dilithium or those belonging to the NTRU-family, such as NTRU-HPS, -HRSS, BLISS or GLP, make use of small max norm keys to enhance efficiency. The best attack on these schemes is a hybrid attack, which combines combinatorial techniques and lattice reduction. While lattice reduction is not known to be able to exploit the small max norm choices, May recently showed (Crypto 2021) that such choices allow for more efficient combinatorial attacks.

However, these combinatorial attacks suffer from enormous memory requirements, which render them inefficient in realistic attack scenarios and, hence, make their general consideration when assessing security questionable. Therefore, more memory-efficient substitutes for these algorithms are needed. In this work, we provide new combinatorial algorithms for recovering small max norm LWE secrets using only a polynomial amount of memory. We provide analyses of our algorithms for the secret key distributions of current NTRU, Kyber and Dilithium variants, showing that our new approach outperforms previous memory-efficient algorithms. For instance, considering uniformly random ternary secrets of length $n$, we improve the best known time complexity for polynomial-memory algorithms from $2^{1.063n}$ down to $2^{0.926n}$. We obtain even larger gains for LWE secrets in $\{-m,\ldots,m\}^n$ with $m=2,3$ as found in Kyber and Dilithium. For example, for uniformly random keys in $\{-2,\ldots,2\}^n$, as is the case for Dilithium, we improve the previously best time from $2^{1.742n}$ down to $2^{1.282n}$.

Our fastest algorithm incorporates various different algorithmic techniques, but at its heart lies a nested collision search procedure inspired by the Nested-Rho technique from Dinur, Dunkelman, Keller and Shamir (Crypto 2016). Additionally, we heavily exploit the representation technique originally introduced in the subset sum context to make our nested approach efficient.
Vittorio Zaccaria
ePrint Report
This work is intended for researchers in the field of side-channel attacks, countermeasure analysis, and probing security. It reports on a formalization of simulatability in terms of linear algebra properties, which we think will provide a useful tool in the practitioner's toolbox. The formalization allowed us to revisit some existing definitions (such as probe isolating non-interference) in a simpler way that corresponds to the propagation of erase morphisms. From a theoretical perspective, we shed light on probabilistic definitions of simulatability and matrix-based spectral approaches. In practice, this could mean that potentially better tools can be built. Readers will find a different, and perhaps less contrived, definition of simulatability, which could enable new forms of reasoning. This work does not cover any practical implementation of the proposed tools, which is left for future work.
Munawar Hasan, Donghoon Chang
ePrint Report
The widespread deployment of low-power and handheld devices opens an opportunity to design lightweight authenticated encryption schemes. The schemes so proposed must also prove their resilience under various security notions. Romulus-N1, a primary variant of the Romulus-N family and a finalist in the NIST (National Institute of Standards and Technology) lightweight cryptography competition, is an authenticated encryption scheme with associated data based on a tweakable blockcipher; it provides beyond-birthday-bound integrity security in the nonce-respecting scenario but fails to provide integrity security in the nonce-misuse and RUP (release of unverified plaintext) scenarios. In this paper, we propose lynx, a family of $14$ 1-pass, rate-1 lightweight authenticated encryption schemes with associated data based on a tweakable blockcipher, which provide birthday-bound integrity security in the nonce-respecting as well as the nonce-misuse and RUP scenarios, and birthday-bound privacy in the nonce-respecting scenario. To create this family of schemes we propose a family of functions $\mathcal{F}$ covering a total of $72$ cases, of which we show that only $14$ can be used to build authenticated encryption schemes. We implement one member of the lynx family on six different hardware platforms and compare it with Romulus-N1. The comparison clearly shows that the lynx member outperforms Romulus-N1 on all six platforms.
Baptiste Lambin, Gregor Leander, Patrick Neumann
ePrint Report
In this paper we study, for the first time, under which circumstances a round function of a Substitution-Permutation Network can be decomposed uniquely. More precisely, we provide necessary and sufficient criteria on the non-linear layer for the decomposition to be unique. Our results in particular imply that, when cryptographically strong S-boxes are used, the decomposition is indeed unique.

We then apply our findings to the notion of alignment, pointing out that the previous definition allows for primitives that are both aligned and unaligned simultaneously. As a second result, we present experimental data that shows that alignment might only have limited impact. For this, we compare aligned and unaligned versions of the cipher PRESENT.
Corentin Jeudy, Adeline Roux-Langlois, Olivier Sanders
ePrint Report
Preimage sampling is a fundamental process in lattice-based cryptography whose performance directly affects that of the cryptographic mechanisms relying on it. In 2012, Micciancio and Peikert proposed a new way of generating trapdoors (and an associated preimage sampling procedure) with very interesting features. Unfortunately, in some applications such as digital signatures, the performance may not be as competitive as that of other approaches like Fiat-Shamir with Aborts. In this work we revisit the preimage sampling algorithm proposed by Micciancio and Peikert and make several contributions. We first propose a finer analysis of this procedure, which results in drastic efficiency gains of up to 50% on the preimage sizes without affecting security. It can thus be used as a drop-in replacement in every construction resorting to it. We then propose a new preimage sampling method which still relies on the trapdoors of Micciancio and Peikert, but which also bridges to the Fiat-Shamir with Aborts signature paradigm by leveraging rejection sampling. It again leads to dramatic gains of up to 75% compared to the original sampling technique. This opens promising perspectives for the efficiency of advanced lattice-based constructions relying on such mechanisms. As an application of our new procedure, we give the first lattice-based aggregate signature supporting public aggregation and achieving relevant compression compared to the concatenation of individual signatures. Our scheme is proven secure in the aggregate chosen-key model coined by Boneh et al. in 2003, based on the well-studied Module Learning With Errors and Module Short Integer Solution assumptions.
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak
ePrint Report
GIMPS and PrimeGrid are large-scale distributed projects dedicated to searching for giant prime numbers, usually of special forms like Mersenne and Proth. The numbers in the current search space are millions of digits long, and the participating volunteers need to run resource-consuming primality tests. Once a candidate prime $N$ has been found, the only way for another party to independently verify the primality of $N$ used to be to repeat the expensive primality test. To avoid the need to recompute each primality test, these projects have recently adopted certifying mechanisms that enable efficient verification of performed tests. However, the mechanisms presently in place only detect benign errors and offer no guarantee against adversarial behavior: a malicious volunteer can mislead the project into rejecting a giant prime as non-prime. In this paper, we propose a practical, cryptographically-sound mechanism for certifying the non-primality of Proth numbers. That is, a volunteer can -- in parallel to running the primality test for $N$ -- generate, at little extra cost, an efficiently verifiable proof certifying that $N$ is not prime. The interactive protocol has statistical soundness and can be made non-interactive using the Fiat-Shamir heuristic.
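
For background, the primality of a Proth number $N = k\cdot 2^n + 1$ (with odd $k < 2^n$) is established via Proth's theorem: $N$ is prime if and only if some base $a$ satisfies $a^{(N-1)/2} \equiv -1 \pmod N$. Below is a minimal sketch of the randomized witness search; it is purely illustrative and unrelated to the projects' optimized implementations.

```python
import random

def is_proth_prime(k: int, n: int, trials: int = 64) -> bool:
    """Proth test for N = k*2^n + 1 with odd k < 2^n (toy sketch)."""
    assert k % 2 == 1 and k < (1 << n)
    N = k * (1 << n) + 1
    for _ in range(trials):
        a = random.randrange(2, N - 1)
        if pow(a, (N - 1) // 2, N) == N - 1:   # a^((N-1)/2) == -1 (mod N)
            return True                        # witness found: N is prime
    return False                               # no witness found: very likely composite

assert is_proth_prime(3, 2)        # N = 13 is prime
assert not is_proth_prime(7, 3)    # N = 57 = 3 * 19
```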

Our approach is based on a cryptographic primitive called Proof of Exponentiation (PoE) which, for a group $\mathbb{G}$, certifies that a tuple $(x,y,T)\in\mathbb{G}^2\times\mathbb{N}$ satisfies $x^{2^T}=y$ (Pietrzak, ITCS 2019 and Wesolowski, J. Cryptol. 2020). In particular, we show how to adapt Pietrzak's PoE at a moderate additional cost to make it a cryptographically-sound certificate of non-primality.
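
The halving step behind Pietrzak's PoE can be sketched as follows, with prover and verifier merged into one loop, $T$ a power of two, and a toy modulus; a sound instantiation additionally requires a group in which low-order elements are ruled out (e.g., signed quadratic residues).

```python
import random

def poe_verify(x, y, T, N):
    """Check the claim x^(2^T) = y (mod N) via Pietrzak-style halving (toy)."""
    while T > 1:
        half = T // 2
        mu = pow(x, 1 << half, N)              # prover's midpoint x^(2^(T/2))
        r = random.randrange(1, 1 << 128)      # verifier's random challenge
        x = (pow(x, r, N) * mu) % N            # x' = x^r * mu
        y = (pow(mu, r, N) * y) % N            # y' = mu^r * y
        T = half                               # the claim is now x'^(2^half) = y'
    return pow(x, 2, N) == y                   # base case T = 1

N = 3233                                       # toy modulus (61 * 53)
x, T = 5, 16
y = pow(x, 1 << T, N)
assert poe_verify(x, y, T, N)
```

Here the honest prover's midpoint is recomputed by direct exponentiation; in practice it is obtained from the intermediate values of the long squaring chain.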
Keegan Ryan, Nadia Heninger
ePrint Report
We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, $\omega\in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O\left(n^\omega (p + n)^{1 + \varepsilon}\right)$ for precision $p = O(\log \|B\|_{max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
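
For context, the classical baseline is textbook LLL; the sketch below implements it over exact rationals. It is the point of comparison only, not the recursive, precision-managed algorithm described in the abstract.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL over exact rationals (baseline sketch)."""
    B = [[Fraction(v) for v in row] for row in basis]
    n = len(B)

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def gram_schmidt():
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = B[i][:]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        _, mu = gram_schmidt()
        for j in range(k - 1, -1, -1):                 # size-reduce b_k
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                _, mu = gram_schmidt()
        Bs, mu = gram_schmidt()
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                                     # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]            # swap and step back
            k = max(k - 1, 1)
    return [[int(v) for v in row] for row in B]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))         # reduce a small basis
```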
Taiga Hiroka, Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Tapas Pal, Takashi Yamakawa
ePrint Report
We study certified everlasting secure functional encryption (FE) and many other cryptographic primitives in this work. Certified everlasting security roughly means the following. A receiver possessing a quantum cryptographic object (such as a ciphertext) can issue a certificate showing that the receiver has deleted the cryptographic object and that the information included in the object (such as the plaintext) has been lost. If the certificate is valid, security is guaranteed even if the receiver becomes computationally unbounded after the deletion. Many cryptographic primitives are known to be impossible (or unlikely) to achieve information-theoretic security even in the quantum world. Hence, certified everlasting security is a nice compromise (intrinsic to quantum).

In this work, we define certified everlasting secure versions of FE, compute-and-compare obfuscation, predicate encryption (PE), secret-key encryption (SKE), public-key encryption (PKE), receiver non-committing encryption (RNCE), and garbled circuits. We also present the following constructions:

- Adaptively certified everlasting secure collusion-resistant public-key FE for all polynomial-size circuits from indistinguishability obfuscation and one-way functions.

- Adaptively certified everlasting secure bounded collusion-resistant public-key FE for $\mathsf{NC}^1$ circuits from standard PKE.

- Certified everlasting secure compute-and-compare obfuscation from standard fully homomorphic encryption and standard compute-and-compare obfuscation.

- Adaptively (resp., selectively) certified everlasting secure PE from standard adaptively (resp., selectively) secure attribute-based encryption and certified everlasting secure compute-and-compare obfuscation.

- Certified everlasting secure SKE and PKE from standard SKE and PKE, respectively.

- Certified everlasting secure RNCE from standard PKE.

- Certified everlasting secure garbled circuits from standard SKE.
Anubhab Baksi, Jakub Breier, Vishnu Asutosh Dasu, Xiaolu Hou, Hyunji Kim, Hwajeong Seo
ePrint Report
Machine Learning (ML) is almost ubiquitously used in multiple disciplines nowadays. Recently, we have seen its usage in the realm of differential distinguishers for symmetric key ciphers. In this work, we explore the susceptibility of a number of ciphers to various ML-based distinguishers.

We show new distinguishers on unkeyed and round-reduced versions of SPECK-32, SPECK-128, ASCON, SIMECK-32, SIMECK-64 and SKINNY-128. We explore multiple avenues in the process. In summary, we use neural networks as well as support vector machines in various settings (such as varying the activation function), apart from experimenting with a number of input difference tuples. Among other results, we show a distinguisher for 8-round SPECK-32 that works with practical data complexity (most of the experiments take a few hours on a personal computer).
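
As a rough illustration of the general methodology (not the paper's exact setup), the following sketch trains a small neural network to distinguish pairs of round-reduced, unkeyed SPECK-32 outputs with a fixed input difference from random pairs. The "unkeyed" variant here simply uses the round index in place of a round key, and the network size, data size and input difference are arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

MASK = 0xFFFF
def rol(v, r): return ((v << r) | (v >> (16 - r))) & MASK
def ror(v, r): return ((v >> r) | (v << (16 - r))) & MASK

def speck_round(x, y, k):
    """One SPECK-32 round: rotate, modular add, xor key, rotate, xor."""
    x = (ror(x, 7) + y) & MASK
    x ^= k
    y = rol(y, 2) ^ x
    return x, y

def encrypt_unkeyed(x, y, rounds):
    # Unkeyed toy variant: the round index stands in for the round key.
    for i in range(rounds):
        x, y = speck_round(x, y, i)
    return x, y

def to_bits(*words):
    return [int(b) for w in words for b in f"{w:016b}"]

def make_dataset(n, rounds, diff=(0x0040, 0x0000)):
    rng = np.random.default_rng(1)
    X, labels = [], []
    for _ in range(n):
        x0, y0 = (int(v) for v in rng.integers(0, 1 << 16, size=2))
        real = int(rng.integers(0, 2))
        if real:
            x1, y1 = x0 ^ diff[0], y0 ^ diff[1]        # fixed input difference
        else:
            x1, y1 = (int(v) for v in rng.integers(0, 1 << 16, size=2))
        X.append(to_bits(*encrypt_unkeyed(x0, y0, rounds),
                         *encrypt_unkeyed(x1, y1, rounds)))
        labels.append(real)
    return np.array(X), np.array(labels)

X, y = make_dataset(20000, rounds=5)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50)
clf.fit(X[:16000], y[:16000])
print("distinguishing accuracy:", clf.score(X[16000:], y[16000:]))
```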
Rupeng Yang
ePrint Report
A private puncturable pseudorandom function (PRF) enables one to create a constrained version of a PRF key, which can be used to evaluate the PRF at all but some punctured points. In addition, the constrained key reveals no information about the punctured points and the PRF values on them. Existing constructions of private puncturable PRFs are only proven to be secure against a restricted adversary that must commit to the punctured points before viewing any information. It is an open problem to achieve the more natural adaptive security, where the adversary can make all its choices on-the-fly.

In this work, we solve the problem by constructing an adaptively secure private puncturable PRF from standard lattice assumptions. To achieve this goal, we present a new primitive called explainable hash, which allows one to reprogram the hash function on a given input. The new primitive may find further applications in constructing more cryptographic schemes with adaptive security. In addition, our construction has collusion-resistant pseudorandomness, which requires that even given multiple constrained keys, no one can learn the values of the PRF at the punctured points. Private puncturable PRFs with collusion-resistant pseudorandomness were previously known only from multilinear maps or indistinguishability obfuscation, and we provide the first solution from standard lattice assumptions.
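
For contrast with the private notion studied in this paper, the following is a minimal sketch of the standard (non-private) GGM-based puncturable PRF. Note that the punctured key consists of labeled co-path seeds whose prefixes reveal the punctured point, which is precisely the leakage that private puncturable PRFs must avoid.

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG from a hash (illustrative only)."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def eval_prf(key: bytes, x: str) -> bytes:
    """GGM evaluation: walk the binary tree along the bits of x."""
    seed = key
    for bit in x:
        left, right = prg(seed)
        seed = right if bit == "1" else left
    return seed

def puncture(key: bytes, x_star: str):
    """Punctured key: the seeds of the siblings along the path to x_star."""
    seed, ck = key, []
    for i, bit in enumerate(x_star):
        left, right = prg(seed)
        if bit == "1":
            ck.append((x_star[:i] + "0", left))    # sibling subtree
            seed = right
        else:
            ck.append((x_star[:i] + "1", right))
            seed = left
    return ck

def eval_punctured(ck, x: str) -> bytes:
    """Evaluate at any x != x_star using the matching sibling seed."""
    for prefix, seed in ck:
        if x.startswith(prefix):
            return eval_prf(seed, x[len(prefix):])
    raise ValueError("x equals the punctured point")

key = b"\x00" * 32
ck = puncture(key, "1011")
assert eval_punctured(ck, "1010") == eval_prf(key, "1010")
```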
Varun Narayanan, Vinod M. Prabhakaran, Neha Sangwan, Shun Watanabe
ePrint Report
Unconditionally secure broadcast among parties connected by pairwise secure links is feasible only if there is a strict two-thirds majority of honest parties when no additional resources are available. This limitation may be circumvented when the parties have recourse to additional resources such as correlated randomness. Fitzi, Wolf, and Wullschleger (CRYPTO 2004) attempted to characterize the conditions on correlated randomness shared among three parties which would enable them to realize broadcast. Due to a gap in their impossibility argument, it turns out that their characterization is incorrect. Using a novel construction, we show that broadcast is feasible under a considerably larger class of correlations. In fact, we realize pseudo-signatures, which are information-theoretic counterparts of digital signatures, from which unconditionally secure broadcast may be obtained. We also obtain a matching impossibility result, thereby characterizing the class of correlations on which three-party broadcast (and pseudo-signatures) can be based. Our impossibility proof, which extends the well-known argument of Fischer, Lynch and Merritt (Distr. Comp., 1986) to the case where parties observe correlated randomness, may be of independent interest.
Martin R. Albrecht, Alex Davidson, Amit Deo, Daniel Gardham
ePrint Report
Partially Oblivious Pseudorandom Functions (POPRFs) are 2-party protocols that allow a client to learn pseudorandom function (PRF) evaluations on inputs of its choice from a server. The client submits two inputs, one public and one private. The security properties ensure that the server cannot learn the private input and the client cannot learn more than one evaluation per POPRF query. POPRFs have many applications including password-based key exchange and privacy-preserving authentication mechanisms. However, most constructions are based on classical assumptions and those with post-quantum security suffer from large efficiency drawbacks.

In this work, we construct a novel POPRF from lattice assumptions and the "Crypto Dark Matter" PRF candidate (TCC'18) in the random oracle model. At a conceptual level, our scheme exploits the alignment between this family of PRF candidates, which relies on mixed-modulus computations, and programmable bootstrapping in the "3rd gen" torus fully homomorphic encryption scheme (TFHE). We show that our construction achieves malicious client security based on circuit-private FHE, and client privacy from the semantic security of the FHE scheme. We further explore a heuristic approach to extend our scheme to support verifiability, based on the difficulty of computing cheating circuits in low depth; this would yield a verifiable (P)OPRF. We provide a proof-of-concept implementation and benchmarks of our construction using the "Concrete" TFHE software library. For the core online OPRF functionality, client operations take only a few milliseconds, while server evaluation takes less than 3 seconds.