International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.


23 February 2023

Dinh Duy Nguyen, Duong Hieu Phan, David Pointcheval
ePrint Report
Joint computation on encrypted data is becoming increasingly crucial with the rise of cloud computing. In theory, multi-party computation (MPC) allows for secure computation, but it is often impractical due to intensive interaction between users. In recent years, the development of multi-client functional encryption (MCFE) has made it possible to perform joint computation on private inputs without any interaction. Well-established solutions for linear functions have become efficient and secure, but one shortcoming remains: if one user inputs incorrect data, the output of the function might become meaningless for all other users (while remaining useful to the malicious user). To address this issue, the concept of verifiable functional encryption was introduced by Badrinarayanan et al. at Asiacrypt '16 (BGJS). However, their solution was impractical because of strong statistical requirements. More recently, Bell et al. introduced a related concept for secure aggregation with their ACORN solution, but it requires multiple rounds of interaction between users. In this paper:
- We first propose a computational definition of verifiability for MCFE. Our notion covers the computational version of BGJS and extends it to handle any valid inputs defined by predicates. The BGJS notion corresponds to the particular case of a fixed predicate in our setting.
- We then design a concrete construction of verifiable MCFE for inner-product computations where the inputs lie within a range. Verifiability cannot be obtained from classical proof systems alone, because the encryption key is usually secret in MCFE and the encryptor can maliciously perform the encryption without being detected. We therefore combine techniques such as commitments and range proofs to achieve verifiability. Our approach also applies to input validation for secure aggregation as a special case.
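As a rough illustration of the commitment building block mentioned above (a minimal sketch with toy parameters, not the verifiable MCFE construction from the paper), a Pedersen commitment binds an encryptor to its input while hiding it; a range proof would then be given over the committed value. All parameters below are assumptions chosen only for readability.

# Minimal Pedersen commitment sketch in Python (illustrative only; NOT the
# paper's construction, and the toy parameters are far too small for real use).
import secrets

p, q = 23, 11   # toy safe prime p = 2q + 1; commitments live in the order-q subgroup
g, h = 2, 3     # two generators of that subgroup (toy choice; log_g(h) must be unknown in practice)

def commit(m, r=None):
    """Commit to m in Z_q with randomness r; returns (commitment, opening)."""
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, m % q, p) * pow(h, r, p)) % p, r

def open_check(c, m, r):
    """Verify an opening: c == g^m * h^r (mod p)."""
    return c == (pow(g, m % q, p) * pow(h, r, p)) % p

c, r = commit(7)
assert open_check(c, 7, r) and not open_check(c, 8, r)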
Florian Hirner, Ahmet Can Mert, Sujoy Sinha Roy
ePrint Report
Emerging cryptographic algorithms such as fully homomorphic encryption (FHE) and zero-knowledge proofs (ZKP) perform arithmetic involving very large polynomials. One fundamental and time-consuming polynomial operation is the number theoretic transform (NTT), a generalization of the fast Fourier transform. Hardware platforms such as FPGAs can be used to accelerate the NTTs in FHE and ZKP protocols. One major problem is that FHE and ZKP protocols require different parameter sets, e.g., polynomial degree and coefficient size, depending on their applications. A basic research question is therefore: how do we design scalable hardware architectures for accelerating NTTs in FHE and ZKP protocols? In this paper, we present ‘PROTEUS’, an open-source and parametric tool that generates synthesizable, bandwidth-efficient NTT architectures for user-specified parameter sets. The architectures can be tuned to utilize different memory bandwidths and parameters, which is an important design requirement in both FHE and ZKP protocols. The generated NTT architectures show a significant performance speedup compared to similar NTT architectures on FPGA. Further comparisons with the state of the art show reductions of up to 23% and 35% in DSP and BRAM utilization, respectively.
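As a minimal software sketch of the operation being accelerated (toy parameters chosen for illustration, not those generated by PROTEUS), a radix-2 NTT evaluates a polynomial at the powers of a root of unity modulo an NTT-friendly prime:

# Minimal radix-2 NTT / inverse-NTT sketch in Python (illustrative only).
# Toy parameters: Q = 17 is prime, N = 8 divides Q - 1, and 9 has order 8 mod 17.

def ntt(a, omega, q):
    """Evaluate the coefficient vector a (length a power of two) at powers of omega mod q."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % q, q)
    odd = ntt(a[1::2], omega * omega % q, q)
    out, w = [0] * n, 1
    for k in range(n // 2):
        t = w * odd[k] % q
        out[k] = (even[k] + t) % q
        out[k + n // 2] = (even[k] - t) % q   # uses omega^(n/2) = -1 mod q
        w = w * omega % q
    return out

def intt(A, omega, q):
    """Inverse NTT: run the forward transform with omega^-1 and scale by n^-1 mod q."""
    n = len(A)
    a = ntt(A, pow(omega, -1, q), q)
    n_inv = pow(n, -1, q)
    return [x * n_inv % q for x in a]

Q, N, OMEGA = 17, 8, 9
poly = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(poly, OMEGA, Q), OMEGA, Q) == poly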
Panos Kampanakis, Tancrède Lepoint
ePrint Report
Cryptographic algorithms are vital components ensuring the privacy and security of computer systems. They have constantly improved and evolved over the years following new developments, attacks, breaks, and lessons learned. A recent example is that of quantum-resistant cryptography, which has gained a lot of attention in the last decade and is leading to new algorithms being standardized today. These algorithms, however, present a real challenge: they come with strikingly different size and performance characteristics from their classical counterparts. At the same time, common foundational aspects of our transport protocols have lagged behind, as the Internet remains a very diverse space in which different use-cases and parts of the world have different needs.

This vision paper motivates more research and possible standards updates related to the upcoming quantum-resistant cryptography migration. It stresses the importance of amplification reflection attacks and congestion control concerns in transport protocols and presents research and standardization takeaways for assessing the impact and the efficacy of potential countermeasures. It emphasizes the need to go beyond the standardization of key encapsulation mechanisms in order to address the numerous protocols and deployments of public-key encryption while avoiding pitfalls. Finally, it motivates the critical need for research in anonymous credentials and blind signatures at the core of numerous deployments and standardization efforts aimed at providing privacy-preserving trust signals.
James Bartusek, Sanjam Garg, Vipul Goyal, Dakshita Khurana, Giulio Malavolta, Justin Raizes, Bhaskar Roberts
ePrint Report
Can we outsource computation on encrypted data, while ensuring that the data is certifiably, information-theoretically deleted by the server after computation? Can we encode a computer program in a manner that preserves its functionality, while allowing an evaluator to {\em prove that they deleted the program}?

This work answers the above questions, providing the first fully (maliciously) secure solution to the question of blind delegation with certified deletion, and the first solution to the question of obfuscation with certified deletion. Unlike prior work on deletion, these settings require security in the presence of repeated access to partial decryptions of encoded data, followed by certified deletion of the (rest of the) encoded data. To enable security, we introduce a powerful new paradigm for secure information-theoretic deletion of data based on quantum \emph{subspace coset states}. We obtain the following results.

Blind Delegation with Certified Deletion
- Assuming the quantum hardness of learning with errors, we obtain maliciously-secure blind delegation with certified deletion. This improves upon prior protocols by Poremba (ITCS 2023) and Bartusek and Khurana (arXiv 2022) that we show are insecure against a malicious server.
- Assuming sub-exponentially quantum-secure indistinguishability obfuscation, we obtain a \emph{two-message} protocol for blind delegation with certified deletion. All previous protocols required multiple rounds of interaction between the client and server.

Obfuscation with Certified Deletion
- Assuming post-quantum indistinguishability obfuscation, we obtain a construction of differing-inputs obfuscation with certified deletion, for a polynomial number of differing inputs. As an immediate corollary, we obtain a strong variant of secure software leasing for every differing-inputs circuit family.
- We obtain two flavors of functional encryption with certified deletion, one where ciphertexts can be certifiably deleted, and the other where secret keys can be certifiably deleted, assuming appropriate variants of indistinguishability obfuscation and other standard assumptions.
- We show how to prepare an ``oracle with certified deletion'' implementing any efficient classical functionality.

Additional Results
- Assuming post-quantum CCA-secure public-key encryption, we obtain a notion of CCA-secure public-key encryption with certified deletion. We view this primarily as a pedagogical tool towards understanding our technique.
- Assuming post-quantum indistinguishability obfuscation, we show how to generically add a \emph{publicly-verifiable} certified deletion property to a variety of cryptosystems. Publicly-verifiable deletion schemes prior to our work either relied on unproven conjectures (Poremba, ITCS 2023) or structured oracles (Hiroka et al., Asiacrypt 2021).

All our primitives satisfy {\em everlasting security after deletion}, except for functional encryption with deletion for secret keys, where a computational certified deletion guarantee is inherent.
Shweta Agrawal, Fuyuki Kitagawa, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
ePrint Report
We introduce the notion of public key encryption with secure key leasing (PKE-SKL). Our notion supports the leasing of decryption keys so that a leased key achieves the decryption functionality but comes with the guarantee that if the quantum decryption key returned by a user passes a validity test, then the user has lost the ability to decrypt. Our notion is similar in spirit to the notion of secure software leasing (SSL) introduced by Ananth and La Placa (Eurocrypt 2021) but captures significantly more general adversarial strategies. In more detail, our adversary is not restricted to using an honest evaluation algorithm to run pirated software. Our results can be summarized as follows:

1. Definitions: We introduce the definition of PKE with secure key leasing and formalize a security notion that we call indistinguishability against key leasing attacks (IND-KLA security). We also define a one-wayness notion for PKE-SKL that we call OW-KLA security and show that an OW-KLA secure PKE-SKL scheme can be lifted to an IND-KLA secure one by using the (quantum) Goldreich-Levin lemma.

2. Constructing IND-KLA PKE with Secure Key Leasing: We provide a construction of OW-KLA secure PKE-SKL (which implies IND-KLA secure PKE-SKL as discussed above) by leveraging a PKE scheme that satisfies a new security notion that we call consistent or inconsistent security against key leasing attacks (CoIC-KLA security). We then construct a CoIC-KLA secure PKE scheme using 1-key Ciphertext-Policy Functional Encryption (CPFE) that in turn can be based on any IND-CPA secure PKE scheme.

3. Identity Based Encryption, Attribute Based Encryption and Functional Encryption with Secure Key Leasing: We provide definitions of secure key leasing in the context of advanced encryption schemes such as identity based encryption (IBE), attribute-based encryption (ABE) and functional encryption (FE). Then we provide constructions by combining the above PKE-SKL with standard IBE, ABE and FE schemes.

Notably, our definitions allow the adversary to request distinguishing keys in the security game, namely, keys that distinguish the challenge bit by simply decrypting the challenge ciphertext, so long as it returns them (and they pass the validity test) before it sees the challenge ciphertext. All our constructions satisfy this stronger definition, albeit with the restriction that only a bounded number of such keys be allowed to the adversary in the IBE and ABE (but not FE) security games.

Prior to our work, the notion of single decryptor encryption (SDE) had been studied in the context of PKE (Georgiou and Zhandry, ePrint 2020) and FE (Kitagawa and Nishimaki, Asiacrypt 2022), but all their constructions rely on strong assumptions including indistinguishability obfuscation. In contrast, our constructions do not require any additional assumptions, showing that PKE/IBE/ABE/FE can be upgraded to support secure key leasing for free.
Yanbo Chen
ePrint Report
A multi-signature scheme allows multiple signers to jointly sign a common message. Recently, two lattice-based two-round multi-signature schemes based on Dilithium-G were proposed: DOTT by Damgård, Orlandi, Takahashi, and Tibouchi (PKC'21) and MuSig-L by Boschini, Takahashi, and Tibouchi (CRYPTO'22).

In this work, we propose a lattice-based two-round multi-signature scheme called DualMS. Compared to DOTT, DualMS is likely to significantly reduce signature size, since it replaces an opening to a homomorphic trapdoor commitment with a Dilithium-G response in the signature. Compared to MuSig-L, concrete parameters show that DualMS has smaller public keys and signatures and lower communication, although its first round cannot be preprocessed offline as in MuSig-L.

The main reason behind these improvements is the trapdoor-free "dual signing simulation" of our scheme. Signature simulation in DualMS is virtually identical to the normal signing procedure and, unlike in DOTT and MuSig-L, does not use lattice trapdoors.
Henri Gilbert, Rachelle Heim Boissier, Louiza Khati, Yann Rotella
ePrint Report
Duplex-based authenticated encryption modes with a sufficiently large key length are proven to be secure up to the birthday bound 2^(c/2), where c is the capacity. However, this bound is not known to be tight, and the complexity of the best known generic attack, which is based on multicollisions, is much larger: it reaches (2^c)/α, where α represents a small security loss factor. There is thus uncertainty about the true extent of security beyond the bound 2^(c/2) provided by such constructions. In this paper, we describe a new generic attack against several duplex-based AEAD modes. Our attack leverages the statistics of random functions and produces a forgery in time complexity O(2^(3c/4)) using negligible memory and no encryption queries. Furthermore, for some duplex-based modes, our attack recovers the secret key with a negligible amount of additional computation. Most notably, our attack breaks a security claim made by the designers of the NIST lightweight competition candidate Xoodyak. This attack is a step further towards determining the exact security provided by duplex-based constructions.
Sanjay Bhattacherjee, Julio Hernandez-Castro, Jack Moyler
ePrint Report
LLL-style lattice reduction algorithms employ two operations on ordered basis vectors, size reduction and reordering, to improve the basis quality by iteratively finding shorter and more orthogonal vectors. These algorithms typically have two design features. First, they work with a local or global measure of basis quality. Second, they reorder a subset of the basis vectors based on the basis quality before and after reordering. In this work, we introduce a new generic framework for designing lattice reduction algorithms. An algorithm in the framework makes greedy basis reordering choices globally on the whole basis in every iteration, based on a measure of basis quality. The greedy choice allows the desired quality to be attained very quickly, making the algorithms extremely efficient in practice. The framework is instantiated using two quality measures, (1) the potential of the basis and (2) the squared sum of its Gram-Schmidt orthogonalised vectors, to obtain two new basis reduction algorithms. We prove that both algorithms run in polynomial time and provide quality guarantees on their outputs. Our squared-sum-based algorithm has runtime close to that of LLL while outperforming BKZ-12 in output quality at higher dimensions. We have made our implementations and the experimental results public.
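As a rough sketch of the two quality measures named above (using one common convention for the potential; this is illustrative only, not the authors' implementation), both can be read off the Gram-Schmidt norms of the basis:

# Sketch of the two basis-quality measures (illustrative only, not the authors' code).
import numpy as np

def gram_schmidt_sq_norms(B):
    """Squared norms ||b_i*||^2 of the Gram-Schmidt vectors of the rows of B."""
    B = np.asarray(B, dtype=float)
    gs = []
    for b in B:
        v = b.copy()
        for u in gs:
            v -= (b @ u) / (u @ u) * u
        gs.append(v)
    return np.array([v @ v for v in gs])

def log_potential(B):
    """Log of the basis potential, sum_i (n - i) * log ||b_i*||^2 (one common convention)."""
    sq = gram_schmidt_sq_norms(B)
    n = len(sq)
    return float(sum((n - i) * np.log(sq[i]) for i in range(n)))

def squared_sum(B):
    """Squared sum of the Gram-Schmidt orthogonalised vectors, sum_i ||b_i*||^2."""
    return float(gram_schmidt_sq_norms(B).sum())

B = [[4, 1, 0], [1, 3, 1], [0, 1, 5]]   # toy basis (assumption)
print(log_potential(B), squared_sum(B))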

22 February 2023

Prague, Czechia, 10 September - 14 September 2023
CHES
Event date: 10 September to 14 September 2023
Taipei, Taiwan, 29 November - 2 December 2023
TCC
Event date: 29 November to 2 December 2023
Drew Stone
ePrint Report
In this paper, we present the Webb Protocol, a system for building and governing cross-chain applications that can support shared anonymity set functionality across a set of identical bridged systems on compatible blockchains. The Webb Protocol is composed of two major protocols that handle the storage, updating, and validation of data and state changes that occur on a bridge and are relevant to replicate on each connected chain. State is efficiently verifiable through the use of Merkle trees, and privacy is provided using zero-knowledge proofs of membership. Together, these allow one to create applications that leverage distributed state with private property-testing capabilities. Both financial and non-financial applications are described as motivating examples within the paper.
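A minimal sketch of the membership-verification idea follows (illustrative only; the Webb Protocol additionally wraps the leaf and path inside a zero-knowledge proof so the proved member stays private, and all names below are assumptions):

# Minimal Merkle root / membership-proof sketch in Python (illustrative only).
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    """All levels of a binary Merkle tree, duplicating the last node on odd levels."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    proof = []
    for level in merkle_levels(leaves)[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify_membership(root, leaf, index, proof):
    node = H(leaf)
    for sibling in proof:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [b"commitment-0", b"commitment-1", b"commitment-2", b"commitment-3"]
root = merkle_levels(leaves)[-1][0]
assert verify_membership(root, leaves[2], 2, merkle_proof(leaves, 2))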
Guangqiu Lv, Chenhui Jin, Ting Cui
ePrint Report
Differential-linear (DL) cryptanalysis has undergone remarkable advancements since it was first proposed by Langford and Hellman in 1994. At CRYPTO 2022, Niu et al. studied the (rotational) DL cryptanalysis of $n$-bit modular additions with two inputs, i.e., $\boxplus_2$, and presented a technique for evaluating the (rotational) DL correlation of ARX ciphers. However, the problem of how to automatically search for good DL trails on ARX with solvers was left open, which is the focus of this work.

In this paper, we solve this open problem through several complexity-reduction techniques and a transformation from a matrix multiplication chain to a Mixed Integer Quadratically-Constrained Program (MIQCP). First, the computational complexity of the DL correlation of $\boxplus_2$ is reduced to approximately one-eighth of the state of the art, and it can be computed by a $2\times2$ matrix multiplication chain of the same length as before. Methods to further reduce the complexity in special cases are also studied. Additionally, we show how to compute the extended (rotational) DL correlations of $\boxplus_k$ for $k\ge 2$, where the two output linear masks of the cipher pairs can be different. Second, to ensure that the existing solver Gurobi can compute DL correlations of $\boxplus_2$ (we use Gurobi together with some of its built-in functions such as LOG\_2 and ABS; the source code is available at \url{https://}), we propose a method to transform an arbitrary matrix multiplication chain into an MIQCP, which forms the foundation of our automatic search for DL trails in ARX ciphers. Third, in ARX ciphers, we use a single DL trail under some explicit conditions to give a good estimate of the correlation, which avoids exhausting over intermediate differences. We then derive an automatic method for evaluating the DL correlations of ARX, which we apply to Alzette and some versions of SPECK. Experimentally verified results confirm the validity of our method, with the predicted correlations being close to the experimental ones. To the best of our knowledge, this method finds the best DL distinguishers currently known for these ARX primitives. Furthermore, we present the lowest-time-complexity attacks to date against 12-14 rounds of SPECK32.
Jordan Frery, Andrei Stoian, Roman Bredehoft, Luis Montero, Celia Kherfallah, Benoit Chevallier-Mames, Arthur Meyre
ePrint Report
Privacy enhancing technologies (PETs) have been proposed as a way to protect the privacy of data while still allowing for data analysis. In this work, we focus on Fully Homomorphic Encryption (FHE), a powerful tool that allows arbitrary computations to be performed on encrypted data. FHE has received a lot of attention in the past few years and has reached realistic execution times and correctness.

More precisely, we explain in this paper how we apply FHE to tree-based models and get state-of-the-art solutions over encrypted tabular data. We show that our method is applicable to a wide range of tree-based models, including decision trees, random forests, and gradient boosted trees, and has been implemented within the Concrete-ML library, which is open-source at https://github.com/zama-ai/concrete-ml. With a selected set of use-cases, we demonstrate that our FHE version is very close to the unprotected version in terms of accuracy.
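As a cleartext sketch of why tree-based models fit FHE well (a hypothetical toy model with made-up names, not the Concrete-ML API), inference after quantization reduces to integer comparisons and additions, which is exactly the kind of circuit such compilers can evaluate homomorphically:

# Cleartext sketch: after quantization, tree inference is only integer
# comparisons and additions. Hypothetical toy stumps, not Concrete-ML's API.

def quantize(x, scale=16):
    """Map a float feature to a small integer (toy uniform quantization)."""
    return int(round(x * scale))

# A tiny hard-coded stump ensemble: (feature index, integer threshold, (left, right) votes).
TOY_TREES = [
    (0, quantize(0.5), (0, 1)),
    (1, quantize(1.2), (0, 1)),
]

def predict(features):
    """Sum the votes of all stumps; every step is a comparison or addition."""
    q = [quantize(f) for f in features]
    score = 0
    for feat, threshold, (left, right) in TOY_TREES:
        score += right if q[feat] > threshold else left
    return int(score >= len(TOY_TREES) / 2)

print(predict([0.7, 0.3]))   # -> 0 or 1 depending on the toy thresholds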
Andrei Stoian, Jordan Frery, Roman Bredehoft, Luis Montero, Celia Kherfallah, Benoit Chevallier-Mames
ePrint Report
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data without decryption. FHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information. A common way to provide a valuable service on such data is through machine learning and, at this time, neural networks are the dominant machine learning model for unstructured data.

In this work we show how to construct Deep Neural Networks (DNN) that are compatible with the constraints of TFHE, an FHE scheme that allows arbitrary depth computation circuits. We discuss the constraints and show the architecture of DNNs for two computer vision tasks. We benchmark the architectures using the Concrete stack, an open-source implementation of TFHE.
Junqing Gong, Ji Luo, Hoeteck Wee
ePrint Report
We present a pairing-based traitor tracing scheme for $N$ users with
$$|\mathsf{pk}| = |\mathsf{ct}| = O(N^{1/3}), \quad |\mathsf{sk}| = O(1).$$
This is the first pairing-based scheme to achieve ${|\mathsf{pk}|\cdot|\mathsf{sk}|\cdot|\mathsf{ct}|=o(N)}$. Our construction relies on the (bilateral) $k$-Lin assumption, and achieves private tracing and full collusion resistance. Our result simultaneously improves upon the sizes of $\mathsf{pk}$ and $\mathsf{ct}$ in Boneh–Sahai–Waters [Eurocrypt '06] and the size of $\mathsf{sk}$ in Zhandry [Crypto '20], while further eliminating the reliance on the generic group model in the latter work.
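For concreteness, the product of the stated sizes is indeed sublinear in $N$:
$$|\mathsf{pk}|\cdot|\mathsf{sk}|\cdot|\mathsf{ct}| = O(N^{1/3})\cdot O(1)\cdot O(N^{1/3}) = O(N^{2/3}) = o(N).$$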
Danping Shi, Siwei Sun, Ling Song, Lei Hu, Qianqian Yang
ePrint Report
The Demirci-Sel{\c{c}}uk meet-in-the-middle (DS-MITM) attack is a sophisticated variant of differential attacks. Due to its sophistication, it is hard to efficiently find the best DS-MITM attacks on most ciphers \emph{except} for AES. Moreover, current automatic tools only capture the most basic version of DS-MITM attacks, and the critical techniques developed for enhancing the attacks (e.g., differential enumeration and the key-dependent sieve) still rely on manual work. In this paper, we develop a full-fledged automatic framework integrating all known techniques (differential enumeration, the key-dependent sieve, key bridging, etc.) for the DS-MITM attack that can produce key-recovery attacks directly rather than only search for distinguishers. Moreover, we develop a new technique that is able to exploit partial key additions to generate more linear relations beneficial to the attacks. We apply the framework to the SKINNY family of block ciphers and obtain significantly improved results. In particular, all known DS-MITM attacks on the respective versions of SKINNY are improved by at least 2 rounds, and the data, memory, or time complexities of some attacks are reduced even compared to previous best attacks penetrating fewer rounds.
Kaihua Qin, Jens Ernstberger, Liyi Zhou, Philipp Jovanovic, Arthur Gervais
ePrint Report
Liquidations in DeFi are both a blessing and a curse: whereas liquidations protect lenders from capital loss, they simultaneously lead to liquidation spirals and system-wide failures. Since most lending and borrowing protocols assume liquidations are indispensable, there is increased interest in alternative constructions that prevent immediate systemic failure under uncertain circumstances.

In this work, we introduce reversible call options, a novel financial primitive that enables the seller of a call option to terminate it before maturity. We apply reversible call options to lending in DeFi and devise Miqado, a protocol for lending platforms to replace the liquidation mechanisms. To the best of our knowledge, Miqado is the first protocol that actively mitigates liquidations to reduce the risk of liquidation spirals. Instead of selling collateral, Miqado incentivizes external entities, so-called supporters, to top-up a borrowing position and grant the borrower additional time to rescue the debt. Our simulation shows that Miqado reduces the amount of liquidated collateral by 89.82% in a worst-case scenario.
Zhenzhen Bao, Seongha Hwang, Akiko Inoue, Byeonghak Lee, Jooyoung Lee, Kazuhiko Minematsu
ePrint Report
We present a new block cipher mode of operation for authenticated encryption (AE), dubbed XOCB, that has the following features: (1) beyond-birthday-bound (BBB) security based on the standard pseudorandom assumption of the internal block cipher if the maximum block length is sufficiently smaller than the birthday bound, (2) rate-1 computation, and (3) supporting any block cipher with any key length. Namely, XOCB has effectively the same efficiency as the seminal OCB while having stronger quantitative security without any change in the security model or the required primitive in OCB. Although numerous studies have been conducted in the past, our XOCB is the first mode of operation to achieve these multiple goals simultaneously.
James Bartusek, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
ePrint Report
We show how to obfuscate pseudo-deterministic quantum circuits, assuming the quantum hardness of learning with errors (QLWE) and post-quantum virtual black-box (VBB) obfuscation for classical circuits. Given the classical description of a quantum circuit $Q$, our obfuscator outputs a quantum state $\ket{\widetilde{Q}}$ that can be used to evaluate $Q$ repeatedly on arbitrary inputs.

Instantiating the VBB obfuscator for classical circuits with any candidate post-quantum indistinguishability obfuscator gives us the first candidate construction of indistinguishability obfuscation for all polynomial-size pseudo-deterministic quantum circuits. In particular, our scheme is the first candidate obfuscator for a class of circuits that is powerful enough to implement Shor's algorithm (SICOMP 1997).

Our approach follows Bartusek and Malavolta (ITCS 2022), who obfuscate null quantum circuits by obfuscating the verifier of an appropriate classical verification of quantum computation (CVQC) scheme. We go beyond null circuits by constructing a publicly-verifiable CVQC scheme for quantum partitioning circuits, which can be used to verify the evaluation procedure of Mahadev's quantum fully-homomorphic encryption scheme (FOCS 2018). We achieve this by upgrading the one-time secure scheme of Bartusek (TCC 2021) to a fully reusable scheme, via a publicly-decodable Pauli functional commitment, which we formally define and construct in this work. This commitment scheme, which satisfies a notion of binding against committers that can access the receiver's standard and Hadamard basis decoding functionalities, is constructed by building on techniques of Amos, Georgiou, Kiayias, and Zhandry (STOC 2020) introduced in the context of equivocal but collision-resistant hash functions.
Usman Ali, Hamza Omar, Chujiao Ma, Vaibhav Garg, Omar Khan
ePrint Report
Hardware-based Root of Trust (HRT) is considered the gold standard for bootstrapping trust in secure computing. This paper analyzes HRT implementations across state-of-the-art TEEs and differentiates them along two dimensions: 1) Security Properties & Threats and 2) Hardware Capabilities. This work then analyzes and compares 1) Intel SGX, 2) ARM TrustZone, 3) NXP Trust Architecture, 4) AMD SEV, 5) Microsoft Pluton, and 6) Apple T2 HRTs in terms of threats, security properties, and capabilities.