IACR News
Here you can see all recent updates to the IACR webpage.
02 December 2024
Nicholas Brandt, Mia Filić, Sam A. Markelon
ePrint Report
Key Transparency (KT) systems have emerged as a critical technology for securely distributing and verifying the correctness of public keys used in end-to-end encrypted messaging services. Despite substantial academic interest, increased industry adoption, and IETF standardization efforts, KT systems lack a holistic and formalized security model, limiting their resilience to practical threats and constraining future development. In this paper, we introduce the first cryptographically sound formalization of KT as an ideal functionality, clarifying the assumptions, security properties, and potential vulnerabilities of deployed KT systems. We identify a significant security concern — a possible impersonation attack by a malicious service provider — and propose a backward-compatible solution. Additionally, we address a core scalability bottleneck by designing and implementing a novel, privacy-preserving verifiable Bloom filter (VBF) that significantly improves KT efficiency without compromising security. Experimental results demonstrate the effectiveness of our approach, marking a step forward in both the theoretical and practical deployment of scalable KT solutions.
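The paper's verifiable Bloom filter construction is not reproduced here. Purely as a plaintext analogue (the class name, parameters, and hashing below are hypothetical and carry none of the privacy or proof machinery of the actual VBF), the following sketch shows the rough shape of a Bloom filter whose bit array is committed with a hash, so that answers to membership queries can later be checked against the commitment.

```python
import hashlib

class CommittedBloomFilter:
    """Toy Bloom filter whose bit array is committed with a single hash.
    A real verifiable Bloom filter would use a vector commitment so that
    individual bits can be opened succinctly, plus privacy protections."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k          # bit-array size and number of hash functions
        self.bits = bytearray(m)       # one byte per bit, for simplicity

    def _positions(self, item: bytes):
        # Derive k positions from the item via domain-separated SHA-256.
        return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest(), "big") % self.m
                for i in range(self.k)]

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p] = 1

    def commit(self) -> bytes:
        # Commitment to the whole bit array, published by the server.
        return hashlib.sha256(bytes(self.bits)).digest()

    def query(self, item: bytes):
        pos = self._positions(item)
        return all(self.bits[p] for p in pos), pos

# Usage: the server publishes commit(); a client re-derives the positions and
# checks the answer against the opened bits.
bf = CommittedBloomFilter()
bf.add(b"alice@example.com")
print(bf.query(b"alice@example.com")[0], bf.query(b"bob@example.com")[0])
```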
Yackolley Amoussou-Guenou, Maurice Herlihy, Maria Potop Butucaru
ePrint Report
The paper promotes a new design paradigm for Byzantine-tolerant distributed algorithms using trusted abstractions (oracles) specified in a functional manner. The contribution of the paper is conceptual. The objective is to design fundamental distributed algorithms, such as reliable broadcast and asynchronous Byzantine consensus, using trusted execution environments, and to help designers compare various solutions on a common ground.
In this framework we revisit Bracha's seminal work on asynchronous Byzantine consensus. Our solution uses a trusted monotonic counter abstraction and tolerates $t$ Byzantine processes in a system with $n$ processes, $n \geq 2t+1$. The keystone of our construction is a novel and elegant Byzantine reliable broadcast algorithm resilient to $t < n/2$ Byzantine processes.
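As a rough illustration of the kind of trusted abstraction meant here (a hypothetical interface sketched by the editor, not the paper's formal oracle specification), a monotonic counter that binds each attested message to a strictly increasing counter value prevents a Byzantine process from equivocating, i.e., from attesting two different messages under the same counter value.

```python
import hmac, hashlib

class MonotonicCounterOracle:
    """Toy trusted monotonic counter: each call binds a message to a strictly
    increasing counter value under a key held only by the trusted component."""

    def __init__(self, key: bytes):
        self._key = key
        self._ctr = 0

    def attest(self, message: bytes):
        self._ctr += 1                       # the counter can never be reused or rolled back
        tag = hmac.new(self._key, self._ctr.to_bytes(8, "big") + message,
                       hashlib.sha256).digest()
        return self._ctr, tag

    def verify(self, ctr: int, message: bytes, tag: bytes) -> bool:
        expected = hmac.new(self._key, ctr.to_bytes(8, "big") + message,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

# A process cannot produce two valid attestations for different messages under
# the same counter value, which is what rules out equivocation.
oracle = MonotonicCounterOracle(b"trusted-key")
c, t = oracle.attest(b"propose v")
assert oracle.verify(c, b"propose v", t)
```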
Jiacheng Gao, Yuan Zhang, Sheng Zhong
ePrint Report
Shuffle is a frequently used operation in secure multiparty computation, with various applications, including joint data analysis and anonymous communication systems. Most existing MPC shuffle protocols are constructed from MPC permutation protocols, which allow a party to securely apply its private permutation to an array of $m$ numbers shared among all $n$ parties. Following a ``permute-in-turn'' paradigm, these protocols result in $\Omega(n^2m)$ complexity in the semi-honest setting. Recent works have significantly improved efficiency and security by adopting a two-phase solution. Specifically, Eskandarian and Boneh demonstrate how to construct MPC shuffle protocols with linear complexity in both the semi-honest and malicious adversary settings. However, a more recent study by Song et al. reveals that Eskandarian and Boneh's protocol fails to achieve malicious security. Consequently, designing an MPC shuffle protocol with linear complexity and malicious security remains an open question.
In this paper, we address this question by presenting the first general construction of an MPC shuffle protocol that is maliciously secure and has linear online communication and computation complexity, utilizing black-box access to secure arithmetic MPC primitives and an MPC permutation protocol. When instantiating our construction with the SPDZ framework and the best existing maliciously secure MPC shuffle, our construction only slightly increases the offline overhead compared to the semi-honest secure version, and thus achieves a linear online phase almost for free. As our construction requires only black-box access to basic secure MPC primitives and permutation protocols, it is compatible with, and can be integrated into, most modern MPC frameworks. We provide formal security proofs for both the semi-honest and malicious settings, demonstrating that our maliciously secure construction achieves universally composable security. Experimental results indicate that our construction significantly enhances online performance while maintaining a moderate increase in offline overhead. Given that shuffle is a frequently used primitive in secure multiparty computation, we anticipate that our construction will accelerate many real-world MPC applications.
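To make the permute-in-turn cost concrete, here is a plaintext (insecure) simulation of that paradigm, assuming additive secret sharing; all function names and parameters are illustrative and this is not the paper's construction. Each of the $n$ parties in turn permutes and re-shares all $m$ values, touching all $n \cdot m$ shares, hence the $\Omega(n^2 m)$ total work.

```python
import random

def share(x, n, q=2**61 - 1):
    """Additively share integer x among n parties modulo q."""
    parts = [random.randrange(q) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % q)
    return parts

def permute_in_turn(columns, perms, q=2**61 - 1):
    """Plaintext simulation of the permute-in-turn paradigm: each party i in
    turn applies its private permutation perms[i] to the m shared values and
    re-shares them.  Every turn touches all n*m shares, so n turns cost
    Theta(n^2 * m) work/communication overall."""
    n = len(perms)
    for pi in perms:
        # apply this party's permutation to the m columns of shares
        columns = [columns[pi[j]] for j in range(len(columns))]
        # re-share each value so no single party learns the permutation's effect
        columns = [share(sum(col) % q, n, q) for col in columns]
    return columns

n, m = 3, 5
values = list(range(10, 10 + m))
cols = [share(v, n) for v in values]
perms = [random.sample(range(m), m) for _ in range(n)]
shuffled = [sum(col) % (2**61 - 1) for col in permute_in_turn(cols, perms)]
print(sorted(shuffled) == sorted(values))   # same multiset, order shuffled
```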
Sofiane Azogagh, Zelma Aubin Birba, Marc-Olivier Killijian, Félix Larose-Gervais
ePrint Report
In this paper we present RevoLUT, a library implemented in Rust that reimagines the use of look-up tables (LUTs) beyond their conventional role in function encoding, as commonly used in TFHE's programmable bootstrapping. Instead, RevoLUT leverages LUTs as first-class objects, enabling efficient oblivious operations such as array access, element sorting, and permutation directly within the table. This approach supports oblivious algorithms, providing a secure, privacy-preserving solution for handling sensitive data in various applications.
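The library itself is written in Rust over TFHE; purely as a plaintext analogue (not RevoLUT's API), the sketch below shows the two access patterns mentioned, blind array reads via a one-hot selector and conditional swaps, which is how oblivious sorting and permutation avoid data-dependent branching.

```python
def blind_read(table, index, size):
    """Read table[index] without branching on index: build a one-hot selector
    and take an inner product.  A homomorphic version performs the same
    arithmetic on encrypted selector bits, so the access pattern is
    data-independent."""
    selector = [1 if i == index else 0 for i in range(size)]
    return sum(s * t for s, t in zip(selector, table))

def blind_swap(table, i, j, swap_bit):
    """Conditionally swap table[i] and table[j] depending on swap_bit (0 or 1)
    without revealing whether a swap happened -- the arithmetic building block
    of a sorting-network-style oblivious sort over a table."""
    a, b = table[i], table[j]
    table[i] = swap_bit * b + (1 - swap_bit) * a
    table[j] = swap_bit * a + (1 - swap_bit) * b

lut = [7, 3, 5, 1]
print(blind_read(lut, 2, len(lut)))   # 5
blind_swap(lut, 0, 1, swap_bit=1)
print(lut)                            # [3, 7, 5, 1]
```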
29 November 2024
Aparna Gupte, Jiahui Liu, Justin Raizes, Bhaskar Roberts, Vinod Vaikuntanathan
ePrint Report
One-time programs (Goldwasser, Kalai and Rothblum, CRYPTO 2008) are functions that can be run on any single input of a user's choice, but not on a second input. Classically, they are unachievable without trusted hardware, but the destructive nature of quantum measurements seems to provide a quantum path to constructing them. Unfortunately, Broadbent, Gutoski and Stebila showed that even with quantum techniques, a strong notion of one-time programs, similar to ideal obfuscation, cannot be achieved for any non-trivial quantum function. On the positive side, Ben-David and Sattath (Quantum, 2023) showed how to construct a one-time program for a certain (probabilistic) digital signature scheme, under a weaker notion of one-time program security. There is a vast gap between achievable and provably impossible notions of one-time program security, and it is unclear what functionalities are one-time programmable under the achievable notions of security.
In this work, we present new, meaningful, yet achievable definitions of one-time program security for *probabilistic* classical functions. We show how to construct one-time programs satisfying these definitions for all functions in the classical oracle model and for constrained pseudorandom functions in the plain model. Finally, we examine the limits of these notions: we show a class of functions which cannot be one-time programmed in the plain model, as well as a class of functions which appears to be highly random given a single query, but whose one-time program form leaks the entire function even in the oracle model.
George Teseleanu
ePrint Report
In this paper, we present a generalization of Schnorr's digital signature that allows a user to simultaneously sign multiple messages. Compared to Schnorr's scheme that concatenates messages and then signs them, the new protocol takes advantage of multiple threads to process messages in parallel. We prove the security of our novel protocol and discuss different variants of it. Last but not least, we extend Ferradi et al.'s co-signature protocol by exploiting the inherent parallelism of our proposed signature scheme.
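For context, standard Schnorr signing and verification (the baseline that the paper generalizes to multiple messages) look roughly as follows; the group parameters below are toy-sized and insecure, and this is not the multi-message or co-signature protocol from the paper.

```python
import hashlib, random

# Toy Schnorr group: q = 11, p = 2q + 1 = 23, g = 4 has order q modulo p.
p, q, g = 23, 11, 4

def H(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(r.to_bytes(4, "big") + msg).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)        # secret key
    return x, pow(g, x, p)            # (sk, pk)

def sign(x, msg: bytes):
    k = random.randrange(1, q)        # per-signature nonce
    r = pow(g, k, p)
    e = H(r, msg)
    s = (k + e * x) % q
    return e, s

def verify(y, msg: bytes, sig) -> bool:
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p    # g^s * y^(-e) = g^k
    return e == H(r, msg)

sk, pk = keygen()
print(verify(pk, b"hello", sign(sk, b"hello")))   # True
```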
Yanyi Liu, Noam Mazor, Rafael Pass
ePrint Report
Witness encryption (WE) (Garg et al., STOC’13) is a powerful cryptographic primitive that is closely related to the notion of indistinguishability obfuscation (Barak et al., JACM’12; Garg et al., FOCS’13). For a given NP-language $L$, WE for $L$ enables encrypting a message $m$ using an instance $x$ as the public key, while ensuring that efficient decryption is possible by anyone possessing a witness for $x \in L$, and that if $x\notin L$, then the encryption is hiding. We show that this seemingly sophisticated primitive is equivalent to a communication-efficient version of one of the most classic cryptographic primitives, namely a zero-knowledge argument (Goldwasser et al., SIAM’89; Brassard et al., JCSS’88): for any NP-language $L$, the following are equivalent:
- There exists a witness encryption for $L$;
- There exists a laconic (i.e., the prover communication is bounded by $O(\log n)$) special-honest verifier zero-knowledge (SHVZK) argument for $L$.
Our approach is inspired by an elegant (one-sided) connection between (laconic) zero-knowledge arguments and public-key encryption established by Berman et al. (CRYPTO’17) and Cramer and Shoup (EuroCrypt’02).
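In symbols, the witness encryption notion referred to above is the standard one, restated here for convenience (the paper's exact formalization may differ):
\[
\mathsf{ct} \leftarrow \mathsf{Enc}(x, m): \quad
\begin{cases}
x \in L,\ w \text{ a witness for } x & \Longrightarrow\ \mathsf{Dec}(w, \mathsf{ct}) = m \quad \text{(correctness)},\\
x \notin L & \Longrightarrow\ \mathsf{Enc}(x, m_0) \approx_c \mathsf{Enc}(x, m_1) \quad \text{(security)}.
\end{cases}
\]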
Yanyi Liu, Noam Mazor, Rafael Pass
ePrint Report
We consider a generalization of the Learning With Errors problem, referred to as the white-box learning problem: you are given the code of a sampler that with high probability produces samples of the form $(y, f(y)+\epsilon)$, where $\epsilon$ is small and $f$ is computable by a polynomial-size circuit, and the computational task consists of outputting a polynomial-size circuit $C$ that, with probability, say, $1/3$ over a new sample $y$ drawn according to the same distribution, approximates $f(y)$ (i.e., $|C(y)-f(y)|$ is small). This problem can be thought of as a generalization of the Learning with Errors problem (LWE) from linear functions $f$ to polynomial-size computable functions.
We demonstrate that worst-case hardness of the white-box learning problem, conditioned on the instances satisfying a notion of computational shallowness (a concept from the study of Kolmogorov complexity), not only suffices to get public-key encryption but is also necessary; as such, this yields the first problem whose worst-case hardness characterizes the existence of public-key encryption. Additionally, our results highlight to what extent LWE “overshoots” the task of public-key encryption.
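Restated in symbols (this mirrors the informal description above; the paper's exact quantifiers and parameters may differ):
\[
\text{Given the code of a sampler } S \text{ producing } (y, z) \text{ with } z = f(y) + \epsilon,\ |\epsilon| \text{ small},
\]
\[
\text{output a polynomial-size circuit } C \text{ such that } \Pr_{y}\bigl[\,|C(y) - f(y)| \text{ is small}\,\bigr] \ \ge\ 1/3,
\]
\[
\text{where } y \text{ is drawn from the same distribution and } f \text{ is computable by a polynomial-size circuit;}
\]
the black-box variant gives the learner only oracle access to $S$ rather than its code.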
We complement these results by noting that worst-case hardness of the same problem, but restricting the learner to only get black-box access to the sampler, characterizes one-way functions.
Javier Gomez-Martinez, Dimitrios Vasilopoulos, Pedro Moreno-Sanchez, Dario Fiore
ePrint Report
In this work, we introduce Modular Algebraic Proof Contingent Payment (MAPCP), a novel zero-knowledge contingent payment (ZKCP) construction. Unlike previous approaches, MAPCP is the first that simultaneously avoids zk-SNARKs as the zero-knowledge proof tool and HTLC contracts for atomically exchanging a secret for a payment. As a result, MAPCP sidesteps the common reference string (CRS) creation problem and is compatible with virtually any cryptocurrency, even those with limited or no smart contract support. Moreover, MAPCP contributes to fungibility, as its payment transactions blend seamlessly with standard cryptocurrency payments.
We analyze the security of MAPCP and demonstrate its atomicity, meaning that (i) the buyer gets the digital product once the payment is published on the blockchain (buyer security), and (ii) the seller receives the payment if the buyer gets access to the digital product (seller security). Moreover, we present a construction of MAPCP in a use case where a customer pays a notary in exchange for a document signature.
Puja Mondal, Suparna Kundu, Supriya Adhikary, Angshuman Karmakar
ePrint Report
CROSS is a code-based post-quantum digital signature scheme based on a zero-knowledge (ZK) framework. It is a second-round candidate of the National Institute of Standards and Technology’s additional call for standardizing post-quantum digital signatures. The memory footprint of this scheme is prohibitively large, especially for small embedded devices. In this work, we propose various techniques to reduce the memory footprint of key generation, signature generation, and verification by as much as 50%, 52%, and 74%, respectively, on an ARM Cortex-M4 device. Moreover, our memory-optimized implementations adapt the countermeasure against the recently proposed (ASIACRYPT 2024) fault attacks on ZK-based signature schemes.
Akiko Inoue, Ashwin Jha, Bart Mennink, Kazuhiko Minematsu
ePrint Report
Authenticated encryption schemes guarantee that parties who share a secret key can communicate confidentially and authentically. One of the most popular and widely used authenticated encryption schemes is GCM by McGrew and Viega (INDOCRYPT 2004). However, despite its simplicity and efficiency, GCM also comes with its deficiencies, most notably devastating insecurity against nonce-misuse and imperfect security for short tags.
Very recently, Campagna, Maximov, and Mattsson presented GCM-SST (IETF Internet-Draft, 2024), a variant of GCM that uses a slightly more involved universal hash function composition, and claimed that this construction achieves stronger security in case of tag truncation. GCM-SST has already received considerable interest from industry (e.g., Amazon and Ericsson) and international organizations (e.g., IETF and 3GPP), but it has not received any generic security analysis to date.
In this work, we fill this gap and perform a detailed security analysis of GCM-SST. In particular, we prove that GCM-SST achieves security in the nonce-misuse resilience model of Ashur et al.~(CRYPTO 2017), roughly guaranteeing that even if nonces are reused, evaluations of GCM-SST for new nonces are secure. Our security bound also verifies the designers' (informal) claim on tag truncation. Additionally, we investigate and describe possibilities to optimize the hashing in GCM-SST further, and we describe a universal forgery attack with complexity around $2^{33.6}$ when the tag is 32 bits, improving over an earlier attack of complexity $2^{40}$ by Lindell.
Anup Kumar Kundu, Shibam Ghosh, Aikata Aikata, Dhiman Saha
ePrint Report
In this work, we introduce ToFA, the first fault attack (FA) strategy that attempts to leverage the classically well-known idea of impossible differential cryptanalysis to mount practically verifiable attacks on bit-oriented ciphers like GIFT and BAKSHEESH. The idea stems from the fact that truncated differential paths induced by fault injection in certain intermediate rounds of the ciphers lead to active S-boxes in subsequent rounds whose inputs admit specific truncated differences. This leads to a (multi-round) impossible differential distinguisher, which can be incrementally leveraged for key-guess elimination via partial decryption. The key-space reduction further exploits the multi-round impossibility, capitalizing on the relations due to the quotient-remainder (QR) groups of the GIFT and BAKSHEESH linear layers, which increases the filtering capability of the distinguisher. Moreover, the primary observations made in this work are independent of the actual S-box. Clock-glitch-based fault attacks were mounted on 8-bit implementations of GIFT-64/GIFT-128 using a ChipWhisperer-Lite board on an 8-bit ATXmega128D4-AU microcontroller. Unique key recovery was achieved for GIFT-128 with 3 random byte faults, while for GIFT-64 the key space was reduced to $2^{32}$ (the highest achievable for GIFT-64 with a single-level fault, due to its key schedule). This work also reports the highest fault injection penetration for any variant of GIFT and BAKSHEESH. Finally, this work reiterates the role of classical cryptanalysis strategies in fault vulnerability assessment while leading to the most efficient fault attacks on GIFT.
Shengyuan Xu, Siwei Chen, Xiutao Feng, Zejun Xiang, Xiangyong Zeng
ePrint Report
BAKSHEESH is a lightweight block cipher following up on the well-known cipher GIFT-128; it uses a 4-bit S-box that has a non-trivial linear structure (LS). The S-box also requires a low number of AND gates, which makes BAKSHEESH more resistant to side-channel attacks than GIFT-128. In this paper, we give the first third-party security analysis of BAKSHEESH from the perspective of traditional attacks: integral, differential and linear attacks. Firstly, we propose a framework for integral attacks based on the properties of BAKSHEESH's S-box and its inverse. With this, we achieve practical key-recovery attacks on 9 and 10 rounds, and give a theoretical attack on 15 rounds. Secondly, we re-evaluate the security bound against differential cryptanalysis, correcting two errors from the original paper and presenting a key-recovery attack on 19 rounds. Finally, for linear cryptanalysis, we develop an automated model for key-recovery attacks and demonstrate a key-recovery attack on 21 rounds. We stress that our attacks cannot threaten the full-round BAKSHEESH, but they give a deeper understanding of its security.
Simon Judd
ePrint Report
We present EndGame, a novel blockchain architecture that achieves succinctness through Reed-Solomon accumulation schemes. Our construction enables constant-time verification of blockchain state while maintaining strong security properties. We demonstrate how to efficiently encode blockchain state transitions using Reed-Solomon codes and accumulate proofs of state validity using the ARC framework. Our protocol achieves optimal light client verification costs and supports efficient state management without trusted setup.
Giulia Gaggero, Elisa Gorla
ePrint Report
In this paper, we discuss what it means for a polynomial system to be random and how hard it is to solve a random polynomial system. We propose an algebraic definition of randomness, which we call algebraic randomness. Using a conjecture from commutative algebra, we produce a sharp upper bound for the degree of regularity, hence for the complexity of solving an algebraically random polynomial system by Gröbner basis methods. As a proof of concept, we apply our result to Rainbow and GeMSS and show that these systems are far from being algebraically random.
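For context, the standard bound that this line of work builds on (stated here for semi-regular sequences; the paper's notion of algebraic randomness and its precise statement may differ) reads:
\[
d_{\mathrm{reg}} \;=\; \min\Bigl\{\, d \ge 0 \;:\; [z^{d}]\;\frac{\prod_{i=1}^{m}\bigl(1 - z^{d_i}\bigr)}{(1-z)^{n}} \;\le\; 0 \,\Bigr\},
\]
for a system of $m$ equations of degrees $d_1, \dots, d_m$ in $n$ variables; the cost of a Gröbner basis computation is then roughly $\binom{n + d_{\mathrm{reg}}}{d_{\mathrm{reg}}}^{\omega}$ field operations.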
Jianjun HU
ePrint Report
In 2016, Petit et al. first studied the implementation of the index calculus method on elliptic curves over prime finite fields, and in 2018, Momonari and Kudo et al. improved the algorithm of Petit et al. This paper analyzes the research results of Petit, Momonari and Kudo, and points out the existing problems of the algorithm. Therefore, with the help of summation polynomials and index calculus, a pseudo-index calculus algorithm for the elliptic curve discrete logarithm problem over prime finite fields is proposed, and its correctness is analyzed and verified. It is pointed out that there is no subexponential-time method for solving discrete logarithms on elliptic curves over prime finite fields, or at least, in the present research background, there is no method for solving discrete logarithms in subexponential time.
Brandon "Cryptskii" Ramsay
ePrint Report
We present a formal analysis of the Overpass protocol's hierarchical state channel architecture, focusing on its unique approach to state synchronization and tamper detection through cryptographic primitives. The protocol achieves global state consistency without traditional consensus mechanisms by leveraging Sparse Merkle Trees (SMTs), zero-knowledge proofs, and a deterministic hierarchical structure. We provide mathematical proofs of security properties and analyze the protocol's efficiency in terms of computational and communication complexity.
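For readers unfamiliar with the primitive, here is a minimal sparse Merkle tree sketch (toy depth and hashing chosen by the editor; the Overpass tree parameters and hash function are not specified here). It shows the root computation over a mostly empty key space using precomputed empty-subtree hashes, and the standard sibling-path proof verification.

```python
import hashlib

DEPTH = 16                                   # toy depth; production SMTs often use 256

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# default (empty-subtree) hashes, from leaves up to the root
EMPTY = [H(b"")]
for _ in range(DEPTH):
    EMPTY.append(H(EMPTY[-1] + EMPTY[-1]))

def root_and_proof(leaves: dict, key: int):
    """Compute the SMT root over {index: value} and the sibling-hash proof for
    `key`.  O(#leaves * DEPTH) toy version; empty subtrees use EMPTY hashes."""
    level = {k: H(v) for k, v in leaves.items()}
    proof, idx = [], key
    for d in range(DEPTH):
        proof.append(level.get(idx ^ 1, EMPTY[d]))
        nxt = {}
        for k in level:
            parent = k >> 1
            left = level.get(2 * parent, EMPTY[d])
            right = level.get(2 * parent + 1, EMPTY[d])
            nxt[parent] = H(left + right)
        level, idx = nxt, idx >> 1
    return level.get(0, EMPTY[DEPTH]), proof

def verify(root: bytes, key: int, value: bytes, proof) -> bool:
    h = H(value)
    for d, sib in enumerate(proof):
        h = H(sib + h) if (key >> d) & 1 else H(h + sib)
    return h == root

leaves = {3: b"state-A", 9: b"state-B"}
root, proof = root_and_proof(leaves, 3)
print(verify(root, 3, b"state-A", proof))    # True
```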
Antonina Bondarchuk, Olive Chakraborty, Geoffroy Couteau, Renaud Sirdey
ePrint Report
This paper focuses on the issue of reducing the bandwidth requirement for FHE ciphertext transmission. While this issue has been extensively studied from the uplink viewpoint (transmission of encrypted inputs towards an FHE calculation), where several approaches exist to essentially cancel FHE ciphertext expansion, the downlink case (transmission of encrypted results towards an end-user) has been the object of much less attention. In this paper, we address this latter issue with a particular focus on the TFHE scheme, for which we investigate a number of methods, including several approaches for switching to more compact linearly homomorphic schemes, reducing the precision of T(R)LWE coefficients (while maintaining acceptable probabilities of decryption errors), and others. We also investigate how to use these methods in combination, depending on the number of FHE results to transmit. We further perform extensive experiments demonstrating that the downlink FHE ciphertext expansion factor can be practically reduced to values below 10, depending on the setup, with little additional computational burden.
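One of the generic ideas mentioned, reducing the precision of (T)LWE coefficients, can be illustrated with plain LWE modulus switching; the parameters and helper names below are illustrative only and do not reflect the paper's concrete methods or error analysis.

```python
import random

def lwe_encrypt(m, s, q, n, delta, noise=4):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-noise, noise)
    b = (sum(ai * si for ai, si in zip(a, s)) + delta * m + e) % q
    return a, b

def mod_switch(a, b, q, p):
    """Rescale an LWE ciphertext from modulus q to a smaller modulus p.
    Each coefficient shrinks by log2(q/p) bits at the cost of a small extra
    rounding error absorbed by the noise budget."""
    rs = lambda x: round(x * p / q) % p
    return [rs(ai) for ai in a], rs(b)

def lwe_decrypt(a, b, s, q, delta):
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return round(phase / delta) % (q // delta)

n, q, p = 16, 2**32, 2**16
delta_q, delta_p = q // 16, p // 16          # 4-bit plaintext space
s = [random.randint(0, 1) for _ in range(n)]
a, b = lwe_encrypt(7, s, q, n, delta_q)
a2, b2 = mod_switch(a, b, q, p)
print(lwe_decrypt(a, b, s, q, delta_q), lwe_decrypt(a2, b2, s, p, delta_p))  # 7 7
```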
Cas Cremers, Aleksi Peltonen, Mang Zhao
ePrint Report
Despite decades of work on threshold signature schemes, there is still limited agreement on their desired properties and threat models. In this work we significantly extend and repair previous work to give a unified syntax for threshold signature schemes and a new hierarchy of security notions for them. Moreover, our new hierarchy allows us to develop an automated analysis approach for protocols that use threshold signatures. This approach can discover attacks that exploit the details of the security notion offered by the scheme in use, and can help choose the correct security notion (and a scheme that fulfills it) required for a specific protocol.
Unlike prior work, our syntax for threshold signatures covers both non-interactive and interactive signature schemes with any number of key generation and signing rounds, and our hierarchy of security notions additionally includes elements such as various types of corruption and malicious key generation. We show the applicability of our hierarchy by selecting representative threshold signature schemes from the literature, extracting their core security features, and categorizing them according to our hierarchy. As a side effect of our work, we show through a counterexample that a previous attempt at building a unified hierarchy of unforgeability notions does not meet its claimed ordering, and show how to repair it without further restricting the scope of the definitions.
Based on our syntax and hierarchy, we develop the first systematic, automated analysis method for higher-level protocols that use threshold signatures. We use a symbolic analysis framework to abstractly model threshold signature schemes that meet security notions in our hierarchy, and implement this in the Tamarin prover. Given a higher-level protocol that uses threshold signatures, and a security notion from our hierarchy, our automated approach can find attacks on such protocols that exploit the subtle differences between elements of our hierarchy. Our approach can be used to formally analyze the security implications of implementing different threshold signature schemes in higher-level protocols.
Aikata Aikata, Daniel Sanz Sobrino, Sujoy Sinha Roy
ePrint Report
Fully Homomorphic Encryption (FHE) enables privacy-preserving computation but imposes significant computational and communication overhead on the client for public-key encryption. To alleviate this burden, previous works have introduced the Hybrid Homomorphic Encryption (HHE) paradigm, which combines symmetric encryption with homomorphic decryption to enhance performance for the FHE client. While early HHE schemes focused on binary data, modern versions now support integer prime fields, improving their efficiency for practical applications such as secure machine learning.
Despite several HHE schemes proposed in the literature, there has been no comprehensive study evaluating their performance or area advantages over FHE for encryption tasks. This paper addresses this gap by presenting the first implementation of an HHE scheme, PASTA. It is a symmetric encryption scheme over integers designed to facilitate fast client encryption and homomorphic symmetric decryption on the server. We provide performance results for both FPGA and ASIC platforms, including a RISC-V System-on-Chip (SoC) implementation on a low-end 130nm ASIC technology, which achieves a 43–171$\times$ speedup compared to a CPU. Additionally, on high-end 7nm and 28nm ASIC platforms, our design demonstrates a 97$\times$ speedup over prior public-key client accelerators for FHE. We have made our design public and benchmarked an application to support future research.