IACR News
Here you can see all recent updates to the IACR webpage.
30 December 2024
Paola de Perthuis, Thomas Peters
ePrint Report
Traceable Receipt-free Encryption (TREnc) has recently been introduced as a verifiable public-key encryption primitive endowed with a unique security model. In a nutshell, TREnc allows randomizing ciphertexts in transit in order to remove any subliminal information up to a public trace that ensures the non-malleability of the underlying plaintext. A remarkable property of TREnc is the indistinguishability of the randomization of chosen ciphertexts against traceable chosen-ciphertext attacks (TCCA). The main application lies in voting systems: voters can encrypt their votes, trace whether a published ballot takes their choices into account, and are prevented from proving how they voted. While being a very promising primitive, the few existing TREnc mechanisms rely solely on discrete-logarithm-related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers.
We address this limitation by building the first TREnc whose privacy withstands the advent of quantum adversaries in the future. To design our construction, we first generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically secure encryption. Our more flexible model keeps all the ingredients generically implying receipt-free voting. Our instantiation relies on Ring Learning With Errors (RLWE) with pairing-based statistical zero-knowledge simulation-sound proofs from Groth-Sahai, and further enjoys a public-coin common reference string, removing the need for a trusted setup.
27 December 2024
Mallory Knodel, Andrés Fábrega, Daniella Ferrari, Jacob Leiken, Betty Li Hou, Derek Yen, Sam de Alfaro, Kyunghyun Cho, Sunoo Park
ePrint Report
End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.
This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI “assistants” within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features.
Mallory Knodel, Sofía Celi, Olaf Kolkman, Gurshabad Grover
ePrint Report
This document provides a definition of end-to-end encryption (E2EE). End-to-end encryption is an application of cryptographic mechanisms to provide security and privacy to communication between endpoints. Such communication can include messages, email, video, audio, and other forms of media. E2EE provides security and privacy through confidentiality, integrity, authenticity and forward secrecy for communication amongst people.
Alexander Frolov
ePrint Report
There are a variety of techniques for implementing read/write memory inside zero-knowledge proofs and validating the consistency of memory accesses. These techniques are generally implemented with the goal of building a RAM or ROM. In this paper, we present memory techniques for more specialized data structures: queues and stacks. We first demonstrate a technique for implementing queues in arithmetic circuits that requires 3 multiplication gates and 1 advice value per read and 2 multiplication gates per write. This is based on using Horner's Rule to evaluate 2 polynomials at random points and check that the values read from the queue are equal, as vectors, to the values written to the queue. Next, we present a stack scheme based on an optimized version of the RAM scheme of Yang and Heath that requires 5 multiplication gates and 4 advice values per read and 2 multiplication gates per write. This optimizes the RAM scheme by observing that reads and writes to a stack are already "paired", which avoids the need for inserting dummy operations for each access as in a RAM.
We also introduce a different notion of "multiplexing" or "operation privacy" that is better suited to the use case of stacks and queues. All of the techniques we provide are based on evaluating polynomials at random points and using randomly evaluated polynomials as universal hash functions to check set/vector equality.
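As a rough illustration of the underlying check (outside any proof system), the following Python sketch interprets the write sequence and the read sequence of a FIFO queue as coefficient vectors of two polynomials, evaluates both at a random point with Horner's rule, and compares the results; the field size and variable names are illustrative choices, not the paper's exact circuit.

import secrets

P = 2**61 - 1  # illustrative prime field

def horner_eval(coeffs, r, p=P):
    # evaluate the polynomial with the given coefficients at r via Horner's rule
    acc = 0
    for c in coeffs:
        acc = (acc * r + c) % p
    return acc

def queue_consistent(writes, reads, p=P):
    # probabilistic check that the read vector equals the write vector
    if len(writes) != len(reads):
        return False
    r = secrets.randbelow(p)  # random challenge; soundness via Schwartz-Zippel
    return horner_eval(writes, r, p) == horner_eval(reads, r, p)

# a FIFO queue must return values in exactly the order they were written
assert queue_consistent([3, 1, 4, 1, 5], [3, 1, 4, 1, 5])
assert not queue_consistent([3, 1, 4, 1, 5], [3, 1, 4, 5, 1])  # fails with overwhelming probability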
Mila Anastasova, Reza Azarderakhsh, Mehran Mozaffari Kermani
ePrint Report
To provide secure communication over an unprotected medium such as the Internet, network protocols have been established. These protocols employ public-key techniques to perform key exchange and authentication. Transport Layer Security (TLS) is a widely used network protocol that enables secure communication between a server and a client, and is employed in billions of transactions per second. Contemporary protocols depend on traditional methods that rely on the computational hardness of factorization or (elliptic curve) discrete logarithm problems. The ongoing advancement in the processing power of classical computers requires an ongoing increase in the security level of the underlying cryptographic algorithms. This study focuses on the analysis of Curve448 and the Edwards curve Ed448, renowned for their superior security features, which offer a 224-bit security level as part of the TLSv1.3 protocol. The exponential advancement of quantum computers, however, presents a substantial threat to secure network communication that depends on classical crypto schemes, irrespective of their degree of security. Quantum computers have the capability to solve these problems within a feasible timeframe. In order to successfully transition to post-quantum secure network protocols, it is imperative to deploy both classical and post-quantum algorithms concurrently. This is done to fulfill the requirements of both enterprises and governments, while also instilling more assurance in the reliability of the post-quantum systems. This paper presents a detailed hybrid implementation architecture of the TLSv1.3 network protocol. We showcase the first deployment of Curve448 and Crystals-Kyber for key exchange, and of Ed448 and Crystals-Dilithium for verifying the authenticity of entities and for X.509 Public Key Infrastructure (PKI). We rely on the widely used OpenSSL library and on the wolfSSL library for embedded devices to provide our results for server and client applications.
Yulin Zhao, Zhiguo Wan, Zhangshuang Guan
ePrint Report
Federated Learning (FL) enables collaborative model training while preserving data privacy by avoiding the sharing of raw data. However, in large-scale FL systems, efficient secure aggregation and dropout handling remain critical challenges. Existing state-of-the-art methods, such as those proposed by Liu et al. (UAI'22) and Li et al. (ASIACRYPT'23), suffer from prohibitive communication overhead, implementation complexity, and vulnerability to poisoning attacks. Alternative approaches that utilize partially connected graph structures (resembling client grouping) to reduce communication costs, such as Bell et al. (CCS'20) and ACORN (USENIX Sec'23), face the risk of adversarial manipulation during the graph construction process.
To address these issues, we propose ClusterGuard, a secure clustered aggregation scheme for federated learning. ClusterGuard leverages Verifiable Random Functions (VRF) to ensure fair and transparent cluster selection and employs a lightweight key-homomorphic masking mechanism, combined with efficient dropout handling, to achieve secure clustered aggregation. Furthermore, ClusterGuard incorporates a dual filtering mechanism based on cosine similarity and norm to effectively detect and mitigate poisoning attacks.
Extensive experiments on standard datasets demonstrate that ClusterGuard achieves over 2x efficiency improvement compared to advanced secure aggregation methods. Even with 20% of clients being malicious, the trained model maintains accuracy comparable to the original model, outperforming state-of-the-art robustness solutions. ClusterGuard provides a more efficient, secure, and robust solution for practical federated learning.
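As a rough sketch of the generic dual-filtering idea (cosine-similarity plus norm checks on client updates), the following numpy snippet is a minimal illustration; the coordinate-wise median reference and the thresholds tau_cos and tau_norm are assumptions made for this example, not ClusterGuard's exact rule.

import numpy as np

def dual_filter(updates, tau_cos=0.0, tau_norm=3.0):
    # updates: one row per client; keep updates that agree in direction and magnitude
    ref = np.median(updates, axis=0)              # robust reference update
    ref_norm = np.linalg.norm(ref) + 1e-12
    kept = []
    for u in updates:
        cos = float(u @ ref) / (np.linalg.norm(u) * ref_norm + 1e-12)
        if cos >= tau_cos and np.linalg.norm(u) <= tau_norm * ref_norm:
            kept.append(u)
    return np.mean(kept, axis=0) if kept else ref

# honest updates cluster around a common direction; scaled or flipped updates are dropped
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
scaled_attack = 100.0 * np.ones(4)                # rejected by the norm check
flipped_attack = -np.ones(4)                      # rejected by the cosine check
agg = dual_filter(np.vstack([honest, scaled_attack, flipped_attack]))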
Hao Kang, Mengce Zheng
ePrint Report
The RSA (Rivest-Shamir-Adleman) cryptosystem is a fundamental algorithm of public key cryptography and is widely used across various information domains. For an RSA modulus represented as $N = pq$, with its factorization remaining unknown, security vulnerabilities arise when attackers exploit the key equation $ed-k(p-1)(q-1)=1$. To enhance the security, Murru and Saettone introduced cubic Pell RSA --- a variant of RSA based on the cubic Pell equation, where the key equation becomes $ed-k(p^2+p+1)(q^2+q+1)=1$. In this paper, we further investigate the security implications surrounding the generalized key equation $eu-(p^2+p+1)(q^2+q+1)v=w$. We present a novel attack strategy aimed at recovering the prime factors $p$ and $q$ under specific conditions satisfied by $u$, $v$, and $w$. Our generalized attack employs lattice-based Coppersmith's techniques and extends several previous attack scenarios, thus deepening the understanding of mathematical cryptanalysis.
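As a small numeric illustration of the cubic Pell key equation, the following Python lines use hypothetical toy primes (far too small for real use) to show that $ed - k(p^2+p+1)(q^2+q+1) = 1$ holds by construction when $d$ is the modular inverse of $e$.

# toy primes; real parameters are of course much larger
p, q = 1009, 1013
phi = (p**2 + p + 1) * (q**2 + q + 1)
e = 65537
d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
k = (e * d - 1) // phi
assert e * d - k * phi == 1  # the key equation e*d - k*(p^2+p+1)*(q^2+q+1) = 1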
Mengce Zheng, Wei Yan
ePrint Report
This paper investigates the Mersenne number-based $\mathsf{AJPS}$ cryptosystem, with a particular focus on its associated hard problem. Specifically, we aim to enhance the existing lattice-based attack on the Mersenne low Hamming ratio search problem. Unlike the previous approach of directly employing a lattice reduction algorithm, we apply the lattice-based method to solve polynomial equations derived from the above problem. We extend the search range for vulnerabilities in weak keys and increase the success probability of the key recovery attack. To validate the efficacy and accuracy of our proposed improvements, we conduct numerical computer experiments. These experiments serve as a concrete validation of the practicality and effectiveness of our improved attack.
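For reference, the underlying problem is usually stated as follows (standard formulation, recalled here for context): given a Mersenne prime $p = 2^n - 1$, a weight bound $h$, and a ratio $R \equiv F \cdot G^{-1} \pmod{p}$, where $F$ and $G$ are $n$-bit integers of Hamming weight $h$, the Mersenne low Hamming ratio search problem asks to recover $(F, G)$.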
Elena Dubrova
ePrint Report
Side-channel attacks exploit information leaked through non-primary channels, such as power consumption, electromagnetic emissions, or timing, to extract sensitive data from cryptographic devices. Over the past three decades, side-channel analysis has evolved into a mature research field with well-established methodologies for analyzing standard cryptographic algorithms like the Advanced Encryption Standard (AES). However, the integration of side-channel analysis with formal methods remains relatively unexplored. In this paper, we present a hybrid attack on AES that combines side-channel analysis with SAT solving. We model AES as a SAT problem and leverage hints about the input and output values of the S-boxes, extracted via profiled deep learning-based power analysis, to solve it. Experimental results on an ATXmega128D4 MCU implementation of AES-128 demonstrate that the SAT-assisted approach consistently recovers the full encryption key from a single trace, captured from devices different from those used for profiling, within one hour. In contrast, without SAT assistance, the success rate remains below 80% after 26 hours of key enumeration.
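To make the role of the side-channel "hints" concrete, the following toy Python snippet (using PRESENT's 4-bit S-box instead of the AES S-box, and a leaked Hamming weight as the hint) shows how a single hint prunes the candidate key space; the real attack instead encodes full AES-128 as CNF and passes such hints to a SAT solver.

# toy 4-bit S-box (PRESENT's), standing in for the AES S-box
SBOX4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
         0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hw(x):
    return bin(x).count("1")

def consistent_keys(pt_nibble, leaked_hw):
    # key nibbles consistent with a leaked Hamming weight of the S-box output
    return [k for k in range(16) if hw(SBOX4[pt_nibble ^ k]) == leaked_hw]

print(consistent_keys(0x3, leaked_hw=1))  # one hint already shrinks 16 candidates to 4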
Ehsan Ebrahimi, Anshu Yadav
ePrint Report
A universal thresholdizer (UT), constructed from a threshold fully homomorphic encryption by Boneh et al. (Crypto 2018), is a general framework for universally thresholdizing many cryptographic schemes. However, their framework is insufficient to construct strongly secure threshold schemes, such as threshold signatures and threshold public-key encryption.
In this paper, we strengthen the security definition for a universal thresholdizer and propose a scheme which satisfies our stronger security notion. Our UT scheme improves Boneh et al.'s construction at the level of threshold fully homomorphic encryption using a key-homomorphic pseudorandom function. We apply our strongly secure UT scheme to construct strongly secure threshold signatures and threshold public-key encryption.
Daniel J. Bernstein, Jolijn Cottaar, Emanuele Di Giandomenico, Kathrin Hövelmanns, Andreas Hülsing, Mikhail Kudinov, Tanja Lange, Mairon Mahzoun, Matthias Meijers, Alex Pellegrini, Alberto Ravagn ...
ePrint Report
This report covers our analysis (security, proofs, efficiency) of the Round-2 candidates to the Korean post-quantum competition KpqC. Signature systems covered in the report are AIMer, HAETAE, MQ-Sign, and NCC-Sign; KEMs covered are NTRU+, Paloma, REDOG, and SMAUG-T.
26 December 2024
Michael Klooß, Michael Reichle
ePrint Report
Blind signatures are an important primitive for privacy-preserving technologies. To date, highly efficient pairing-free constructions rely on the random oracle model and, additionally, on a strong assumption such as an interactive assumption or the algebraic group model.
In contrast, for signatures we know many efficient constructions that rely on the random oracle model and standard assumptions. In this work, we develop techniques to close this gap. Compared to the most efficient pairing-free AGM-based blind signature by Crites et al. (Crypto 2023), our construction has a relative overhead of only $3\times$ and $2\times$ in terms of communication and signature size, respectively, and it is provable in the random oracle model under the DDH assumption. With one additional move and one additional $\mathbb{Z}_p$ element, we also achieve one-more strong unforgeability.
Our construction is inspired by the recent works by Chairattana-Apirom, Tessaro, and Zhu (Crypto 2024) and Klooß, Reichle, and Wagner (Asiacrypt 2024), and we develop a tailored technique to circumvent the sources of inefficiency in their constructions. Concretely, we achieve signature and communication size of $192$ B and $608$ B, respectively.
Nicholas Brandt, Dennis Hofheinz, Michael Klooß, Michael Reichle
ePrint Report
We construct the first blind signature scheme that achieves all of the following properties simultaneously:
- it is tightly secure under a standard (i.e., non-interactive, non-\(q\)-type) computational assumption,
- it does not require pairings,
- it does not rely on generic, non-black-box techniques (like generic NIZK proofs).
The third property enables a reasonably efficient solution, and in fact signatures in our scheme comprise 10 group elements and 29 \(\mathbb{Z}_p\)-elements.
Our scheme starts from a pairing-based non-blind signature scheme (Abe et al., JoC 2023), and uses recent techniques of Chairattana-Apirom, Tessaro, and Zhu (CRYPTO 2024) to replace the pairings used in this scheme with non-interactive zero-knowledge proofs in the random oracle model. This conversion is not generic or straightforward (also because the mentioned previous works have converted only significantly simpler signature schemes), and we are required to improve upon and innovate existing techniques in several places.
As an interesting side note, and unlike previous works, our techniques only require a non-programmable random oracle, and our signature scheme achieves predicate blindness (which means that the user can prove statements about the signed message during the signing process).
Samuel Lavery
ePrint Report
We present a novel digital signature scheme grounded in non-commutative cryptography and implemented over a bilinear matrix group platform. At the core of our design is a unique equivocation function that obfuscates intermediate elements, effectively concealing outputs and minimizing observable information leakage. To the best of our knowledge, this is the first digital signature scheme to combine information-theoretic security with computational hardness, relying on a challenging instance of the Non-Abelian Hidden Subgroup Problem (NAHSP) and strengthened by practical guarantees. This dual-layered security approach ensures robustness against both classical and quantum adversaries while maintaining communication overheads competitive with RSA. Our work represents a significant advancement toward efficient, quantum-resilient digital signatures for real-world applications. This paper is an early pre-release intended to invite collaboration and feedback. The work is presented for research purposes only and is not intended for use in production systems.
Yuval Ishai, Hanjun Li, Huijia Lin
ePrint Report
A garbling scheme transforms a program (e.g., circuit) $C$ into a garbled program $\hat{C}$, along with a pair of short keys $(k_{i,0},k_{i,1})$ for each input bit $x_i$, such that $(C,\hat{C}, \{k_{i,x_i}\})$ can be used to recover the output $z = C(x)$ while revealing nothing else about the input $x$. This can be naturally generalized to partial garbling, where part of the input is public, and a computation $z = C(x, y)$ is decomposed into a public part $C_{\text{pub}}(x)$, depending only on the public input $x$, and a private part $z = C_{\text{priv}}(C_{\text{pub}}(x), y)$ that also involves a private input $y$.
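As a point of reference for this interface (and emphatically not the paper's succinct construction), a textbook Yao-style garbling of a single AND gate can be sketched in a few lines of Python; the hash-based encryption and the zero-padding redundancy used to identify the correct row are standard illustrative choices.

import os, random, hashlib

def H(*parts):
    return hashlib.sha256(b"".join(parts)).digest()  # 32-byte pad

def garble_and_gate():
    keys = {(i, b): os.urandom(16) for i in (0, 1) for b in (0, 1)}  # input wire keys
    out = {b: os.urandom(16) for b in (0, 1)}                        # output wire keys
    table = []
    for a in (0, 1):
        for b in (0, 1):
            pad = H(keys[(0, a)], keys[(1, b)])
            pt = out[a & b] + b"\x00" * 16        # output key plus redundancy
            table.append(bytes(x ^ y for x, y in zip(pt, pad)))
    random.shuffle(table)                         # hide which row is which
    return keys, out, table

def evaluate(table, ka, kb):
    pad = H(ka, kb)
    for row in table:
        pt = bytes(x ^ y for x, y in zip(row, pad))
        if pt[16:] == b"\x00" * 16:               # redundancy identifies the right row
            return pt[:16]

keys, out, table = garble_and_gate()
assert evaluate(table, keys[(0, 1)], keys[(1, 1)]) == out[1]  # AND(1,1) = 1
assert evaluate(table, keys[(0, 1)], keys[(1, 0)]) == out[0]  # AND(1,0) = 0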
A key challenge in garbling is to achieve succinctness, where the size of the garbled program may grow only with the security parameter and (possibly) the output length, but not with the size of $C$. Prior work achieved this strong notion of succinctness using heavy tools such as indistinguishability obfuscation (iO) or a combination of fully homomorphic encryption and attribute-based encryption.
In this work, we introduce new succinct garbling schemes based on variants of standard group-based assumptions. Our approach, being different from prior methods, offers a promising pathway towards practical succinct garbling. Specifically, we construct:
- A succinct partial garbling scheme for general circuits, where the garbled circuit size scales linearly with the private computation $|C_{\text{priv}}|$ and is independent of the public computation $|C_{\text{pub}}|$. This implies fully succinct conditional disclosure of secrets (CDS) protocols for circuits.
- Succinct (fully hiding) garbling schemes for simple types of programs, including truth tables, bounded-length branching programs (capturing decision trees and DFAs as special cases), and degree-2 polynomials, where the garbled program size is independent of the program size. This implies succinct private simultaneous messages (PSM) protocols for the same programs.
Our succinct partial garbling scheme can be based on a circular-security variant of the power-DDH assumption, which holds in the generic group model, or alternatively on the key-dependent message security of the Damgård-Jurik encryption. For bounded-depth circuits or the aforementioned simple programs, we avoid circular-security assumptions entirely.
At the heart of our technical approach is a new computational flavor of algebraic homomorphic MAC (aHMAC), for which we obtain group-based constructions building on techniques from the literature on homomorphic secret sharing. Beyond succinct garbling, we demonstrate the utility of aHMAC by constructing constrained pseudorandom functions (CPRFs) for general constraint circuits from group-based assumptions. Previous CPRF constructions were limited to $\mathsf{NC}^1$ circuits or alternatively relied on lattices or iO.
ChihYun Chuang, IHung Hsu, TingFang Lee
ePrint Report
RSA is widely used in modern cryptographic practice, and certain RSA-based protocols rely on the secrecy of $p$ and $q$. A common approach is to use secure multiparty computation to address the privacy concerns of $p$ and $q$.
In the specific setting of distributed RSA modulus generation protocols, the biprimality test for Blum integers $N=pq$, where $p\equiv q\equiv 3 \mod 4$ are two primes, proposed by Boneh and Franklin in 2001, is the most commonly used. Over the past 20 years, the worst-case acceptance rate of this test has been consistently assumed to be $1/2$ under the condition $\gcd(pq,p+q-1)=1$.
In this paper, we show that for the Boneh-Franklin test, the acceptance probability is at most $1/4$ rather than $1/2$, except in the case where $p = q = 3$; moreover, $1/4$ is also the tightest upper bound. This result enhances the effectiveness of applying the Boneh-Franklin test: achieving the same soundness for the RSA modulus requires only half the commonly recognized number of iterations.
Furthermore, we propose two types of Lucas biprimality tests. In the worst case, the acceptance rate of one of the proposed tests is at most $1/4 + 1.25/(p_{\min} -3)$, where $p_{\min}$ is the smallest prime factor of $N$. However, a simulation study suggests that this test is generally more efficient than the Boneh-Franklin test at detecting when $N$ is not an RSA modulus.
The second type of Lucas test, though less efficient, can be applied to RSA moduli with arbitrary primes $p$ and $q$. Nevertheless, if the soundness error for generating an RSA modulus is set at approximately $2^{-80}$, it leaks at most $46$ quadratic symbol values, regardless of the public key length. We also design corresponding protocols for both tests and validate their resilience against semi-honest adversaries, which can be applied to most known distributed RSA modulus generation protocols. After thoroughly analyzing and comparing well-known protocols, including the variant Miller-Rabin test used by Burkhardt et al. (CCS 2023), the Boneh-Franklin test, and our proposed Lucas-type tests, our proposed Lucas test is also highly competitive in verifying whether $N$ is an RSA modulus.
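For concreteness, a centralized (non-distributed) Python sketch of one Boneh-Franklin test iteration is shown below; in the actual protocols $p$ and $q$ are secret-shared and the exponentiation is computed distributively, so the code only illustrates the arithmetic of a single iteration.

import math, random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def boneh_franklin_iteration(N, p, q):
    # one iteration for a Blum-integer candidate N = p*q with p = q = 3 (mod 4)
    assert N == p * q and p % 4 == 3 and q % 4 == 3
    assert math.gcd(N, p + q - 1) == 1
    g = random.randrange(2, N)
    if jacobi(g, N) != 1:
        return None                           # resample g; the test uses Jacobi(g, N) = +1
    t = pow(g, (N - p - q + 1) // 4, N)       # exponent equals (p-1)(q-1)/4
    return t in (1, N - 1)                    # accept iff g^((p-1)(q-1)/4) = +-1 (mod N)

# a true biprime is always accepted; per the abstract, a non-biprime is accepted
# with probability at most 1/4 per iteration (except for p = q = 3)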
Alexander Bienstock, Daniel Escudero, Antigoni Polychroniadou
ePrint Report
The \emph{Fluid} multiparty computation (MPC) model, introduced in (Choudhuri \emph{et al.} CRYPTO 2021), addresses dynamic scenarios where participants can join or leave computations between rounds. Communication complexity initially stood at $\Omega(n^2)$ elements per gate, where $n$ is the number of parties in a committee online at a time. This held for both statistical security (honest majority) and computational security (dishonest majority) in (Choudhuri \emph{et al.}~CRYPTO'21) and (Rachuri and Scholl, CRYPTO'22), respectively. The work of (Bienstock \emph{et al.}~CRYPTO'23) improved communication to $O(n)$ elements per gate. However, it is important to note that the perfectly secure setting with one-third corruptions per committee has only recently been addressed in the work of (David \emph{et al.}~CRYPTO'23). Notably, their contribution marked a significant advancement in the Fluid MPC literature by introducing guaranteed output delivery. However, this achievement comes at the cost of prohibitively expensive communication, which scales to $\Omega(n^9)$ elements per gate.
In this work, we study the realm of perfectly secure Fluid MPC under one-third active corruptions. Our primary focus lies in proposing efficient protocols that embrace the concept of security with abort. Towards this, we design a protocol for perfectly secure Fluid MPC that requires only \emph{linear} communication of $O(n)$ elements per gate, matching the communication of the non-Fluid setting. Our results show that, as in the case of computational and statistical security, perfect security with abort for Fluid MPC comes "for free" (asymptotically linear in $n$) with respect to traditional non-Fluid MPC, marking a substantial leap forward in large-scale dynamic computations such as Fluid MPC.
24 December 2024
Giuseppe D'Alconzo, Andre Esser, Andrea Gangemi, Carlo Sanna
ePrint Report
A partial key exposure attack is a key recovery attack where an adversary obtains a priori partial knowledge of the secret key, e.g., through side-channel leakage. While for a long time post-quantum cryptosystems, unlike RSA, have been believed to be resistant to such attacks, recent results by Esser, May, Verbel, and Wen (CRYPTO ’22), and by Kirshanova and May (SCN ’22), have refuted this belief.
In this work, we focus on partial key exposure attacks in the context of rank-metric-based schemes, particularly targeting the RYDE, MIRA, and MiRitH digital signature schemes, which are active candidates in the NIST post-quantum cryptography standardization process. We demonstrate that, similar to the RSA case, the secret key in RYDE can be recovered from a constant fraction of its bits. Specifically, for NIST category I parameters, our attacks remain efficient even when less than 25% of the key material is leaked. Interestingly, our attacks lead to a natural improvement of the best generic attack on RYDE without partial knowledge, reducing security levels by up to 9 bits. For MIRA and MiRitH our attacks remain efficient as long as roughly 57%-60% of the secret key material is leaked.
Additionally, we initiate the study of partial exposure of the witness in constructions following the popular MPCitH (MPC-in-the-Head) paradigm. We show a generic reduction from recovering RYDE and MIRA’s witness to the MinRank problem, which again leads to efficient key recovery from constant fractions of the secret witness in both cases.
Wenquan Zhou, An Wang, Yaoling Ding, Congming Wei, Jingqi Zhang, Liehuang Zhu
ePrint Report
Side-channel analysis is a powerful technique to extract secret data from cryptographic devices. However, this task heavily relies on experts and specialized tools, particularly in the case of simple power analysis (SPA). Meanwhile, ChatGPT, a leading example of large language models, has attracted great attention and been widely applied for assisting users with complex tasks. Despite this, ChatGPT's capabilities for fully automated SPA, where prompts and traces are input only once, have yet to be systematically explored and improved. In this paper, we introduce a novel prompt template with three expert strategies and conduct a large-scale evaluation of ChatGPT's capabilities for SPA. We establish a dataset comprising seven sets of real power traces from various implementations of public-key cryptosystems, including RSA, ECC, and Kyber, as well as eighteen sets of simulated power traces that illustrate typical SPA leakage patterns. The results indicate that ChatGPT cannot be directly used for SPA. However, by applying the expert strategies, we successfully recovered the private keys for all twenty-five traces, which demonstrates that non-experts can use ChatGPT with our expert strategies to perform fully automated SPA.
Deepak Kumar Dalai, Krishna Mallick, Pierrick Méaux
ePrint Report
The construction of Boolean functions with good cryptographic properties over subsets of vectors with fixed Hamming weight is significant for lightweight stream ciphers like FLIP. In this article, we propose a general method to construct a class of Weightwise Almost Perfectly Balanced (WAPB) Boolean functions using the action of a cyclic permutation group on $\mathbb{F}_2^n$. This class generalizes the Weightwise Perfectly Balanced (WPB) $2^m$-variable Boolean function construction by Liu and Mesnager to any $n$. We show how to bound the nonlinearity and weightwise nonlinearities of functions from this construction. Additionally, we explore two significant permutation groups, $\langle \psi \rangle$ and $\langle \sigma \rangle$, where $\psi$ is a binary-cycle permutation and $\sigma$ is a rotation. We theoretically analyze the cryptographic properties of the WAPB functions derived from these permutations and experimentally evaluate their nonlinearity parameters for $n$ between 4 and 10.
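For context, the balancedness notions involved are the standard ones (recalled here, not restated in the abstract): writing $E_{k,n} = \{x \in \mathbb{F}_2^n : \mathrm{wt}(x) = k\}$, a function $f$ is weightwise perfectly balanced (WPB) if $|\mathrm{supp}(f) \cap E_{k,n}| = \binom{n}{k}/2$ for every $1 \le k \le n-1$ (with the usual convention $f(0,\dots,0)=0$ and $f(1,\dots,1)=1$), and weightwise almost perfectly balanced (WAPB) if $|\mathrm{supp}(f) \cap E_{k,n}|$ equals $\lfloor \binom{n}{k}/2 \rfloor$ or $\lceil \binom{n}{k}/2 \rceil$ for every such $k$.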