20 February 2025
Sabyasachi Dey, Subhamoy Maitra, Santanu Sarkar, Nitin Kumar Sharma
Over the past decade and a half, cryptanalytic techniques for Salsa20 have been increasingly refined, largely following the overarching concept of Probabilistically Neutral Bits (PNBs) by Aumasson et al. (FSE 2008). In this paper, we present a novel criterion for choosing key-$\mathcal{IV}$ pairs using certain 2-round criteria and connect it with clever tweaks of existing techniques related to Probabilistically Independent $\mathcal{IV}$ bits (earlier used for ARX ciphers, but not for Salsa20) and the well-studied PNBs. Through a detailed examination of the state matrix after the initial rounds of Salsa20, we introduce the first-ever cryptanalysis of Salsa20 exceeding $8$ rounds. Specifically, Salsa20/$8.5$, with $256$ secret key bits, can be cryptanalyzed with a time complexity of $2^{245.84}$ and data amounting to $2^{99.47}$. Further, the sharpness of our attack is highlighted by the fact that Salsa20/$8$ can be broken in time $2^{186.01}$ with data $2^{99.73}$, a significant improvement over the best-known result of Coutinho et al. (Journal of Cryptology, 2023: time $2^{217.14}$ and data $2^{113.14}$). Refinements related to backward biases for PNBs are also instrumental in achieving these improvements. We also provide instances of how these ideas improve the cryptanalysis of the $128$-bit versions. In the process, a few critical points are raised about some existing state-of-the-art works in this direction, and in those cases their time and data estimates are revisited to establish the correct complexities.
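A minimal Python sketch (illustrative only, not the paper's method) of the statistical signal that PNB-style attacks on Salsa20 are built on: measuring a single-bit differential bias after a few rounds. The input/output difference positions used here are the classic ones from Aumasson et al. (FSE 2008); the measured value over fully random states is only indicative.

```python
import random

MASK = 0xFFFFFFFF

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarterround(y0, y1, y2, y3):
    y1 ^= rotl((y0 + y3) & MASK, 7)
    y2 ^= rotl((y1 + y0) & MASK, 9)
    y3 ^= rotl((y2 + y1) & MASK, 13)
    y0 ^= rotl((y3 + y2) & MASK, 18)
    return y0, y1, y2, y3

def double_round(x):
    # column round then row round on the 4x4 word matrix
    for (a, b, c, d) in [(0, 4, 8, 12), (5, 9, 13, 1),
                         (10, 14, 2, 6), (15, 3, 7, 11)]:
        x[a], x[b], x[c], x[d] = quarterround(x[a], x[b], x[c], x[d])
    for (a, b, c, d) in [(0, 1, 2, 3), (5, 6, 7, 4),
                         (10, 11, 8, 9), (15, 12, 13, 14)]:
        x[a], x[b], x[c], x[d] = quarterround(x[a], x[b], x[c], x[d])

def salsa_rounds(state, rounds):
    x = list(state)
    for _ in range(rounds // 2):
        double_round(x)
    return x

# Bias of output bit (word 1, bit 14) after 4 rounds, for an input
# difference in (word 7, bit 31) -- word 7 is an IV word in Salsa20.
trials, count = 1 << 14, 0
for _ in range(trials):
    s = [random.getrandbits(32) for _ in range(16)]
    t = list(s)
    t[7] ^= 1 << 31
    a = salsa_rounds(s, 4)
    b = salsa_rounds(t, 4)
    count += ((a[1] ^ b[1]) >> 14) & 1
print("observed bias:", count / trials - 0.5)  # noticeably nonzero
```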
David Gerault, Anna Hambitzer, Eyal Ronen, Adi Shamir
The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose "bits" are arbitrary real numbers.
In this paper we lay the foundations of this new theory, defining the meaning of correctness and security for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by using such nonstandard inputs. We tested our attack in the case of full-round AES-128, and had a $100\%$ success rate in finding $1000$ randomly chosen keys. Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our protective technique has very low overhead (a constant number of additional layers and a linear number of additional neurons), and is completely practical.
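A toy illustration (not the paper's actual attack) of the discrete/continuous mismatch it exploits: a standard ReLU implementation of XOR is exact on Boolean inputs, but on fractional "bits" it becomes a piecewise-linear map whose structure an attacker can probe.

```python
def relu(x):
    return max(0.0, x)

def xor_relu(a, b):
    # exact for a, b in {0, 1}: ReLU(a+b) - 2*ReLU(a+b-1)
    return relu(a + b) - 2.0 * relu(a + b - 1.0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert xor_relu(a, b) == a ^ b

# Off the Boolean grid the "XOR" is just affine in a + b on each linear
# region, so probing with real inputs reveals slopes and breakpoints
# that a bit-oriented attacker never sees.
print(xor_relu(0.5, 0.5))   # 1.0, although 0 xor 0 = 0 and 1 xor 1 = 0
print(xor_relu(0.25, 0.5))  # 0.75: linear below the breakpoint
```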
Clémence Chevignard, Guilhem Mureau, Thomas Espitau, Alice Pellet-Mary, Heorhii Pliatsok, Alexandre Wallet
In this article we present a non-uniform reduction from rank-2 module-LIP over Complex Multiplication fields to a variant of the Principal Ideal Problem in a fitting quaternion algebra. This reduction is classical, deterministic, and polynomial-time in the size of the inputs. The quaternion algebra in which we need to solve the variant of the principal ideal problem depends on the parameters of the module-LIP problem, but not on the problem's instance. Our reduction requires the knowledge of some special elements of this quaternion algebra, which is why it is non-uniform.
In some particular cases, these elements can be computed in polynomial time, making the reduction uniform. This is the case for the Hawk signature scheme: we show that breaking Hawk is no harder than solving a variant of the principal ideal problem in a fixed quaternion algebra (and this reduction is uniform).
Ignacio Cascudo, Anamaria Costache, Daniele Cozzo, Dario Fiore, Antonio Guimarães, Eduardo Soria-Vazquez
We address the problem of proving the validity of computation on ciphertexts of homomorphic encryption (HE) schemes, a feature that enables outsourcing of data and computation while ensuring both data privacy and integrity.
We propose a new solution that handles computations in RingLWE-based schemes, particularly the CKKS scheme for approximate arithmetic. Our approach efficiently handles ciphertext arithmetic in the polynomial ring $R_q$ without emulation overhead and manages ciphertext-maintenance operations, such as modulus switching, key switching, and rescaling, at small cost.
Our main result is a succinct argument that efficiently handles arithmetic computations and range checks over the ring $R_q$. To build this argument system, we construct new polynomial interactive oracle proofs (PIOPs) and multilinear polynomial commitments supporting polynomials over $R_q$, unlike prior work which focused on finite fields. We validate the concrete complexity of our approach through implementation and experimentation. Compared to the current state-of-the-art on verifiable HE for RNS schemes, we present similar performance for small circuits while being able to efficiently scale to larger ones, which was a major challenge for previous constructions as it requires verifying procedures such as relinearization.
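A minimal sketch (toy parameters, schoolbook complexity) of the ciphertext-ring arithmetic the argument system above must prove: multiplication in $R_q = \mathbb{Z}_q[X]/(X^N + 1)$, the ring underlying RingLWE schemes such as CKKS. Real implementations use the NTT; this only shows the negacyclic reduction.

```python
N, q = 8, 97  # toy parameters; real CKKS uses N >= 2**13 and large q

def ring_mul(a, b):
    # negacyclic convolution: wrap-around picks up a sign since X^N = -1
    res = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:
                res[k - N] = (res[k - N] - a[i] * b[j]) % q
    return res

a = [1, 2, 0, 0, 0, 0, 0, 3]  # 1 + 2X + 3X^7
b = [0, 1] + [0] * 6          # X
print(ring_mul(a, b))         # X + 2X^2 + 3X^8 = -3 + X + 2X^2 mod X^8+1
```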
Mohammed Barhoush, Ryo Nishimaki, Takashi Yamakawa
We investigate two natural relaxations of quantum cryptographic assumptions. First, we examine primitives such as pseudorandom generators ($\text{PRG}$s) and pseudorandom states ($\text{PRS}$s), extended with quantum input sampling, which we term $\text{PRG}^{qs}$ and $\text{PRS}^{qs}$. In these primitives, the input is sampled via a quantum algorithm rather than uniformly at random. The second relaxation, $\bot$-pseudodeterminism, allows the generator to output $\bot$ on an inverse-polynomial fraction of inputs.
We demonstrate an equivalence between (bounded-query) logarithmic-sized $\text{PRS}^{qs}$, logarithmic-sized $\text{PRS}^{qs}$, and $\text{PRG}^{qs}$. Notably, such an equivalence remains unknown for the uniform key sampling versions of these primitives. Furthermore, we establish that $\text{PRG}^{qs}$ can be constructed from $\bot$-pseudodeterministic $\text{PRG}$s ($\bot\text{-PRG}$s).
To further justify our exploration, we present two separation results. First, we examine the relationship between $\bot$-pseudodeterministic notions and their deterministic counterparts. We show that there does not exist a black-box construction of a one-way state generator $(\text{OWSG})$ from a $\bot\text{-PRG}$, indicating that $\bot$-pseudodeterministic primitives may be inherently weaker than their deterministic counterparts. Second, we explore the distinction between quantum and uniform input sampling. We prove that there does not exist a black-box construction of a $\bot$-pseudodeterministic $\text{OWSG}$ from a $\text{PRF}^{qs}$, suggesting that primitives relying on quantum input sampling may be weaker than those using traditional uniform sampling. Given the broad cryptographic applicability of $\text{PRF}^{qs}$s and $\bot\text{-PRG}$s, these separation results yield numerous new insights into the hierarchy of primitives within MicroCrypt.
Ali Dogan, Sermin Kocaman
A Decentralized Autonomous Organization (DAO) operates without a central entity, being owned and governed collectively by its members. In such an organization, routine decisions are carried out automatically through smart contracts, while members vote on unforeseen issues. Scalability in decision-making through voting on proposals is essential to accommodate a growing number of members without sacrificing security. This paper addresses this challenge by introducing a scalable and secure DAO voting system that ensures security through Groth16 zk-SNARKs and the exponential ElGamal encryption algorithm, while achieving scalability by verifiably delegating heavy computations to untrusted entities. Off-chain computation over the exponential ElGamal homomorphic encryption scheme reduces the computational cost on the blockchain, while Groth16 proofs keep that off-chain computation robust without revealing any further details. Specifically, the Groth16 proof guarantees that (i) the encrypted votes accurately reflect the voter's voting power, ensuring no unauthorized weight manipulation; (ii) only valid non-negative vote values are encrypted, preventing unintended or malicious vote tampering; and (iii) the homomorphic summation is performed correctly. The implementation shows that the proofs are verified remarkably fast, making the S2DV protocol highly suitable for scalable DAO voting while preserving the security of the election.
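A minimal sketch (toy parameters, no zk-SNARK layer) of the exponential ElGamal homomorphic tallying that the system above builds on: votes are encrypted as $g^v$, ciphertexts are multiplied to add votes, and the small final tally is recovered by a bounded discrete-log search. The Groth16 proofs described above would additionally certify each step.

```python
import random

p = 20011  # toy prime; real deployments use large elliptic-curve groups
g = 3      # assumed generator of a suitable subgroup

sk = random.randrange(1, p - 1)
pk = pow(g, sk, p)

def encrypt(v):
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, v, p) * pow(pk, r, p) % p)

def add(c1, c2):
    # componentwise product = encryption of the sum of the plaintexts
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def decrypt_tally(c, max_tally):
    m = c[1] * pow(c[0], -sk, p) % p  # recovers g^tally
    for t in range(max_tally + 1):    # brute-force small discrete log
        if pow(g, t, p) == m:
            return t

votes = [1, 0, 3, 2]  # weighted votes
total = encrypt(votes[0])
for v in votes[1:]:
    total = add(total, encrypt(v))
print(decrypt_tally(total, 100))  # 6
```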
Yifan Song, Xiaxi Ye
In this work, we consider the communication complexity of MPC protocols in the honest majority setting achieving malicious security, in both the information-theoretic and computational settings. On the one hand, we study the possibility of efficiently basing honest-majority MPC protocols on the oblivious linear evaluation (OLE)-hybrid model with information-theoretic security. More precisely, we instantiate the preprocessing phase of the recent work Sharing Transformation (Goyal, Polychroniadou, and Song, CRYPTO 2022) assuming random OLE correlations. Notably, we are able to prepare packed Beaver triples with malicious security at an amortized communication of $O(n)$ field elements plus $O(n)$ OLE correlations per packed Beaver triple, which is the best known result. To efficiently prepare the random OLE correlations themselves, we resort to IKNP-style OT extension protocols (Ishai et al., CRYPTO 2003) in the random oracle model.
On the other hand, we derive a communication lower bound for preparing OLE correlations in the information-theoretic setting based on negative results due to Damgård, Larsen, and Nielsen (CRYPTO 2019).
Combining our positive result with the work of Goyal, Polychroniadou, and Song (CRYPTO 2022), we derive an MPC protocol with amortized communication of $O(\ell+\kappa)$ elements per gate in the random oracle model achieving malicious security, where $\ell$ denotes the length of a field element and $\kappa$ is the security parameter.
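A minimal sketch (semi-honest, unpacked, toy field) of how the Beaver triples prepared in the preprocessing phase above are consumed: given shares of a random triple $(a, b, c)$ with $c = ab$, one secret-shared multiplication costs only the opening of two masked values. The paper's packed triples amortize this across many gates; maliciously secure openings and the OLE-based preparation are omitted.

```python
import random

P = 2**61 - 1  # toy prime field
n = 3          # parties; additive sharing for illustration

def share(x):
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def open_(shares):
    return sum(shares) % P

def beaver_mul(x_sh, y_sh):
    # triple generated locally here; in practice it comes from preprocessing
    a, b = random.randrange(P), random.randrange(P)
    a_sh, b_sh, c_sh = share(a), share(b), share(a * b % P)
    # parties open d = x - a and e = y - b
    d = open_([(x - u) % P for x, u in zip(x_sh, a_sh)])
    e = open_([(y - v) % P for y, v in zip(y_sh, b_sh)])
    # [xy] = d*e + d*[b] + e*[a] + [c]; the public term d*e goes to party 0
    z_sh = [(d * v + e * u + w) % P for u, v, w in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

x_sh, y_sh = share(12345), share(67890)
assert open_(beaver_mul(x_sh, y_sh)) == 12345 * 67890 % P
```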
19 February 2025
Universitat Politècnica de Catalunya (UPC)- BarcelonaTECH
Several faculty positions within the Cybersecurity domain are available at the Universitat Politècnica de Catalunya - BarcelonaTech (UPC), School of Engineering in Manresa. UPC is a leading research university in Spain, with more than 5,000 students in Information and Communication Technology-related fields. It is the top Spanish university in terms of participation in EU-funded research projects.
Apart from conducting research within their area, the successful candidates are expected to teach in the recently launched MSc program in the Cybersecurity, AI, and IoT areas, funded through the European Commission's Digital Europe programme. The development of the MSc program will be part of a European project, in which the candidates will participate actively.
Skills/Qualifications:
- PhD degree in the area of Computer Science, Computer Engineering, or a related field
- Written and spoken proficiency in English (Spanish or Catalan is a plus)
- Proven scientific publication record
- Experience in academic teaching (classes and lab courses) and supervising or co-supervising of academic theses
We offer:
- A friendly working environment
- Hybrid work modality with up to 40% home office option
- Wide range of internal and external training opportunities, various career options
- Nice climate
Closing date for applications:
Contact: Ilker Demirkol (ilker.demirkol@upc.edu)
Jules Baudrin, Sonia Belaïd, Nicolas Bon, Christina Boura, Anne Canteaut, Gaëtan Leurent, Pascal Paillier, Léo Perrin, Matthieu Rivain, Yann Rotella, Samuel Tap
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without requiring decryption, ensuring data privacy during processing. However, FHE introduces a significant expansion of ciphertext sizes compared to plaintexts, which results in higher communication costs. A practical solution to mitigate this issue is transciphering, where only the master key is homomorphically encrypted, while the actual data is encrypted using a symmetric cipher, usually a stream cipher. The server then homomorphically evaluates the stream cipher to convert the encrypted data into a homomorphically encrypted form.
We introduce Transistor, a stream cipher specifically designed for efficient homomorphic evaluation within the TFHE scheme, a widely used FHE framework known for its fast bootstrapping and ability to handle low-precision data. Transistor operates over $\mathbb{F}_{17}$, a field chosen to optimize TFHE performance. Its components are carefully engineered to both control noise growth and provide strong security guarantees. First, a simple TFHE-friendly implementation technique for LFSRs allows us to use such components to cheaply increase the state size. At the same time, a small Finite State Machine is the only part of the state updated non-linearly, each non-linear operation corresponding in TFHE to a rather expensive Programmable Bootstrapping. This update is done using an AES-round-like transformation. In contrast to other stream ciphers like SNOW or LEX, our construction comes with information-theoretic security arguments proving that an attacker cannot obtain any information about the secret key from three or fewer consecutive keystream outputs. These information-theoretic arguments are then combined with a thorough analysis of potential correlations to bound the minimal keystream length required for recovering the secret key.
Our implementation of Transistor significantly outperforms the state of the art of TFHE transciphering, achieving a throughput of over 60 bits/s on a standard CPU, all while avoiding the need for an expensive initialization process.
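A minimal sketch (toy length, hypothetical taps and state, not Transistor's actual parameters) of an LFSR over $\mathbb{F}_{17}$ of the kind the cipher uses to grow its state cheaply, combined with a small nonlinear filter standing in for the FSM. In the real design each nonlinear operation maps to one TFHE programmable bootstrapping, which is why the nonlinear part is kept deliberately small.

```python
F = 17

def lfsr_clock(state, taps):
    # Fibonacci-style LFSR over F_17: feedback is a linear tap combination
    fb = sum(t * s for t, s in zip(taps, state)) % F
    return state[1:] + [fb], state[0]

taps = [3, 0, 5, 1]   # hypothetical feedback coefficients
state = [1, 2, 3, 4]  # hypothetical key-derived initial state

keystream = []
for _ in range(8):
    state, x = lfsr_clock(state, taps)
    keystream.append(pow(x, 3, F))  # toy nonlinear filter (x -> x^3 on F_17)
print(keystream)
```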
Anasuya Acharya, Karen Azari, Mirza Ahad Baig, Dennis Hofheinz, Chethan Kamath
Garbling is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since the first construction by Yao (FOCS’82, ’86), a line of work has concerned itself with reducing the communication and computational complexity of that construction. One of the most efficient garbling schemes presently is the ‘Half Gates’ scheme by Zahur, Rosulek, and Evans (Eurocrypt’15). Despite its widespread adoption, the provable security of this scheme has been based on assumptions whose only instantiations are in idealized models. For example, in their original paper, Zahur, Rosulek, and Evans showed that hash functions satisfying a notion called circular correlation robustness (CCR) suffice for this task, and then proved that CCR-secure hash functions can be instantiated in the random permutation model. In this work, we show how to securely instantiate the Half Gates scheme in the standard model. To this end, we first show how this scheme can be securely instantiated given a family of weak CCR hash functions, a notion that we introduce. Furthermore, we show how a weak CCR hash function can be used to securely instantiate other efficient garbling schemes, namely the ones by Rosulek and Roy (Crypto’21) and Heath (Eurocrypt’24). Thus we believe this notion to be of independent interest. Finally, we construct such weak CCR hash functions using indistinguishability obfuscation and one-way functions. The security proof of this construction constitutes our main technical contribution. While our construction is not practical, it serves as a proof of concept supporting the soundness of these garbling schemes, which we regard as particularly important given the recent initiative by NIST to standardize garbling, and the optimizations in Half Gates being potentially adopted.
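A minimal sketch (hash-based, four ciphertexts, no point-and-permute) of the classical garbled AND gate that Half Gates improves upon: each wire carries two random labels and the gate table encrypts the output label under each pair of input labels. Half Gates reduces the four ciphertexts below to two using a CCR hash, which is where the weak-CCR notion introduced above comes into play. The validity check here is a stand-in; real schemes use appended redundancy or permute bits.

```python
import os, hashlib, random

def H(a, b):
    return hashlib.sha256(a + b).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# two 32-byte labels per wire, one per truth value
wa = [os.urandom(32), os.urandom(32)]
wb = [os.urandom(32), os.urandom(32)]
wc = [os.urandom(32), os.urandom(32)]

# one ciphertext per input combination, masking the right output label
table = [xor(H(wa[x], wb[y]), wc[x & y]) for x in (0, 1) for y in (0, 1)]
random.shuffle(table)  # hide which row is which

def evaluate(la, lb):
    # the evaluator holds one label per input wire and tries each row;
    # wrong rows yield garbage that matches a real label only negligibly
    for row in table:
        cand = xor(H(la, lb), row)
        if cand in wc:  # toy validity check, for illustration only
            return wc.index(cand)

assert evaluate(wa[1], wb[1]) == 1
assert evaluate(wa[1], wb[0]) == 0
```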
Bill Allombert, Alice Pellet-Mary, Wessel van Woerden
The rank-$2$ module-LIP problem was introduced in cryptography by (Ducas, Postlethwaite, Pulles, van Woerden, Asiacrypt 2022), to construct the highly performant HAWK scheme. A first cryptanalytic work by (Mureau, Pellet-Mary, Pliatsok, Wallet, Eurocrypt 2024) showed a heuristic polynomial time attack against the rank-$2$ module-LIP problem over totally real number fields. While mathematically interesting, this attack focuses on number fields that are not relevant for cryptography. The main families of fields used in cryptography are the highly predominant cyclotomic fields (used for instance in the HAWK scheme), as well as the NTRU Prime fields, used for instance in the eponymous NTRU Prime scheme (Bernstein, Chuengsatiansup, Lange, van Vredendaal, SAC 2017).
In this work, we generalize the attack of Mureau et al. against rank-$2$ module-LIP to the family of all number fields with at least one real embedding, which contains the NTRU Prime fields. We present three variants of our attack. First, a heuristic one that runs in quantum polynomial time. Second, under the extra assumption that the defining polynomial of the field $K$ has a $2$-transitive Galois group (which is the case for the NTRU Prime fields), a provable attack that also runs in quantum polynomial time. Third, with the same $2$-transitivity assumption, a heuristic attack that runs in classical polynomial time. For the latter we use a generalization of the Gentry-Szydlo algorithm to any number field, which might be of independent interest.
Dan Boneh, Benedikt Bünz, Kartik Nayak, Lior Rotem, Victor Shoup
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption.
Our study includes definitions, constructions, security proofs, and applications.
The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and plays an important role in many natural applications of threshold decryption.
Sonia Belaïd, Matthieu Rivain, Mélissa Rossi
The random probing model formalizes a leakage scenario where each wire in a circuit leaks with probability $p$. This model holds practical relevance due to its reduction to the noisy leakage model, which is widely regarded as the appropriate formalization for power and electromagnetic side-channel attacks.
In this paper, we present new techniques for designing efficient masking schemes that achieve tighter random probing security with lower complexity. First, we introduce the notion of \emph{cardinal random probing composability} (Cardinal-RPC), offering a new trade-off between complexity and security for composing masking gadgets. Next, we propose a novel refresh technique based on a simple iterative process: randomly selecting and updating two shares with fresh randomness. While not perfectly secure in the standard probing model, this method achieves arbitrary cardinal-RPC security, making it a versatile tool for constructing random-probing secure circuits. Using this refresh, we develop additional basic gadgets (e.g., linear multiplication, addition, and copy) that satisfy the cardinal-RPC notion. Despite the increased complexity, the gains in security significantly outweigh the overhead, with the number of iterations offering useful flexibility.
To showcase our techniques, we apply them to lattice-based signatures. Specifically, we introduce a new random-probing composable gadget for sampling small noise, a key component in various post-quantum algorithms. To assess security in this context, we generalize the random probing security model to address auxiliary inputs and public outputs. We apply our findings to Raccoon, a masking-friendly signature scheme originally designed for standard probing security. We prove the secure composition of our new gadgets for key generation and signature computation, and show that our masking scheme achieves a superior security-performance tradeoff compared to previous approaches based on random probing expansion. To our knowledge, this is the first fully secure instantiation of a post-quantum algorithm in the random probing model.
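A minimal sketch of the iterative refresh described above: repeatedly pick two shares at random and re-randomize them with a fresh value. Each iteration preserves the shared secret; per the paper, the number of iterations is the knob that tunes the cardinal-RPC security level (the security analysis itself is not reproduced here).

```python
import random

q = 3329  # toy modulus (a Kyber-like prime, chosen only for illustration)

def refresh(shares, iterations):
    s = list(shares)
    for _ in range(iterations):
        i, j = random.sample(range(len(s)), 2)  # two distinct shares
        r = random.randrange(q)                 # fresh randomness
        s[i] = (s[i] + r) % q
        s[j] = (s[j] - r) % q
    return s

shares = [5, 17, 123, 3000]
secret = sum(shares) % q
refreshed = refresh(shares, iterations=10)
assert sum(refreshed) % q == secret  # the sharing still opens correctly
```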
Sara Montanari, Riccardo Longo, Alessio Meneghetti
The secure management of private keys is a fundamental challenge, particularly for the general public, as losing these keys can result in irreversible asset loss. Traditional custodial approaches pose security risks, while decentralized secret sharing schemes offer a more resilient alternative by distributing trust among multiple parties. In this work, we extend an existing decentralized, verifiable, and extensible cryptographic key recovery scheme based on Shamir's secret sharing. We introduce a refresh phase that ensures proactive security, preventing long-term exposure of secret shares. Our approach explores three distinct methods for refreshing shares, analyzing and comparing their security guarantees and computational complexity. Additionally, we extend the protocol to support more complex access structures, with a particular focus on threshold access trees, enabling fine-grained control over key reconstruction.
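A minimal sketch (toy field, honest dealer, threshold structure only) of the proactive refresh idea in the extension above: adding a fresh random Shamir sharing of zero re-randomizes the shares without changing the secret, so shares leaked in an earlier epoch become useless. The paper's verifiability and access-tree extensions are omitted.

```python
import random

P = 2**127 - 1  # toy prime field
t, n = 2, 5     # degree-t polynomial, n shareholders

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def share(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, eval_poly(coeffs, x)) for x in range(1, n + 1)]

def refresh(shares):
    zero = share(0)  # fresh sharing with constant term 0
    return [(x, (y + z) % P) for (x, y), (_, z) in zip(shares, zero)]

def reconstruct(points):
    # Lagrange interpolation at x = 0
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(424242)
assert reconstruct(refresh(shares)[: t + 1]) == 424242
```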
18 February 2025
Julius Hermelink, Kai-Chun Ning, Richard Petri
NIST has standardized ML-KEM and ML-DSA as replacements for pre-quantum key exchanges and digital signatures. Both schemes have already seen analysis with respect to side-channels, and first fully masked implementations of ML-DSA have been published. Previous attacks have focused on unprotected implementations or assumed only hiding countermeasures to be in place. Thus, in contrast to ML-KEM, the threat of side-channel attacks for protected implementations of ML-DSA is mostly unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attack in different leakage models using information theory. Then, we explain how an adversary could launch first, second, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis.
Finally, we carry out relevant attacks on a physical device, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise-tolerance while having very little overhead. The results on a physical device validate our simulations.
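A minimal sketch (assumed leakage model and parameters) of the kind of information-theoretic assessment described above: a Monte-Carlo estimate of the mutual information between a small secret value and its noisy Hamming-weight leakage. The paper's systematic study covers many attack points and leakage models; this illustrates the measurement for a single one.

```python
import math, random

sigma = 1.0                   # assumed Gaussian noise level
secrets = list(range(-2, 3))  # e.g., a small centered coefficient

def hw(v):
    return bin(v & 0xFF).count("1")  # Hamming weight of an 8-bit encoding

def pdf(l, s):
    # unnormalized Gaussian; the constant cancels in the ratio below
    return math.exp(-((l - hw(s)) ** 2) / (2 * sigma**2))

# Monte-Carlo estimate of I(S; L) = E[log2 p(l|s) / p(l)], uniform prior
trials, acc = 20000, 0.0
for _ in range(trials):
    s = random.choice(secrets)
    l = hw(s) + random.gauss(0, sigma)
    p_cond = pdf(l, s)
    p_marg = sum(pdf(l, sp) for sp in secrets) / len(secrets)
    acc += math.log2(p_cond / p_marg)
print("I(S; L) approx.", acc / trials, "bits")
```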
Nigel P. Smart, Michael Walter
We show that the randomized TFHE bootstrapping technique of Bourse and Izabachène provides a form of sanitization which is error-simulatable. This means that the randomized bootstrap can be used not only for sanitization of ciphertexts (i.e., to hide the function that has been computed), but also in server-assisted threshold decryption. Thus we extend the server-assisted threshold decryption method of Passelègue and Stehlé (ASIACRYPT '24) to FHE schemes which have a small ciphertext modulus (such as TFHE). In addition, the error-simulatable sanitization enables us to obtain FuncCPA security for TFHE essentially for free.
Veronika Kuchta, Jason T. LeGrow, Edoardo Persichetti
We construct a novel code-based blind signature scheme, using the Matrix Equivalence Digital Signature (MEDS) group action. The scheme is built using similar ideas to the Schnorr blind signature scheme and CSI-Otter, but uses additional public key and commitment information to overcome the difficulties that the MEDS group action faces: lack of module structure (present in Schnorr), lack of a quadratic twist (present in CSI-Otter), and non-commutativity of the acting group. We address security concerns related to public key validation, and prove the security of our protocol in the random oracle model, using the security framework of Kastner, Loss, and Xu, under a variant of the Inverse Matrix Code Equivalence problem and a mild heuristic assumption.
Vahid Jahandideh, Jan Schoone, Lejla Batina
We present a novel scheme for securely computing the AND operation, without requiring additional online randomness. Building on the work of Nikova et al., our construction extends security beyond the first order while ensuring a uniform output distribution and resilience against glitches up to a specified threshold. This result addresses a longstanding open problem in side-channel-resistant masking schemes.
Our approach is based on a new method of share clustering, inspired by finite affine geometry, enabling simultaneous consideration of both security and uniformity. Furthermore, we demonstrate how this clustering-based framework can be applied to higher-order protection of ciphers like Ascon under a fully deterministic masking regime.
By eliminating the need for online randomness within the protected circuit, our work expands the practical scope of efficient higher-order masking schemes for resource-constrained applications.
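A minimal sketch of the classical first-order threshold implementation of AND from Nikova et al. that the scheme above builds on: three shares, no online randomness, and each output share is computed without touching one input share (non-completeness). Its known drawback, a non-uniform output distribution, is precisely what the clustering-based construction above addresses at higher orders; that construction is not reproduced here.

```python
import random

def share3(bit):
    a, b = random.getrandbits(1), random.getrandbits(1)
    return [a, b, bit ^ a ^ b]

def ti_and(x, y):
    # non-completeness: output share i never uses input shares x_i, y_i
    z1 = (x[1] & y[1]) ^ (x[1] & y[2]) ^ (x[2] & y[1])
    z2 = (x[2] & y[2]) ^ (x[0] & y[2]) ^ (x[2] & y[0])
    z3 = (x[0] & y[0]) ^ (x[0] & y[1]) ^ (x[1] & y[0])
    return [z1, z2, z3]

# correctness: the three output shares always XOR to a AND b
for a in (0, 1):
    for b in (0, 1):
        z = ti_and(share3(a), share3(b))
        assert z[0] ^ z[1] ^ z[2] == a & b
```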
Lukas Aumayr, Zeta Avarikioti, Iosif Salem, Stefan Schmid, Michelle Yeo
Blockchain interoperability solutions allow users to hold and transfer assets among different chains, and in so doing benefit from the strengths of each chain. To fully reap the benefits of multi-chain financial operations, it is paramount to support interoperability and cross-chain transactions also on Layer-2 networks, in particular payment channel networks (PCNs). Nevertheless, existing Layer-2 interoperability solutions still involve on-chain events, which limits their scalability and throughput. In this work, we present X-Transfer, the first secure, scalable, and fully off-chain protocol that allows payments across different PCNs. We formalize and prove the security of X-Transfer against rational adversaries with a game-theoretic analysis. To boost efficiency and scalability, X-Transfer also performs transaction aggregation to increase channel liquidity and transaction throughput while simultaneously minimizing payment routing fees. Our empirical evaluation shows that X-Transfer achieves at least twice the throughput of the no-aggregation baseline, confirming its efficiency.
Arthur Herlédan Le Merdy, Benjamin Wesolowski
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average for random instances. It is well known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.