IACR News
Here you can see all recent updates to the IACR webpage.
18 October 2024
Joao F. Doriguello, George Giapitzakis, Alessandro Luongo, Aditya Morolia
ePrint Report
One of the main candidates for post-quantum cryptography is lattice-based cryptography. Its cryptographic security against quantum attackers is based on the worst-case hardness of lattice problems like the shortest vector problem (SVP), which asks to find the shortest non-zero vector in an integer lattice. Asymptotic quantum speedups for solving SVP are known and rely on Grover's search. However, to assess the security of lattice-based cryptography against these Grover-like quantum speedups, it is necessary to carry out a precise resource estimation beyond asymptotic scalings. In this work, we perform a careful analysis of the resources required to implement several sieving algorithms aided by Grover's search for dimensions of cryptographic interest. To this end, we take into account fixed-point quantum arithmetic operations, non-asymptotic Grover's search, the cost of using quantum random access memory (QRAM), different physical architectures, and quantum error correction. We find that even under very optimistic assumptions like circuit-level noise of $10^{-5}$, code cycles of 100 ns, reaction time of 1 $\mu$s, and state-of-the-art arithmetic circuits and quantum error-correction protocols, the best sieving algorithms require $\approx 10^{13}$ physical qubits and $\approx 10^{31}$ years to solve SVP on a lattice of dimension 400, which is roughly the dimension of the minimally secure post-quantum cryptographic standards currently being proposed by NIST. We estimate that a 6-GHz-clock-rate single-core classical computer would take roughly the same amount of time to solve the same problem. We conclude that there is currently little to no quantum speedup in the dimensions of cryptographic interest, and that realising a considerable quantum speedup using quantum sieving algorithms would require significant breakthroughs in theoretical protocols and hardware development.
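As a back-of-envelope consistency check of the classical comparison (a rough sketch, not the paper's detailed cost model), one can convert an assumed total count of $2^k$ sequential operations into a runtime at a 6 GHz clock rate; the placeholder $k \approx 160$ reproduces the $\approx 10^{31}$-year figure quoted above:

    # Back-of-envelope runtime check. Assumption: 2^k sequential operations,
    # one operation per clock cycle; k is a placeholder, not the paper's model.
    SECONDS_PER_YEAR = 3.156e7
    CLOCK_HZ = 6e9  # 6-GHz single-core classical computer

    def years_for_ops(k: float) -> float:
        """Years to execute 2^k operations at CLOCK_HZ."""
        return 2.0 ** k / CLOCK_HZ / SECONDS_PER_YEAR

    print(f"{years_for_ops(160):.1e} years")  # ~7.7e+30, i.e. roughly 10^31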
Giuseppe D'Alconzo, Andrea Flamini, Alessio Meneghetti, Edoardo Signorini
ePrint Report
A multi-signature scheme allows a list of signers to sign a common message. Such schemes are widely used in scenarios where the same message must be signed and transmitted by $N$ users; instead of concatenating $N$ individual signatures, employing a multi-signature reduces the data to be sent.
In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22).
Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM.
Moreover, we instantiate it using three candidates of NIST's additional call for post-quantum signatures, namely LESS, MEDS, and ALTEQ, obtaining good compression rates for different parameter sets.
Dominique Bazin, Ryo Nishimaki
ePrint Report
Bartusek and Raizes (CRYPTO 2024) proposed two security definitions for secret sharing, no-signaling certified deletion and adaptive certified deletion. We prove that adaptive certified deletion does not imply no-signaling certified deletion.
Joohee Lee, Joon-Woo Lee
ePrint Report
Fully homomorphic encryption enables computations over encrypted data, which allows privacy-preserving services to be held between a server and a client. However, real-world applications demand practical considerations, especially concerning public safety and legal investigations. Existing FHE schemes focus solely on privacy, neglecting the societal risks posed by criminal activities utilizing privacy-preserving services. This paper introduces Homomorphic Encryption with Authority (HEwA), a novel framework that balances data privacy with public safety by incorporating an "authority" party. The proposed HEwA system operates in two phases: a normal phase, where client data privacy is protected, and an investigative phase, where the authority, i.e., a legally authorized entity such as a government agency, exerts the right to recover a suspicious client's data. We formalize the security model for HEwA, ensuring that client privacy is protected during the normal phase while enabling authorities to recover encrypted data in the investigative phase. As a concrete example, we design an efficient HEwA system based solely on the CKKS homomorphic encryption scheme, which supports approximate computations over real-number data, making it highly suitable for applications in AI such as secure genomic analysis. We further provide rigorous security proofs. This new approach addresses the tension between privacy and public safety in cloud services, paving the way for responsible use of homomorphic encryption in practice.
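A minimal sketch of the two-phase workflow described above, with an entirely hypothetical interface (all names below are illustrative, not the paper's API; the actual construction is built on CKKS):

    # Hypothetical HEwA workflow sketch -- illustrative names only.
    class HEwA:
        def __init__(self, client_pk, authority_pk):
            # Assumption: ciphertexts are bound to both client and authority keys.
            self.client_pk = client_pk
            self.authority_pk = authority_pk

        def encrypt(self, plaintext):
            """Normal phase: only the client can decrypt results."""
            ...

        def eval(self, circuit, *ciphertexts):
            """Server-side homomorphic evaluation, e.g. of CKKS circuits."""
            ...

        def authority_recover(self, ciphertext, authority_sk):
            """Investigative phase: a legally authorized entity recovers a
            suspicious client's data using its own secret key."""
            ...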
Christof Beierle
ePrint Report
For a $q$-polynomial $L$ over a finite field $\mathbb{F}_{q^n}$, we characterize the differential spectrum of the function $f_L\colon \mathbb{F}_{q^n} \rightarrow \mathbb{F}_{q^n}, x \mapsto x \cdot L(x)$ and show that, for $n \leq 5$, it is completely determined by the image of the rational function $r_L \colon \mathbb{F}_{q^n}^* \rightarrow \mathbb{F}_{q^n}, x \mapsto L(x)/x$. This result follows from the classification of the pairs $(L,M)$ of $q$-polynomials in $\mathbb{F}_{q^n}[X]$, $n \leq 5$, for which $r_L$ and $r_M$ have the same image, obtained in [B. Csajbok, G. Marino, and O. Polverino. A Carlitz type result for linearized polynomials. Ars Math. Contemp., 16(2):585–608, 2019]. For the case of $n>5$, we pose an open question on the dimensions of the kernels of $x \mapsto L(x) - ax$ for $a \in \mathbb{F}_{q^n}$.
We further present a link between functions $f_L$ of differential uniformity bounded above by $q$ and scattered $q$-polynomials and show that, for odd values of $q$, we can construct CCZ-inequivalent functions $f_M$ with bounded differential uniformity from a given function $f_L$ fulfilling certain properties.
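As a small, concrete illustration of the objects involved (our toy example, not the paper's classification), the following computes the differential spectrum of $f_L(x) = x \cdot L(x)$ for the $2$-polynomial $L(X) = X^2$ over $\mathbb{F}_{2^4}$, i.e., for each $a \neq 0$ it counts the solutions of $f_L(x + a) + f_L(x) = b$:

    # Differential spectrum of f_L(x) = x * L(x), L(x) = x^2, over GF(2^4),
    # realized as GF(2)[X]/(X^4 + X + 1).
    from collections import Counter

    MOD, N = 0b10011, 4

    def gf_mul(a: int, b: int) -> int:
        """Carry-less multiplication with reduction modulo X^4 + X + 1."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & (1 << N):
                a ^= MOD
        return r

    def f(x: int) -> int:
        return gf_mul(x, gf_mul(x, x))  # x * L(x) with L(x) = x^2

    spectrum = Counter()
    for a in range(1, 1 << N):
        counts = Counter(f(x ^ a) ^ f(x) for x in range(1 << N))
        spectrum.update(counts.values())
    print(dict(spectrum))  # {2: 120}: every occurring solution count is 2

Here $f_L(x) = x^3$ is the Gold APN function, so the differential uniformity is $2$, consistent with the printed spectrum.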
Prabhanjan Ananth, Saachi Mutreja, Alexander Poremba
ePrint Report
Fundamental principles of quantum mechanics have inspired many new research directions, particularly in quantum cryptography. One such principle is quantum no-cloning, which has led to the emerging field of revocable cryptography. Roughly speaking, in a revocable cryptographic primitive, a cryptographic object (such as a ciphertext or program) is represented as a quantum state in such a way that surrendering it effectively translates into losing the capability to use this cryptographic object. All of the revocable cryptographic systems studied so far have a major drawback: the recipient only receives one copy of the quantum state. Worse yet, the schemes become completely insecure if the recipient receives many identical copies of the same quantum state---even though supporting many copies is clearly much more desirable in practice.
While multi-copy security has been extensively studied for a number of other quantum cryptographic primitives, it has so far received only little treatment in the context of unclonable primitives. Our work, for the first time, shows the feasibility of revocable primitives, such as revocable encryption and revocable programs, that satisfy multi-copy security in oracle models. This suggests that the stronger notion of multi-copy security is within reach in unclonable cryptography more generally, and could therefore lead to a new research direction in the field.
Zehou Wu
ePrint Report
We prove that the existence of a CPA-secure encryption scheme that is insecure in the presence of key cycles of length $n$ implies the existence of such a scheme for key cycles of any length less than $n$. Equivalently, if every encryption scheme in a class is $n$-circular secure and this class is closed under our construction, then every encryption scheme in this class is $n'$-circular secure for $n' > n$.
Chaya Ganesh, Sikhar Patranabis, Shubh Prakash, Nitin Singh
ePrint Report
We propose a generic framework called GAPP for aggregation of polynomial protocols. This allows proving $n$ instances of a polynomial protocol using a single aggregate proof that has $O(\log n)$ size, and can be verified using $O(\log^2 n)$ operations. The satisfiability of several univariate polynomial identities over a domain is reduced to the satisfiability of a single bivariate polynomial identity over a related domain, where the bivariate polynomials interpolate a batch of univariate polynomials over the domain. We construct an information-theoretic protocol for proving the satisfiability of the bivariate polynomial identity, which is then compiled using any bivariate polynomial commitment scheme (PCS) to yield an argument of knowledge for the aggregation relation. GAPP can be applied to several popular SNARKs over bilinear groups that are modeled as polynomial protocols in a black-box way.
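Schematically, the reduction works as follows (notation here is ours, for illustration): given $n$ univariate identities $p_i(X) = 0$ over a domain $H$, interpolate along a second domain $V = \{\omega^1, \dots, \omega^n\}$ to obtain a single bivariate polynomial $P(X, Y)$ with
$$P(X, \omega^i) = p_i(X) \quad \text{for all } i \in [n],$$
so that all $n$ identities hold over $H$ if and only if $P(X, Y)$ vanishes on the product domain $H \times V$.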
We present a new bivariate polynomial commitment scheme, bPCLB, with succinct verification that yields an efficient instantiation of GAPP. In addition, the prover only performs sublinear cryptographic operations in the evaluation proof. Towards constructing bPCLB, we show a new folding technique that we call Lagrangian folding. The bivariate PCS bPCLB and the Lagrangian folding scheme are of independent interest. We implement bPCLB and experimentally validate the practical efficiency of our GAPP instantiation. For the popular PLONK proof system, we achieve $25$-$30\%$ faster proof generation than the naive baseline of generating $n$ separate PLONK proofs. Compared to all existing aggregation schemes that incur additional prover overheads on top of the baseline, we achieve significantly more efficient proving, while retaining succinct verification.
We demonstrate the versatility of our GAPP framework by outlining applications of practical interest: tuple lookups that significantly outperform existing lookup arguments in terms of prover overheads; and proofs for non-uniform computation with a la carte prover cost.
Mariana Gama, Emad Heydari Beni, Jiayi Kang, Jannik Spiessens, Frederik Vercauteren
ePrint Report
In this paper, we show for the first time that it is practical to privately delegate proof generation of zkSNARKs proving up to $2^{20}$ R1CS constraints to a single server. We achieve this by homomorphically computing zkSNARK proof generation, an approach we call blind zkSNARKs. We formalize the concept of blind proofs, analyze their cryptographic properties, and show that the resulting blind zkSNARKs remain sound when compiled using BCS compilation. Garg et al. gave a similar framework at CRYPTO 2024, but no practical instantiation for proving non-trivial computations was known. By delegating proof generation, we are able to reduce client computation time from 10 minutes to mere seconds, while server computation time remains limited to 20 minutes. We also propose a practical construction for vCOED supporting constraint sizes four orders of magnitude larger than the current state-of-the-art verifiable FHE-based approaches. These results are achieved by optimizing Fractal for the GBFV homomorphic encryption scheme, e.g., by designing specialized homomorphic circuits with two-dimensional NTTs. Furthermore, we make the proofs publicly verifiable by appending a zero-knowledge proof of decryption (PoD). We propose a new construction for PoDs, optimized for low proof-generation time, exploiting modulus and ring switching in GBFV; these techniques might be of independent interest. Finally, we implement the latter protocol in C and report on execution time and proof sizes.
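As background for the two-dimensional NTTs mentioned above, here is a minimal plaintext sketch of the standard row-column decomposition (illustrative only; the paper's specialized circuits evaluate such transforms homomorphically under GBFV):

    # Minimal 2D NTT via row-column decomposition over Z_p (illustration only).
    P, N, W = 17, 4, 4   # 4 is a primitive 4th root of unity mod 17 (4^2 = -1)

    def ntt(vec):
        """Naive O(N^2) NTT: out[k] = sum_j vec[j] * W^(j*k) mod P."""
        return [sum(v * pow(W, j * k, P) for j, v in enumerate(vec)) % P
                for k in range(N)]

    def ntt2d(mat):
        """Apply the 1D NTT to every row, then to every column."""
        rows = [ntt(row) for row in mat]                # transform rows
        cols = [ntt(list(c)) for c in zip(*rows)]       # transform columns
        return [list(r) for r in zip(*cols)]            # transpose back

The decomposition reduces one $N \times N$ transform to $2N$ length-$N$ transforms, a structure that is convenient for circuit implementations.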
Arthur Mehta, Anne Müller
ePrint Report
In a functional encryption (FE) scheme, a user that holds a ciphertext and a function-key can learn the result of applying the function to the plaintext message. Security requires that the user does not learn anything beyond the function evaluation. On the other hand, unclonable encryption (UE) is a uniquely quantum primitive, which ensures that an adversary cannot duplicate a ciphertext to decrypt the same message multiple times. In this work we introduce unclonable quantum functional encryption (UFE), which both extends the notion of FE to the quantum setting and also possesses the unclonable security of UE.
We give a construction for UFE that supports arbitrary quantum messages and polynomially sized circuits, and achieves unclonable-indistinguishable security for dependently sampled function keys. In particular, our UFE guarantees that two parties cannot simultaneously recover the correct function outputs using two independently sampled function keys. Our construction combines quantum garbled circuits [BY22] and quantum-key unclonable encryption [AKY24], and leverages techniques from the plaintext expansion arguments in [Hir+23]. As an application we give the first construction for public-key UE with variable decryption keys.
Lastly, we establish a connection between quantum indistinguishability obfuscation (qiO) and quantum functional encryption (QFE), showing that any multi-input indistinguishability-secure quantum functional encryption scheme unconditionally implies the existence of qiO.
Jovan Komatovic, Joachim Neu, Tim Roughgarden
ePrint Report
Multi-valued validated Byzantine agreement (MVBA), a fundamental primitive of distributed computing, enables $n$ processes to agree on a valid $\ell$-bit value, despite $t$ faulty processes behaving arbitrarily. Among hash-based protocols for the asynchronous setting with adaptive faults, the state-of-the-art HMVBA protocol has optimal $O(1)$ time complexity and near-optimal $O(n \ell + n^2 \kappa \log n)$ bit complexity, but tolerates only $t < n/5$ faults. We present REDUCER, an MVBA protocol that matches HMVBA's time and bit complexity and improves resilience to $t < n/4$. Like HMVBA, REDUCER relies solely on collision-resistant hash functions. Toward optimal one-third resilience, we also propose REDUCER++, an MVBA protocol with further improved resilience of $t < (1/3 - \epsilon)n$, for any fixed $\epsilon > 0$, assuming hash functions modeled as random oracles. The time and bit complexities of REDUCER++ remain constant and quasi-quadratic, respectively, with constants depending on $\epsilon$.
Sebastien Balny, Claire Delaplace, Gilles Dequen
ePrint Report
We present a new variant of the LLL lattice reduction algorithm, inspired by Lagrange's notion of pairwise reduction, called L4. Like LLL, our algorithm runs in time polynomial in the dimension of the input lattice, as well as in $\log M$, where $M$ is an upper bound on the norm of the longest vector of the input basis.
We experimentally compared the norm of the first basis vector obtained with LLL and L4 up to dimension 200. On average, we obtain vectors that are up to $16\%$ shorter. We also used our algorithm as a pre-processing step for the BKZ lattice reduction algorithm with blocksize 24. In practice, up to dimension 140, this allows us to reduce the norm of the shortest basis vector on average by $3\%$, while the runtime does not increase significantly. In $10\%$ of our tests, the whole process was even faster.
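For context, the classical Lagrange (Gauss) pairwise reduction underlying the name L4 can be sketched in a few lines (a textbook two-vector step, not the L4 algorithm itself):

    # Classical Lagrange pairwise reduction of two integer vectors (sketch).
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def lagrange_reduce(u, v):
        """Repeatedly size-reduce the longer vector by the shorter one."""
        while True:
            if dot(u, u) > dot(v, v):
                u, v = v, u                       # keep u the shorter vector
            m = round(dot(u, v) / dot(u, u))      # nearest-integer projection
            if m == 0:
                return u, v                       # pair is Lagrange-reduced
            v = [b - m * a for a, b in zip(u, v)]

    print(lagrange_reduce([1, 0], [7, 1]))  # ([1, 0], [0, 1])

Each subtraction strictly shortens the longer vector, so the loop terminates; LLL-style algorithms iterate such pairwise steps across a whole basis.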
Giulia Scaffino, Karl Wüst, Deepak Maram, Alberto Sonnino, Lefteris Kokoris-Kogias
ePrint Report
The increased throughput offered by modern blockchains, such as Sui, Aptos, and Solana, enables processing thousands of transactions per second, but it also introduces higher costs for decentralized application (dApp) developers who need to track and verify changes in the state of their application. This is true because dApp developers run full nodes, which download and re-execute every transaction to track the global state of the chain. However, this becomes prohibitively expensive for high-throughput chains due to high bandwidth, computational, and storage requirements. A common alternative is to use light nodes. However, light nodes only verify the inclusion of a set of transactions and have no guarantees that the set is complete, i.e., that includes all relevant transactions. Under a dishonest majority, light nodes can also be tricked into accepting invalid transactions.
To bridge the gap between full and light nodes, we propose and formalize a new type of blockchain node: the sparse node. A sparse node tracks only a subset of the blockchain’s state: it verifies that the received set of transactions touching the substate is complete, and re-executes those transactions to assess their validity. A sparse node retains important security properties even under adversarial majorities, and requires an amount of resources proportional to the number of transactions in the substate and to the size of the substate itself.
We further present Sunfish, an instantiation of a sparse node protocol. Our analysis and evaluation show that Sunfish reduces the bandwidth consumption of real blockchain applications by several orders of magnitude when compared to a full node.
Giulia Cavicchioni, Alessio Meneghetti, Giovanni Tognolini
ePrint Report
Information set decoding (ISD) algorithms currently offer the most powerful tool to solve the two archetypal problems of coding theory, namely the Codeword Finding Problem and the Syndrome Decoding Problem. Traditionally, ISD algorithms have primarily been studied for linear codes over finite fields equipped with the Hamming metric. Recently, however, other possibilities have been explored, and these algorithms have been adapted to different ambient spaces and metrics, such as the rank metric or the Lee metric over $\mathbb{Z}_m$.
In this paper, we show that it is possible to leverage the ring structure to construct more efficient decoding algorithms than those obtained by simply adapting ISD. In particular, we describe a framework that can be applied to any additive metric including Hamming and Lee, and that can be adapted to the case of the rank metric, providing algorithms to solve the two aforementioned problems, along with their average computational costs.
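For readers unfamiliar with the Lee metric over $\mathbb{Z}_m$ mentioned above, the standard textbook definition in one line of code:

    # Lee weight over Z_m: each coordinate costs min(x mod m, m - x mod m).
    def lee_weight(vec, m):
        return sum(min(x % m, m - x % m) for x in vec)

    assert lee_weight([1, 4], 5) == 2   # 4 is one step away from 0 going "down"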
Jules Baudrin, Christof Beierle, Patrick Felke, Gregor Leander, Patrick Neumann, Léo Perrin, Lukas Stennes
ePrint Report
Recently, Baudrin et al. analyzed a special case of Wagner's commutative diagram cryptanalysis, referred to as commutative cryptanalysis. For a family $(E_k)_k$ of permutations on a finite vector space $G$, commutative cryptanalysis exploits the existence of affine permutations $A,B \colon G \rightarrow G$, $I \notin \{A,B\}$ such that $E_k \circ A (x) = B \circ E_k(x)$ holds with high probability, taken over inputs $x$, for a significantly large set of weak keys $k$. Several attacks against symmetric cryptographic primitives can be formulated within the framework of commutative cryptanalysis, most importantly differential attacks, as well as rotational and rotational-differential attacks. Besides, the notion of $c$-differentials on S-boxes can be analyzed as a special case within this framework.
We discuss the relations between a general notion of commutative cryptanalysis, with $A$ and $B$ being arbitrary functions over a finite Abelian group, and differential cryptanalysis, both from the view of conducting an attack on a symmetric cryptographic primitive, as well as from the view of a theoretical study of cryptographic S-boxes.
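To make the defining relation concrete, the following toy experiment estimates $\Pr_x[E_k \circ A(x) = B \circ E_k(x)]$ for the differential special case $A(x) = x \oplus a$, $B(y) = y \oplus b$, using a 4-bit toy cipher built from the PRESENT S-box (an illustration of the framework, not an attack from the paper):

    # Commutation probability Pr_x[E_k(A(x)) = B(E_k(x))], A and B = XOR constants.
    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box

    def E(k, x):
        return SBOX[x ^ k]  # one-round toy cipher on 4 bits

    def commute_prob(k, a, b):
        """Fraction of inputs x with E(k, x ^ a) == E(k, x) ^ b."""
        return sum(E(k, x ^ a) == E(k, x) ^ b for x in range(16)) / 16

    # Best differential over all nonzero a; key-independent for this toy E:
    best = max(commute_prob(0, a, b) for a in range(1, 16) for b in range(16))
    print(best)  # 0.25 -- the PRESENT S-box is differentially 4-uniform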
Guofeng Tang, Shuai Han, Li Lin, Changzheng Wei, Ying Yan
ePrint Report
With the demand from cryptocurrencies, threshold ECDSA has recently regained popularity. So far, several methods have been proposed to construct threshold ECDSA, including the usage of OT and of homomorphic encryption (HE) schemes such as Paillier and Joye-Libert (JL). Due to the mismatch between the plaintext space and the signature space, HE-based threshold ECDSA always requires zero-knowledge range proofs. However, the overhead of these range proofs constitutes a major portion of the total cost.
In this paper, we propose efficient batch range proofs to improve the efficiency of threshold ECDSA. At the heart of our efficiency improvement is a new technical tool called the Multi-Dimension Forking Lemma, a generalization of the well-known general forking lemma [Bellare and Neven, CCS 2006]. Based on our new tool, we construct efficient batch range proofs for Paillier and JL encryptions, and use them to give batch multiplication-to-addition (MtA) protocols, which are crucial to most threshold ECDSA schemes. Our constructions improve the prior Paillier-based MtA by a factor of 2 and the prior JL-based MtA by a factor of 3, in both computation and bandwidth, in an amortized way. Our batch MtA can be used to improve the efficiency of most Paillier- and JL-based threshold ECDSA schemes. As three typical examples, our benchmarking results show:
-- We improve the Paillier-based CGGMP20 [Canetti et al., CCS 2020] in bandwidth by a factor of 2.1 to 2.4, and in computation by a factor of 1.5 to 1.7.
-- By implementing threshold ECDSA with the batch JL MtA of XAL+23 [Xue et al., CCS 2023] and with our batch JL MtA, respectively, our batch construction improves theirs in bandwidth by a factor of 2.0 to 2.29, and in computation by a factor of 1.88 to 2.09.
-- When replacing the OT-based MtA in DKLs24 [Doerner et al., S$\&$P 2024] with our Paillier-based batch MtA, we improve bandwidth efficiency by $7.8\times$ at the cost of $5.7\times$ slower computation.
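For reference, the MtA functionality at the core of these protocols converts multiplicative shares into additive ones: $P_1$ holds $a$, $P_2$ holds $b$, and they obtain $\alpha$ and $\beta$, respectively, such that
$$\alpha + \beta \equiv a \cdot b \pmod{q},$$
with neither party learning the other's input. In HE-based instantiations, the range proofs certify that the exchanged ciphertexts encrypt suitably small values.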
Mahimna Kelkar, Yunqi Li, Nerla Jean-Louis, Carolina Ortega Pérez, Kushal Babel, Andrew Miller, Ari Juels
ePrint Report
We introduce superclass accountability, a new notion of accountability for security protocols. Classical notions of accountability typically aim to identify specific adversarial players whose violation of adversarial assumptions has caused a security failure. Superclass accountability describes a different goal: to prove the existence of adversaries capable of violating security assumptions.
We develop a protocol design approach for realizing superclass accountability called the sting framework (SF). Unlike classical accountability, SF can be used for a broad range of applications without making protocol modifications and even when security failures aren’t attributable to particular players.
SF generates proofs of existence for superclass adversaries that are publicly verifiable, making SF a promising springboard for reporting by whistleblowers, high-trust bug-bounty programs, and so forth.
We describe how to use SF to prove the existence of adversaries capable of breaching the confidentiality of practical applications that include Tor, block-building infrastructure in web3, ad auctions, and private contact discovery---as well as the integrity of fair-transaction-ordering systems. We report on two end-to-end SF systems we have constructed---for Tor and block-building---and on experiments with those systems.
Lars Wolfgang Folkerts, Nektarios Georgios Tsoutsos
ePrint Report
Large language models (LLMs) have recently transformed many industries, enhancing content generation, customer service agents, data analysis, and even software generation. These applications are often hosted on remote servers to protect the neural-network model IP; however, this raises concerns about the privacy of input queries. Fully Homomorphic Encryption (FHE), an encryption technique that allows computations on private data, has been proposed as a solution to this challenge. Nevertheless, due to the increased size of LLMs and the computational overheads of FHE, today's practical FHE LLMs are implemented using a split-model approach: a user sends their FHE-encrypted data to the server to run an encrypted attention-head layer, and the server returns the result of the layer for the user to run the rest of the model locally. With this method, the server maintains part of its model IP, and the user still gets to perform private LLM inference. In this work, we evaluate the neural-network model IP protections of single-layer split-model LLMs, and demonstrate a novel attack vector that makes it easy for a user to extract the neural-network model IP from the server, bypassing the claimed protections for encrypted computation. In our analysis, we demonstrate the feasibility of this attack and discuss potential mitigations.
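The gist of such an extraction risk can be illustrated on a single linear layer: with enough input/output pairs, the layer's weights follow from least squares (a toy sketch of the general principle only; the paper's attack concerns FHE-based split inference of attention layers):

    # Toy model-extraction principle: recover a linear layer from I/O pairs.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 16))        # server's "secret" layer weights
    X = rng.standard_normal((64, 16))       # user-chosen queries
    Y = X @ W.T                             # observed layer outputs

    # Least squares recovers W exactly (up to float error) once X has full rank.
    W_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
    print(np.allclose(W, W_hat))            # True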
Alexandra Boldyreva, Virendra Kumar, Jiahao Sun
ePrint Report
The paper provides the first provable security analysis of the Butterfly Key Mechanism (BKM) protocol from IEEE 1609.2.1 standard. The BKM protocol specifies a novel approach for efficiently requesting multiple certificates for use in vehicle-to-everything (V2X) communication. We define the main security goals of BKM, such as vehicle privacy and communication authenticity. We prove that the BKM protocol, with small modifications, meets those security goals. We also propose a way to significantly improve the protocol's efficiency without sacrificing security.
Lars Wolfgang Folkerts, Nektarios Georgios Tsoutsos
ePrint Report
Fully Homomorphic Encryption (FHE) is a powerful technology that allows a cloud server to perform computations directly on ciphertexts. To overcome the overhead of sending and storing large FHE ciphertexts, the concept of FHE transciphering was introduced, allowing symmetric key encrypted ciphertexts to be transformed into FHE ciphertexts by deploying symmetric key decryption homomorphically. However, existing FHE transciphering schemes remain unauthenticated and malleable, allowing attackers to manipulate data and remain undetected. This work introduces Proteus, a new methodology for authenticated transciphering, which enables oblivious access control, preventing users from downloading unauthenticated or malicious data. Our protocol implementation adopts ASCON, NIST's new standard for lightweight cryptography, to enable homomorphic hashing and authenticated transciphering. Our ASCON transcipher is paired with the TFHE encryption scheme, which is well suited to perform encrypted rotation and bitwise operations. We evaluate our approach with a variety of real-life privacy-preserving applications, including URL phishing detection, private content moderation of hate speech, and biometric authentication.
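To fix intuition for transciphering (a conceptual mock only: the "FHE" wrapper below is a plaintext stand-in for real homomorphic evaluation, and the toy PRF is not ASCON or TFHE):

    # Conceptual transciphering dataflow (toy mock; no real FHE or ASCON here).
    import hashlib

    def keystream(key: bytes, n: int) -> bytes:
        return hashlib.shake_128(key).digest(n)  # toy PRF, standing in for ASCON

    class FHE:                        # stand-in: a real scheme would hide .value
        def __init__(self, value: bytes):
            self.value = value
        def xor(self, other: "FHE") -> "FHE":
            return FHE(bytes(a ^ b for a, b in zip(self.value, other.value)))

    key, msg = b"k" * 16, b"hello transcipher"
    sym_ct = bytes(m ^ s for m, s in zip(msg, keystream(key, len(msg))))

    # Server side: in a real system the keystream is computed homomorphically
    # from FHE(key); XOR-ing in the (public) symmetric ciphertext then yields
    # an FHE ciphertext of the message without exposing msg or key.
    fhe_ks = FHE(keystream(key, len(msg)))   # mock of the homomorphic keystream
    fhe_msg = FHE(sym_ct).xor(fhe_ks)
    assert fhe_msg.value == msg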