IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
06 December 2024
Qiuyan Du, Qiaohan Chu, Jie Chen, Man Ho Au, Debiao He
Recently, Francati et al. (Asiacrypt 2023) provided the first registered functional encryption (Reg-FE) beyond predicates. Reg-FE addresses the key escrow problem in functional encryption by allowing users to generate their own key pairs, effectively replacing the traditional private-key generator with a key curator. The key curator holds no secret information and runs deterministic algorithms to generate the master public key for encryption and helper keys for decryption. However, existing Reg-FE schemes under standard assumptions require fixed data sizes, which limits their practicality in real-world applications.
In this work, we introduce Multi-Function Registered Functional Encryption for Inner-Product (MultiReg-FE for IP), a novel extension of Reg-FE. It enables users to register multiple functions under a single public key. With MultiReg-FE, we achieve both Reg-FE for Unbounded Inner-Product (Unbounded IP), which removes the need to predetermine vector lengths, and Reg-FE for Attribute-Weighted Sums with Inner-Product (AWSw/IP), allowing computations over arbitrary numbers of attribute-value pairs.
All our schemes achieve adaptive-IND-security. Specifically, we present:
- MultiReg-FE for Inner-Product, which supports an unbounded number of function vectors from each user.
- Reg-FE for Unbounded Inner-Product, removing the need for preset vector lengths.
- The first Reg-FE for AWSw/IP in public-key settings.
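To make the functionality concrete: what a decryptor learns in (unbounded) inner-product FE is only the inner product of the data vector with a registered function vector, never the data vector itself. A minimal plaintext sketch, representing vectors as sparse dicts so their length need not be fixed in advance; summing over indices present in both vectors is one natural convention, chosen here purely for illustration:

```python
# Sketch of the inner-product functionality (not the encryption scheme):
# the decryptor learns only <x, y>. Sparse dicts allow unbounded length.

def inner_product(x: dict[int, int], y: dict[int, int]) -> int:
    """Inner product over the indices present in both vectors."""
    return sum(v * y[i] for i, v in x.items() if i in y)

x = {0: 2, 1: 3, 5: 1}      # data vector, arbitrary support
y = {0: 1, 1: 4, 2: 7}      # function vector registered by a user
print(inner_product(x, y))  # -> 2*1 + 3*4 = 14
```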
Yibin Yang, Fabrice Benhamouda, Shai Halevi, Hugo Krawczyk, Tal Rabin
We propose plausible post-quantum (PQ) oblivious pseudorandom functions (OPRFs) based on the Power Residue PRF (Damgård CRYPTO’88), a generalization of the Legendre PRF. For security parameter $\lambda$, we consider the PRF $\mathsf{Gold}_k(x)$ that maps an integer $x$ modulo a public prime $p = 2^\lambda\cdot g + 1$ to the element $(k + x)^g \bmod p$, where $g$ is public and $\log g \approx 2\lambda$.
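The PRF itself is easy to state in code. Below is a toy sketch of $\mathsf{Gold}_k(x) = (k+x)^g \bmod p$ with deliberately tiny, illustrative parameters ($\lambda = 4$, $g = 7$, so $p = 16 \cdot 7 + 1 = 113$ is prime); a real instantiation uses $\lambda = 128$ and $\log g \approx 2\lambda$:

```python
# Toy sketch of the Power Residue PRF Gold_k(x) = (k + x)^g mod p,
# where p = 2^lam * g + 1 is prime. Parameters here are tiny and
# illustrative only; they are far below cryptographic sizes.

def make_params(lam: int, g: int) -> int:
    """Return p = 2^lam * g + 1; the caller must pick g so that p is prime."""
    return (1 << lam) * g + 1

def gold(k: int, x: int, g: int, p: int) -> int:
    """Evaluate Gold_k(x) = (k + x)^g mod p."""
    return pow((k + x) % p, g, p)

lam, g = 4, 7
p = make_params(lam, g)          # 113, which is prime
y = gold(k=5, x=10, g=g, p=p)    # 15^7 mod 113
print(p, y)                      # -> 113 98
```

Note that the output lies in the subgroup of order $2^\lambda$: raising it to the $2^\lambda$-th power gives $(k+x)^{p-1} \equiv 1 \pmod p$ whenever $k + x \not\equiv 0$.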
At the core of our constructions are efficient novel methods for evaluating $\mathsf{Gold}$ within two-party computation ($\mathsf{2PC}\text{-}\mathsf{Gold}$), achieving different security requirements. Here, the server $\mathcal{P}_s$ holds the PRF key $k$ whereas the client $\mathcal{P}_c$ holds the PRF input $x$, and they jointly evaluate $\mathsf{Gold}$ in 2PC. $\mathsf{2PC}\text{-}\mathsf{Gold}$ uses standard Vector Oblivious Linear Evaluation (VOLE) correlations and is information-theoretic and constant-round in the (V)OLE-hybrid model. We show:
• For a semi-honest $\mathcal{P}_s$ and a malicious $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses a single (V)OLE correlation, and has a communication complexity of $3$ field elements ($2$ field elements if we only require a uniformly sampled key) and a computational complexity of $\mathcal{O}(\lambda)$ field operations. We refer to this as half-malicious security.
• For malicious $\mathcal{P}_s$ and $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses $\frac{\lambda}{4} + \mathcal{O}(1)$ VOLE correlations, and has a communication complexity of $\frac{\lambda}{4} + \mathcal{O}(1)$ field elements and a computational complexity of $\mathcal{O}(\lambda)$ field operations.
These constructions support additional features and extensions, e.g., batched evaluations with better amortized costs where $\mathcal{P}_c$ repeatedly evaluates the PRF under the same key.
Furthermore, we extend $\mathsf{2PC}\text{-}\mathsf{Gold}$ to Verifiable OPRFs and use the methodology from Beullens et al. (ePrint’24) to obtain strong OPRF security in the universally composable setting.
All the protocols are efficient in practice. We implemented $\mathsf{2PC}\text{-}\mathsf{Gold}$—with (PQ) VOLEs—and benchmarked them. For example, our half-malicious (resp. malicious) $n$-batched PQ OPRFs incur about $100$B (resp. $1.9$KB) of amortized communication for $\lambda = 128$ and large enough $n$.
Jake Januzelli, Jiayu Xu
One-more problems like One-More Discrete Logarithm (OMDL) and One-More Diffie--Hellman (OMDH) have found wide use in cryptography, due to their ability to naturally model security definitions for interactive primitives like blind signatures and oblivious PRF. Furthermore, a generalization of OMDH called Threshold OMDH (TOMDH) has proven useful for building threshold versions of interactive protocols. However, due to their complexity it is often unclear how hard such problems actually are, leading cryptographers to analyze them in idealized models like the Generic Group Model (GGM) and Algebraic Group Model (AGM). In this work we give a complete characterization of known group-based one-more problems in the AGM, using the $Q$-DL hierarchy of assumptions defined in the work of Bauer, Fuchsbauer and Loss (CRYPTO '20).
1. Regarding (T)OMDH, we show (T)OMDH is part of the $Q$-DL hierarchy in the AGM; in particular, $Q$-OMDH is equivalent to $Q$-DL. Along the way we find and repair a flaw in the original GGM hardness proof of TOMDH, thereby giving the first correct proof that TOMDH is hard in the GGM.
2. Regarding OMDL, we show the $Q$-OMDL problems constitute an infinite hierarchy of problems in the AGM incomparable to the $Q$-DL hierarchy; that is, $Q$-OMDL is separate from $Q'$-OMDL if $Q' \neq Q$, and also separate from $Q'$-DL unless $Q = Q' = 0$.
Christopher Harth-Kitzerow, Georg Carle
Fixed point arithmetic (FPA) is essential to enable practical Privacy-Preserving Machine Learning. When multiplying two fixed-point numbers, truncation is required to ensure that the product maintains correct precision. While multiple truncation schemes based on Secure Multiparty Computation (MPC) have been proposed, which of the different schemes offers the best trade-off between accuracy and efficiency on common PPML datasets and models has remained underexplored.
In this work, we study several stochastic and exact truncation approaches found in the MPC literature that require different slack sizes, i.e., additional bits required by each secret share to ensure correctness. We provide novel, improved constructions for each truncation approach in the semi-honest 3-PC and malicious 4-PC settings, which reduce communication and round complexity by up to a factor of three. Moreover, we propose a truncation scheme that introduces no communication overhead in the online phase and exactly matches the accuracy of plaintext floating-point PyTorch inference of VGG-16 on the ImageNet dataset, achieving over 80% accuracy using shares with a bitlength of only 32. This is the first time such high PPML accuracy has been demonstrated on ImageNet.
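To see why truncation is needed at all: multiplying two fixed-point encodings with F fractional bits yields a product with 2F fractional bits, which must be shifted back down. A plaintext sketch of the two flavors discussed above (exact floor truncation versus stochastic truncation, which rounds up with probability equal to the dropped fraction); the MPC versions operate on secret shares with slack bits, which this illustration omits:

```python
import random

F = 16                      # fractional bits
SCALE = 1 << F

def encode(v: float) -> int:
    return round(v * SCALE)

def decode(x: int) -> float:
    return x / SCALE

def trunc_exact(x: int) -> int:
    """Deterministic truncation: drop the F low bits (floor division)."""
    return x >> F

def trunc_stochastic(x: int, rng: random.Random) -> int:
    """Probabilistic truncation: rounds up with probability equal to the
    dropped fraction, so the result is unbiased in expectation."""
    return (x + rng.randrange(SCALE)) >> F

a, b = encode(1.5), encode(2.25)
prod = a * b                              # carries 2F fractional bits
print(decode(trunc_exact(prod)))          # -> 3.375
```

Note the stochastic variant always lands within one unit of the floor, so accuracy differences show up only in aggregate over many multiplications.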
Corentin Jeudy, Olivier Sanders
Gadget-based samplers have proven to be a key component of several cryptographic primitives, in particular in the area of privacy-preserving mechanisms. Most constructions today follow the approach introduced by Micciancio and Peikert (MP) yielding preimages whose dimension linearly grows with that of the gadget. To improve performance, some papers have proposed to truncate the gadget but at the cost of an important feature of the MP sampler, namely the ability to invert arbitrary syndromes. Technically speaking, they replace the worst-case MP sampler by an average-case sampler that can only be used in specific contexts. Far from being a mere theoretical restriction, it prevents the main applications of gadget-based samplers from using truncated variants and thus from benefiting from the associated performance gains.
In this paper, we solve this problem by describing a worst-case sampler that still works with truncated gadgets. Its main strength is that it retains the main characteristics of the MP sampler while providing flexibility in the choice of the truncation parameter. As a consequence, it can be used as a plug-in replacement for all applications relying on the MP sampler so far, leading to performance improvements up to 30% as illustrated by several examples in this paper. Our sampler is supported by a thorough security analysis that addresses the hurdles met by previous works and its practicality is demonstrated by a concrete implementation.
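The "gadget" in question is the vector $g = (1, b, b^2, \ldots, b^{\ell-1})$, whose inverse map $g^{-1}(u)$ writes a syndrome $u$ as small base-$b$ digits $d$ with $\langle g, d \rangle = u \pmod q$. Truncating the gadget drops the high-order entries, which is exactly why inverting arbitrary syndromes becomes the difficulty this paper addresses. A sketch of the plain (untruncated) decomposition:

```python
# Base-b gadget decomposition: digits d with <g, d> = u (mod q),
# for the gadget vector g = (1, b, b^2, ..., b^{l-1}).

def gadget_decompose(u: int, b: int, l: int, q: int) -> list[int]:
    """Base-b digits d_0..d_{l-1} of u mod q, each in [0, b)."""
    u %= q
    digits = []
    for _ in range(l):
        digits.append(u % b)
        u //= b
    return digits

def gadget_recompose(d: list[int], b: int, q: int) -> int:
    return sum(di * b**i for i, di in enumerate(d)) % q

q, b = 3329, 4
l = 6                      # 4^6 = 4096 >= q, enough digits
u = 1234
d = gadget_decompose(u, b, l, q)
assert gadget_recompose(d, b, q) == u
print(d)                   # -> [2, 0, 1, 3, 0, 1]
```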
Véronique Cortier, Alexandre Debant, Pierrick Gaudry, Léo Louistisserand
Postal voting is a frequently used alternative to on-site voting. Traditionally, its security relies on organizational measures, and voters have to trust many entities. In recent years, several schemes have been proposed to add verifiability properties to postal voting while preserving vote privacy.
Postal voting comes with specific constraints. We conduct a systematic analysis of this setting and we identify a list of generic attacks, highlighting that some attacks seem unavoidable. This study is applied to existing systems of the literature.
We then propose Vote&Check, a postal voting protocol which provides a high level of security, with a reduced number of authorities. Furthermore, it requires only basic cryptographic primitives, namely hash functions and signatures. The security properties are proven in a symbolic model, with the help of the ProVerif tool.
Michael Adjedj, Constantin Blokh, Geoffroy Couteau, Antoine Joux, Nikolaos Makriyannis
We present a novel protocol for two-party ECDSA that achieves two rounds (a single back-and-forth communication) at the cost of a single oblivious linear function evaluation (OLE). In comparison, the previous work of [DKLs18] (S&P 2018) achieves two rounds at the cost of three OLEs, while [BHL24] (Manuscript 2024) requires expensive zero-knowledge proofs on top of the OLE. We demonstrate the security of our protocol by proving that in the generic group model, any adversary capable of generating forgeries for our protocol can be transformed into an adversary that finds preimages for the ECDSA message digest function (e.g., the SHA family). Interestingly, our analysis is closely related to, and has ramifications for, the `presignatures' mode of operation—[CGGMP20] (CCS 2020), [GroSho22] (EUROCRYPT 2022).
Motivated by applications to embedded cryptocurrency wallets, where a single server maintains distinct, shared public keys with separate clients (i.e., a star-shaped topology), and with the goal of minimizing communication, we instantiate our protocol using Paillier encryption and suitable zero-knowledge proofs. To reduce computational overhead, we thoroughly optimize all components of our protocol under sound cryptographic assumptions, specifically small-exponent variants of RSA-style assumptions.
Finally, we implement our protocol and provide benchmarks. At the 128-bit security level, the signing phase requires approximately 50ms of computation time on a standard Linux machine, and 2KB of bandwidth.
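The OLE the protocol is priced in is, functionally, a multiplicative-to-additive share conversion: two parties holding private values $a$ and $b$ end up with additive shares of $a \cdot b$. An insecure local mock for illustration only (a real OLE hides $a$ from the receiver and $b$, $\beta$ from the sender; this is not the paper's protocol):

```python
import random

# Insecure mock of an OLE correlation, for illustration: it produces
# additive shares (alpha, beta) of the product a*b over a prime field.
# A real OLE computes this without either party seeing the other's input.

P = 2**255 - 19   # a prime field; this particular choice is illustrative

def mock_ole(a: int, b: int, rng: random.Random) -> tuple[int, int]:
    """Return shares (alpha, beta) with alpha + beta = a * b (mod P)."""
    beta = rng.randrange(P)           # receiver's output share
    alpha = (a * b - beta) % P        # sender's output share
    return alpha, beta

rng = random.Random(42)
a, b = 1234, 5678                     # e.g. shares of the key and nonce
alpha, beta = mock_ole(a, b, rng)
assert (alpha + beta) % P == (a * b) % P
print("shares recombine to a*b")
```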
Zhao Minghui, Trevor Yap
Side-Channel Analysis (SCA) exploits physical vulnerabilities in systems to reveal secret keys. With the rise of the Internet-of-Things, evaluating SCA attacks has become crucial. Profiling attacks, enhanced by Deep Learning-based Side-Channel Analysis (DLSCA), have shown significant improvements over classical techniques. Recent works demonstrate that ensemble methods outperform single neural networks. However, almost every existing ensemble selection method in SCA only picks the top few best-performing neural networks for the ensemble, an approach we coin the Greedily-Selected Method (GSM), which may not be optimal.
This work proposes the Evolutionary Avenger Initiative (EAI), a genetic algorithm-driven ensemble selection algorithm, to create effective ensembles for DLSCA. We investigate two fitness functions and evaluate EAI across four datasets, including AES and ASCON implementations. We show that EAI outperforms GSM, recovering secrets with the fewest traces. Notably, EAI successfully recovers secret keys for the ASCON datasets where GSM fails, demonstrating its effectiveness.
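A toy sketch of genetic-algorithm ensemble selection in the spirit of EAI (the paper's actual encodings and fitness functions may differ): each chromosome is a bitmask over candidate models, and fitness is the validation accuracy of the ensemble's majority vote:

```python
import random

# Toy GA-based ensemble selection. Chromosome = bitmask over models;
# fitness = majority-vote accuracy on a tiny synthetic validation set.

labels = [0, 1, 1, 0, 1, 0, 1, 1]
preds = [                      # per-model predictions on the validation set
    [0, 1, 1, 0, 0, 0, 1, 0],  # model 0
    [0, 1, 0, 0, 1, 1, 1, 1],  # model 1
    [1, 1, 1, 0, 1, 0, 0, 1],  # model 2
    [0, 0, 1, 1, 1, 0, 1, 1],  # model 3
]

def fitness(mask):
    chosen = [p for bit, p in zip(mask, preds) if bit]
    if not chosen:
        return 0.0
    votes = [round(sum(col) / len(chosen)) for col in zip(*chosen)]
    return sum(v == y for v, y in zip(votes, labels)) / len(labels)

def evolve(pop_size=8, gens=20, rng=random.Random(0)):
    pop = [[rng.randint(0, 1) for _ in preds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(preds))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(child))       # point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A greedy top-k selection (GSM) would instead just sort models by individual accuracy and take the first few; the GA can find subsets whose errors cancel under voting.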
Jia-Lin Chan, Wai-Kong Lee, Denis C.-K Wong, Wun-She Yap, Bok-Min Goi
Advancements in deep learning (DL) have not only revolutionized many aspects of our lives, but also introduced privacy concerns, because DL processes vast amounts of information closely related to our daily life. Fully Homomorphic Encryption (FHE) is one of the promising solutions to this privacy issue, as it allows computations to be carried out directly on encrypted data. However, FHE incurs a high computational cost, which is a huge barrier to its widespread adoption. Many prior works have proposed techniques to enhance the speed of FHE over the past decade, but they often impose significant memory requirements, which may reach hundreds of gigabytes. Recently, focus has shifted from purely improving speed to managing FHE's memory consumption as a critical challenge. Rovida and Leporati introduced a technique to minimize rotation key memory by retaining only essential keys, yet this technique is limited to cases with symmetric numerical patterns (e.g., -2 -1 0 1 2), constraining its broader utility. In this paper, a new technique, Adaptive Rotation Key (ARK), is proposed that minimizes rotation key memory consumption by exhaustively analyzing numerical patterns to produce a minimal subset of shared rotation keys. ARK also provides a dual-configuration option, enabling users to prioritize either memory efficiency or computational speed. In memory-prioritized mode, ARK reduces rotation key memory consumption by 41.17% with a 12.57% increase in execution time. In speed-prioritized mode, it achieves a 24.62% rotation key memory reduction with only a 0.21% impact on execution time. This flexibility positions ARK as an effective solution for optimizing FHE across varied use cases, marking a significant advancement in optimization strategies for FHE-based privacy-preserving systems.
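The memory/speed trade-off ARK navigates can be illustrated on plaintext lists (this is the standard key-composition trick, not ARK's algorithm): keeping a rotation key for every needed offset is fast but memory-hungry, while keeping only power-of-two keys and composing them trades extra rotations (time) for far fewer keys (memory):

```python
# Plaintext illustration of rotation-key composition: only log(n) keys
# (power-of-two offsets) are kept, and any rotation is composed from them.

def rotate(v, r):
    r %= len(v)
    return v[r:] + v[:r]

def rotate_composed(v, r, key_set):
    """Compose a rotation by r from the available power-of-two offsets."""
    for k in sorted(key_set, reverse=True):
        while r >= k:
            v = rotate(v, k)
            r -= k
    assert r == 0, "key set cannot express this rotation"
    return v

n = 16
keys = [1 << i for i in range((n - 1).bit_length())]   # offsets 1, 2, 4, 8
v = list(range(n))
assert rotate_composed(v, 11, keys) == rotate(v, 11)
print(len(keys), "keys instead of", n - 1)             # -> 4 keys instead of 15
```

In FHE each rotation is an expensive key-switching operation, which is why ARK's analysis of which offsets actually occur, and can be shared, pays off.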
02 December 2024
Sela Navot, Stefano Tessaro
This paper initiates the study of one-more unforgeability for multi-signatures and threshold signatures as a stronger security goal, ensuring that ℓ executions of a signing protocol cannot result in more than ℓ signatures. This notion is widely used in the context of blind signatures, but we argue that it is also a convenient way to model strong unforgeability for other types of distributed signing protocols. We provide formal security definitions for one-more unforgeability (OMUF) and show that the HBMS multi-signature scheme does not satisfy this definition, whereas MuSig and MuSig2 do. We also show that mBCJ multi-signatures do not satisfy OMUF, and expose a subtle issue with their existential unforgeability (which does not contradict their original security proof). For threshold signatures, we show that FROST satisfies OMUF, but ROAST does not.
Jakob Burkhardt, Hannah Keller, Claudio Orlandi, Chris Schwiegelshohn
We explore the use of distributed differentially private computations across multiple servers, balancing the tradeoff between the error introduced by the differentially private mechanism and the computational efficiency of the resulting distributed algorithm.
We introduce the linear-transformation model, where clients have access to a trusted platform capable of applying a public matrix to their inputs. Such computations can be securely distributed across multiple servers using simple and efficient secure multiparty computation techniques.
The linear-transformation model serves as an intermediate model between the highly expressive central model and the minimal local model. In the central model, clients have access to a trusted platform capable of applying any function to their inputs. However, this expressiveness comes at a cost, as it is often expensive to distribute such computations, leading to the central model typically being implemented by a single trusted server. In contrast, the local model assumes no trusted platform, which forces clients to add significant noise to their data. The linear-transformation model avoids the single point of failure for privacy present in the central model, while also mitigating the high noise required in the local model.
We demonstrate that linear transformations are very useful for differential privacy, allowing for the computation of linear sketches of input data. These sketches largely preserve utility for tasks such as private low-rank approximation and private ridge regression, while introducing only minimal error, critically independent of the number of clients. Previously, such accuracy had only been achieved in the more expressive central model.
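A minimal plaintext sketch of the linear-transformation model (illustrative, not the paper's construction): each client's row passes through a public random $\pm 1$ sketch matrix $A$, the sketched rows are summed, and the Gaussian mechanism adds noise calibrated to the L2 sensitivity of that sum. Crucially, the noise scale depends on $A$ and the per-row norm bound, not on the number of clients:

```python
import math
import random

# Sketch of "apply a public matrix, then add calibrated noise".
# Pure Python, no MPC: the point is only the noise calibration.

def gaussian_sigma(eps: float, delta: float, l2_sens: float) -> float:
    """Standard Gaussian-mechanism calibration."""
    return math.sqrt(2 * math.log(1.25 / delta)) * l2_sens / eps

def sketch_and_privatize(rows, d, m, eps, delta, rng):
    A = [[rng.choice((-1, 1)) for _ in range(d)] for _ in range(m)]
    total = [sum(x[j] for x in rows) for j in range(d)]
    s = [sum(A[i][j] * total[j] for j in range(d)) for i in range(m)]
    # Crude bound: with +/-1 entries and per-row L2 norm <= 1, changing
    # one row moves the sketch by at most sqrt(m * d) in L2 norm.
    sigma = gaussian_sigma(eps, delta, math.sqrt(m * d))
    return [si + rng.gauss(0, sigma) for si in s]

rng = random.Random(7)
rows = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]]   # each row has L2 norm <= 1
out = sketch_and_privatize(rows, d=3, m=4, eps=1.0, delta=1e-5, rng=rng)
print(len(out))   # -> 4
```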
David Pointcheval, Robert Schädlich
Multi-input Attribute-Based Encryption (ABE) is a generalization of key-policy ABE where attributes can be independently encrypted across several ciphertexts, and a joint decryption of these ciphertexts is possible if and only if the combination of attributes satisfies the policy of the decryption key. We extend this model by introducing a new primitive that we call Multi-Client ABE (MC-ABE), which provides the usual enhancements of multi-client functional encryption over multi-input functional encryption. Specifically, we separate the secret keys that are used by the different encryptors and consider the case that some of them may be corrupted by the adversary. Furthermore, we tie each ciphertext to a label and enable a joint decryption of ciphertexts only if all ciphertexts share the same label. We provide constructions of MC-ABE for various policy classes based on SXDH. Notably, we can deal with policies that are not a conjunction of local policies, which has been a limitation of previous constructions from standard assumptions.
Subsequently, we introduce the notion of Multi-Client Predicate Encryption (MC-PE) which, in contrast to MC-ABE, does not only guarantee message-hiding but also attribute-hiding. We present a new compiler that turns any constant-arity MC-ABE into an MC-PE for the same arity and policy class. Security is proven under the LWE assumption.
Asmita Adhikary, Giacomo Tommaso Petrucci, Philippe Tanguy, Vianney Lapôtre, Ileana Buhan
Identifying and mitigating vulnerable locations to fault injections requires significant expertise and expensive equipment. Fault injections can damage hardware, cause software crashes, and pose safety and security hazards. Simulating fault injections offers a safer alternative, and fault simulators have steadily developed, though they vary significantly in functionality, target applications, fault injection methods, supported fault models, and guarantees. We present a taxonomy categorizing fault simulators based on their target applications and development cycle stages, from source code to final product. Our taxonomy provides insights and comparisons to highlight open problems.
Kyeongtae Lee, Seongho Park, Byeongjun Jang, Jihye Kim, Hyunok Oh
In this paper, we propose $\textsf{LiLAC}$, a novel field-agnostic, transparent multilinear polynomial commitment scheme (MLPCS) designed to address key challenges in polynomial commitment systems. For a polynomial with $N$ coefficients, $\textsf{LiLAC}$ achieves $\mathcal{O}(N)$ prover time, $\mathcal{O}(\log N)$ verifier time, and $\mathcal{O}(\log N)$ proof size, overcoming the limitations of $\mathcal{O}(\log^2 N)$ verification time and proof size without any increase in other costs. This is achieved through an optimized polynomial commitment strategy and the recursive application of the tensor IOPP, making $\textsf{LiLAC}$ both theoretically optimal and practical for large-scale applications. Furthermore, $\textsf{LiLAC}$ offers post-quantum security, providing robust protection against future quantum computing threats.
We propose two constructions of $\textsf{LiLAC}$: a field-agnostic $\textsf{LiLAC}$ and a field-specific $\textsf{LiLAC}$. Each construction demonstrates superior performance compared to the state-of-the-art techniques in their respective categories of MLPCS. First, the field-agnostic $\textsf{LiLAC}$ is compared against Brakedown (CRYPTO 2023), which is based on a tensor IOP and satisfies field-agnosticity. In experiments conducted over a 128-bit field with a coefficient size of $2^{30}$, the field-agnostic $\textsf{LiLAC}$ achieves a proof size that is $3.7\times$ smaller and a verification speed that is $2.2\times$ faster, while maintaining a similar proof generation time compared to Brakedown. Furthermore, the field-specific $\textsf{LiLAC}$ is evaluated against WHIR (ePrint 2024/1586), which is based on an FRI. With a 128-bit field and a coefficient size of $2^{30}$, the field-specific $\textsf{LiLAC}$ achieves a proof generation speed that is $2.8\times$ faster, a proof size that is $27\%$ smaller, and a verification speed that is $14\%$ faster compared to WHIR.
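To see why $\mathcal{O}(N)$ prover time is the natural target here: a multilinear polynomial in $n$ variables is determined by its $N = 2^n$ values on the Boolean cube, and can be evaluated at any point by $n$ rounds of folding that touch the $N$ entries a constant number of times in total. A sketch (over floats for readability; a PCS works over a field):

```python
# O(N) evaluation of a multilinear polynomial from its values on {0,1}^n,
# by folding one variable per round:
#   f(..., x) = (1 - x) * f(..., 0) + x * f(..., 1)

def eval_multilinear(values, point):
    """values: the 2^n evaluations on {0,1}^n (low index bit = first variable).
    point: (x_1, ..., x_n), here floats for illustration."""
    v = list(values)
    for x in point:
        half = len(v) // 2
        v = [(1 - x) * v[2 * i] + x * v[2 * i + 1] for i in range(half)]
    return v[0]

# n = 2: values f(0,0), f(1,0), f(0,1), f(1,1)
vals = [1.0, 3.0, 5.0, 7.0]
assert eval_multilinear(vals, (0, 0)) == 1.0
assert eval_multilinear(vals, (1, 1)) == 7.0
print(eval_multilinear(vals, (0.5, 0.5)))   # -> 4.0
```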
Mojtaba Fadavi, Sabyasachi Karati, Aylar Erfanian, Reihaneh Safavi-Naini
A group signature allows a user to sign a message anonymously on behalf of a group and provides accountability through an opening authority who can ``open'' a signature and reveal the signer's identity. Group signatures have been widely used in privacy-preserving applications including anonymous attestation and anonymous authentication. Fully dynamic group signatures allow new members to join the group and existing members to be revoked if needed. Symmetric-key based group signature schemes are post-quantum group signatures whose security relies on the security of symmetric-key primitives such as cryptographic hash functions and pseudorandom functions.
In this paper, we design a symmetric-key based fully dynamic group signature scheme, called DGMT, that redesigns DGM (Buser et al., ESORICS 2019) and removes two important shortcomings that limit its application in practice: (i) interaction with the group manager for signature verification, and (ii) the need for the group manager to store and manage an unacceptably large amount of data. We prove the security of DGMT (unforgeability, anonymity, and traceability) and give a full implementation of the system. Compared to all known post-quantum group signature schemes with the same security level, DGMT has the shortest signature size. We also analyze the DGM signature revocation approach and show that, despite its conceptual novelty, it has significant hidden costs that make it much more expensive than the traditional revocation-list approach.
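The "symmetric-key primitives" mentioned above really do suffice for signatures. A minimal Lamport one-time signature built from a hash function, shown only to illustrate the hash-based setting (it is a classic construction; DGMT's actual scheme is considerably more involved):

```python
import hashlib
import secrets

# Classic Lamport one-time signature from a hash function.
# Illustrative only: one key pair must never sign two messages.

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(s) for s in pair] for pair in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]   # reveal one preimage per bit

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(s) == pk[i][b] for (i, b), s in zip(enumerate(bits), sig))

sk, pk = keygen()
sig = sign(sk, b"join the group")
assert verify(pk, b"join the group", sig)
assert not verify(pk, b"another message", sig)
print("ok")
```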
Nikita Snetkov, Jelizaveta Vakarjuk, Peeter Laud
Smart-ID is an application for signing and authentication provided as a service to residents of Belgium, Estonia, Latvia, and Lithuania. Its security relies on multi-prime server-supported RSA, password-authenticated key shares, and a clone detection mechanism. Unfortunately, the security properties of the underlying protocol have been specified only in a ``game-based'' manner. There is no corresponding ideal functionality that the actual protocol is shown to securely realize in the universal composability (UC) framework. In this paper, we remedy that shortcoming by presenting such a functionality (optionally parameterized with a non-threshold signature scheme) and proving that the existing Smart-ID protocol securely realizes it. Additionally, we present a server-supported protocol for generating ECDSA signatures and show that it also securely realizes the proposed ideal functionality in the Global Random Oracle Model (UC+GROM).
Seyed MohammadReza Hosseini, Hossein Pilaram
One of the most crucial measures to maintain data security is the use of cryptographic schemes and digital signatures built upon cryptographic algorithms. The resistance of cryptographic algorithms against conventional attacks is guaranteed by the computational difficulty and the immense amount of computation required to break them. In the last decade, with advances in quantum computing technology and the realization of quantum computers, which have higher computational power than conventional computers and can execute special kinds of algorithms (i.e., quantum algorithms), the security of many existing cryptographic algorithms has been called into question. The reason is that by using quantum computers to execute specific quantum algorithms, the computational difficulty of conventional cryptographic algorithms can be reduced, making it possible to break them in a relatively short period of time. Therefore, researchers began efforts to find new quantum-resistant cryptographic algorithms that cannot be broken in a short time, even using quantum computers. Such algorithms are called post-quantum cryptographic algorithms. In this article, we provide a comprehensive review of the challenges and vulnerabilities of different kinds of conventional cryptographic algorithms against quantum computers. Afterward, we review the latest cryptographic algorithms and standards that have been proposed to confront the threats posed by quantum computers. We present a classification of post-quantum cryptographic algorithms and digital signatures based on their technical specifications, provide examples of each category, and outline the strengths and weaknesses of each category.
Asier Gambra, Durba Chatterjee, Unai Rioja, Igor Armendariz, Lejla Batina
Voltage fault injection attacks are a particularly powerful threat to secure embedded devices because they exploit brief, hard-to-detect power fluctuations, causing errors or bypassing security mechanisms. To counter these attacks, various detectors are employed, but as defenses strengthen, increasingly elusive glitches continue to emerge. Artificial intelligence, with its inherent ability to learn and adapt to complex patterns, presents a promising solution. This research presents an AI-driven voltage fault injection detector that analyzes clock signals directly. We provide a detailed fault characterization of the STM32F410 microcontroller, emphasizing the impact of faults on the clock signal. Our findings reveal how power supply glitches directly affect the clock, correlating closely with the amount of power injected. This led to the development of a lightweight Multi-Layer Perceptron model that analyzes clock traces to distinguish between safe executions, glitches that keep the device running but may introduce faults, and glitches that cause the target to reset. While previous AI applications in fault injection have primarily focused on parameter optimization and simulation assistance, in this work we use the adaptability of machine learning to create a fault detection model that is specifically adjusted to the hardware that implements it. The developed glitch detector achieves high accuracy, showing this to be a promising direction for combating FI attacks on a variety of platforms.
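The three-way classification described above (safe execution, running-but-faulted, reset) can be illustrated with a minimal MLP forward pass. This is only a sketch: the trace length, layer sizes, and weights below are hypothetical stand-ins, not the architecture or trained parameters from the paper.

```python
import numpy as np

# Hypothetical dimensions; the paper's actual trace length and
# network architecture are not reproduced here.
TRACE_LEN = 256
HIDDEN = 32
CLASSES = 3  # 0: safe execution, 1: running-but-faulted glitch, 2: reset glitch

rng = np.random.default_rng(0)

# Stand-in weights; in practice these would be learned from labeled clock traces.
W1 = rng.standard_normal((TRACE_LEN, HIDDEN)) * 0.05
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, CLASSES)) * 0.05
b2 = np.zeros(CLASSES)

def classify_trace(trace: np.ndarray) -> np.ndarray:
    """Forward pass of a small MLP: maps a clock trace to class probabilities."""
    h = np.maximum(trace @ W1 + b1, 0.0)   # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

probs = classify_trace(rng.standard_normal(TRACE_LEN))
```

A deployed detector would threshold or argmax `probs` to raise an alarm on the two glitch classes; being a single small matrix pipeline is what keeps the model lightweight enough for on-device use.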
Nicholas Brandt, Mia Filić, Sam A. Markelon
Key Transparency (KT) systems have emerged as a critical technology for securely distributing and verifying the correctness of public keys used in end-to-end encrypted messaging services. Despite substantial academic interest, increased industry adoption, and IETF standardization efforts, KT systems lack a holistic and formalized security model, limiting their resilience to practical threats and constraining future development. In this paper, we introduce the first cryptographically sound formalization of KT as an ideal functionality, clarifying the assumptions, security properties, and potential vulnerabilities of deployed KT systems. We identify a significant security concern — a possible impersonation attack by a malicious service provider — and propose a backward-compatible solution. Additionally, we address a core scalability bottleneck by designing and implementing a novel, privacy-preserving verifiable Bloom filter (VBF) that significantly improves KT efficiency without compromising security. Experimental results demonstrate the effectiveness of our approach, marking a step forward in both the theoretical and practical deployment of scalable KT solutions.
Yackolley Amoussou-Guenou, Maurice Herlihy, Maria Potop-Butucaru
The paper promotes a new design paradigm for Byzantine-tolerant distributed algorithms using trusted abstractions (oracles) specified in a functional manner. The contribution of the paper is conceptual. The objective is to design fundamental distributed algorithms, such as reliable broadcast and asynchronous Byzantine consensus, using trusted execution environments, and to help designers compare various solutions on common ground.
In this framework we revisit Bracha's seminal work on Asynchronous Byzantine Consensus. Our solution uses a trusted monotonic counter abstraction and tolerates $t$ Byzantine processes in a system with $n$ processes, $n \geq 2t+1$. The keystone of our construction is a novel and elegant Byzantine Reliable Broadcast algorithm resilient to $t