International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

14 July 2025

Hao Cheng, Georgios Fotiadis, Johann Großschädl, Daniel Page
ePrint Report
Non-degenerate bilinear maps on elliptic curves, commonly referred to as pairings, have many applications including short signature schemes, zero-knowledge proofs and remote attestation protocols. Computing a state-of-the-art pairing at the $128$-bit security level, such as the optimal ate pairing over the curve BLS12-381, is very costly due to the high complexity of some of its sub-operations: most notable are the Miller loop and final exponentiation. In the past ten years, a few optimized pairing implementations have been introduced in the literature, but none of them took advantage of the vector (resp., SIMD) extensions of modern Intel and AMD CPUs, especially AVX-512; this is surprising, because doing so offers the potential to reach significant speed-ups. Consequently, the questions of 1) how computation of the optimal ate pairing can be effectively vectorized, and 2) what execution time such a vectorized implementation can achieve are still open. This paper addresses said questions by introducing a carefully optimized AVX-512 implementation of the optimal ate pairing on BLS12-381. A central feature of the implementation is the use of $8$-way Integer Fused Multiply-Add (IFMA) instructions, which can execute eight $52 \times 52$-bit multiplications in a SIMD-parallel fashion. We introduce new vectorization strategies and describe optimizations of existing ones to speed up arithmetic operations in the extension fields $\mathbb{F}_{p^4}$, $\mathbb{F}_{p^6}$, and $\mathbb{F}_{p^{12}}$ as well as certain higher-level functions. Furthermore, we discuss some parallelization bottlenecks and how they impact execution time. We benchmarked our pairing software, which we call avxbls, on an Intel Core i3-1005G1 ("Ice Lake") CPU and found that it needs $1,265,314$ clock cycles (resp., $1,195,236$ clock cycles) for the full pairing, with the Granger-Scott cyclotomic squaring (resp., compressed cyclotomic squaring) being used in the final exponentiation. For comparison, the non-vectorized (i.e., scalar) x64 assembly implementation from the widely-used blst library has an execution time of $2,351,615$ cycles, which is $1.86$ times (resp., $1.97$ times) slower. avxbls also outperforms Longa's implementation (CHES 2023) by almost the same factor. The practical importance of these results is amplified by Intel's recent announcement that AVX10, which includes the IFMA instructions, will be supported in all future CPUs.
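To make the IFMA-based approach concrete, here is a minimal Python model (not taken from avxbls, which is written with AVX-512 intrinsics and assembly) of the two 52-bit multiply-add primitives behind vpmadd52luq/vpmadd52huq and of a schoolbook product over radix-2^52 limbs; all function names and limb counts are illustrative.

MASK52 = (1 << 52) - 1

def madd52lo(acc, b, c):
    # model of vpmadd52luq on one lane: acc + low 52 bits of the 104-bit product b*c
    return acc + ((b * c) & MASK52)

def madd52hi(acc, b, c):
    # model of vpmadd52huq on one lane: acc + high 52 bits of the 104-bit product b*c
    return acc + ((b * c) >> 52)

def to_limbs(x, n=8):
    # split an integer into n radix-2^52 limbs, least significant first
    return [(x >> (52 * i)) & MASK52 for i in range(n)]

def from_limbs(limbs):
    return sum(l << (52 * i) for i, l in enumerate(limbs))

def mul_schoolbook(a, b, n=8):
    # schoolbook product accumulation using only the two IFMA-style primitives;
    # a real implementation would interleave carry propagation and modular reduction
    lo, hi = [0] * (2 * n), [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            lo[i + j] = madd52lo(lo[i + j], a[i], b[j])
            hi[i + j + 1] = madd52hi(hi[i + j + 1], a[i], b[j])
    return from_limbs(lo) + from_limbs(hi)

x = 0x1234567890ABCDEF ** 4
y = 0xFEDCBA0987654321 ** 4
assert mul_schoolbook(to_limbs(x), to_limbs(y)) == x * y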
Mengce Zheng, Abderrahmane Nitaj
ePrint Report
We propose a novel partial key exposure attack on common prime RSA by leveraging lattice-based techniques. In common prime RSA, the primes $p$ and $q$ are defined as $p=2ga+1$ and $q=2gb+1$ for a common prime $g$. Recently, Zheng introduced the first partial key exposure attack on this scheme; however, it is limited to instances where $g > N^{1/4}$. In contrast, our work delves deeper into partial key exposure attacks by presenting a unified generic case that targets one consecutive unknown block of the private key. By employing a lattice-based solving strategy for trivariate integer polynomials, we can effectively identify additional weak private keys that are vulnerable to partial exposure. Extensive numerical experiments validate the correctness and practicality of our proposed attack on common prime RSA.
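For readers unfamiliar with the setting, the following toy Python sketch only illustrates the common prime RSA key structure $p=2ga+1$, $q=2gb+1$ that the attack exploits; the parameter sizes are far too small for real use and the lattice attack itself is not shown.

# Toy illustration of the common prime RSA key structure (not the attack itself).
import random
from math import gcd
from sympy import isprime, randprime

def gen_common_prime_rsa(g_bits=24, ab_bits=24):
    g = randprime(2**(g_bits - 1), 2**g_bits)      # the shared prime g
    while True:
        a = random.getrandbits(ab_bits) | 1
        b = random.getrandbits(ab_bits) | 1
        if a == b or gcd(a, b) != 1:
            continue
        p, q = 2 * g * a + 1, 2 * g * b + 1        # both p-1 and q-1 share the factor 2g
        if isprime(p) and isprime(q):
            return g, a, b, p, q

g, a, b, p, q = gen_common_prime_rsa()
N = p * q
# with gcd(a, b) = 1, lcm(p-1, q-1) = 2gab, so exponents live modulo 2gab rather than phi(N)
assert (p - 1) % (2 * g) == 0 and (q - 1) % (2 * g) == 0
print(f"g has ~{g.bit_length()} bits; N has ~{N.bit_length()} bits")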
Mengce Zheng, Yansong Feng, Abderrahmane Nitaj, Yanbin Pan
ePrint Report
In this paper, we present a new small private exponent attack on RSA by combining continued fractions and Coppersmith's techniques. Our results improve upon previous bounds, including Herrmann-May's attack, by leveraging a crucial relation derived from continued fractions. Additionally, we extend the range of vulnerable small private exponents by considering the partial leakage of prime factors or their sum. Our main result establishes an improved attack bound $ d < N^{1-\alpha/3-\gamma/2} $, where $ \alpha := \log_{N} e $ and $ \gamma := \log_{N} |p+q-S| $, with $ S $ being an approximation of the prime sum $ p+q $. Furthermore, we explore more applications of our main attack in scenarios where the primes share some most or least significant bits. The validity of our proposed main attack is confirmed through numerical experiments, demonstrating its improved performance over existing attacks.
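As background, the sketch below shows the classical continued-fraction step (Wiener's attack) on which this line of work builds; the paper's improvement additionally combines it with Coppersmith's techniques and an approximation $S$ of $p+q$, which are not reproduced here. All parameter choices are illustrative.

import math
from sympy import randprime
from math import gcd

def convergents(x, y):
    # yield the continued-fraction convergents h/k of x/y as pairs (h, k)
    quots = []
    while y:
        q = x // y
        quots.append(q)
        x, y = y, x - q * y
        h, k = quots[-1], 1
        for qi in reversed(quots[:-1]):
            h, k = qi * h + k, h
        yield h, k

def wiener(e, N):
    # try to recover a small private exponent d from the convergents of e/N
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                  # candidate p + q
        disc = s * s - 4 * N             # candidate (p - q)^2
        if disc >= 0 and math.isqrt(disc) ** 2 == disc:
            return d
    return None

# toy key with a deliberately small private exponent (well below N^{1/4}/3)
p = randprime(2**255, 2**256)
q = randprime(2**255, 2**256)
phi = (p - 1) * (q - 1)
while True:
    d = randprime(2**15, 2**16)
    if gcd(d, phi) == 1:
        break
e = pow(d, -1, phi)
assert wiener(e, p * q) == d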
Kanwal Batool, Saleem Anwar, Zoltán Ádám Mann
ePrint Report
Patient monitoring in hospitals, nursing centers, and home care can be largely automated using cameras and machine-learning-based video analytics, thus considerably increasing the efficiency of patient care. In particular, Facial-expression-based Pain Assessment Systems (FePAS) can automatically detect pain and notify medical personnel. However, current FePAS solutions using cloud-based video analytics offer very limited security and privacy protection. This is problematic, as video feeds of patients constitute highly sensitive information. To address this problem, we introduce SecFePAS, the first FePAS solution with strong security and privacy guarantees. SecFePAS uses advanced cryptographic protocols to perform neural network inference in a privacy-preserving way. To counteract the significant overhead of these cryptographic protocols, SecFePAS uses multiple optimizations. First, instead of a cloud-based setup, we use edge computing with a 5G connection to benefit from lower network latency. Second, we use a combination of transfer learning and quantization to devise neural networks with high accuracy and optimized inference time. Third, SecFePAS quickly filters out unessential frames of the video to focus the in-depth analysis on key frames. We tested SecFePAS with the SqueezeNet and ResNet50 neural networks on a real pain estimation benchmark. SecFePAS outperforms state-of-the-art FePAS systems in accuracy and optimizes secure processing time.
George Lu, Brent Waters, David J. Wu
ePrint Report
Registered attribute-based encryption (ABE) enables fine-grained access control to encrypted data without a trusted authority. In this model, users generate their own public keys and register their public key along with a set of attributes with a key curator. The key curator aggregates the public keys into a short master public key that functions as the public key for an ABE scheme.

A limitation of ABE (registered or centralized) is the assumption that a single entity manages all of the attributes in a system. In many settings, the attributes belong to different organizations, making it unrealistic to expect that a single entity manage all of them. In the centralized setting, this motivated the notion of multi-authority ABE, where multiple independent authorities control their individual set of attributes. Access policies are then defined over attributes across multiple authorities.

In this work, we introduce multi-authority registered ABE, where multiple (independent) key curators each manage their individual sets of attributes. Users can register their public keys with any key curator, and access policies can be defined over attributes from multiple key curators. Multi-authority registered ABE combines the trustless nature of registered ABE with the decentralized nature of multi-authority ABE.

We start by constructing a multi-authority registered ABE scheme from composite-order pairing groups. This scheme supports an a priori bounded number of users and access policies that can be represented by a linear secret sharing scheme (which includes monotone Boolean formulas). Our construction relies on a careful integration of ideas from pairing-based registered ABE and multi-authority ABE schemes. We also construct a multi-authority registered ABE scheme that supports an unbounded number of users and arbitrary monotone policies using indistinguishability obfuscation (and function-binding hash functions).

13 July 2025

Input-Output Global
Job Posting

IOG is a technology company focused on blockchain research and development. We are renowned for our scientific approach to blockchain development, emphasizing peer-reviewed research and formal methods to ensure security, scalability, and sustainability.

What the role involves:

As a Cryptography Engineer you'll contribute to the design, implementation, & integration of secure cryptographic protocols across Cardano-related initiatives, such as Cardano Core Cryptographic Primitives, Mithril, ALBA, Leios, etc. This role bridges applied research & engineering, focusing on translating cutting-edge cryptographic designs into robust, production-grade systems. The cryptography engineer will collaborate closely with researchers, protocol designers, architects, product managers, & QA teams to ensure cryptographic correctness, performance, and system alignment.

  • Work both independently & in collaboration with distributed teams across multiple time zones, showing initiative and ownership over tasks
  • Design & implement crypto constructions, e.g. digital signatures, zero-knowledge proofs, accumulators, commitment schemes
  • Work independently on software development tasks, demonstrating proactive problem-solving skills.
  • Develop & maintain cryptographic libraries (primarily in Rust and Haskell, occasionally in C) with an emphasis on safety, performance, clarity, and auditability
  • Translate cryptographic concepts from academic research into well-structured, reliable implementations that will be used in production systems
  • Contribute to cryptographic design discussions, parameter tuning, & performance benchmarking, particularly for elliptic curve and zk-based constructions
  • Analyze & validate protocol security, ensuring soundness, liveness, and resistance to practical adversaries
  • Write and maintain clear documentation, including developer guides and internal design notes
  • Troubleshoot, debug, and optimize cryptographic code and its interactions with broader systems
  • While the role is remote, applicants must be located in Japan only

    Closing date for applications:

    Contact: Marios Nicolaides

    More information: https://apply.workable.com/io-global/j/70FC5D8A0C/


    12 July 2025

    Gildas Avoine, Xavier Carpent, Diane Leblanc-Albarel
    ePrint Report
    Password managers have gained significant popularity and are widely recommended as an effective means of enhancing user security. However, current cloud-based architectures assume that password manager providers are trusted entities. This assumption is never questioned because such password managers are operated by their own designers, who are therefore judge and jury. This exposes users to significant risks, as a malicious provider could perform covert actions without being detected to access or alter users' credentials. Most password managers rely solely on the strength of a user-chosen master password. As a result, a covert adversary could conceivably perform large-scale offline attacks to recover credentials protected by weak master passwords. Even more concerning, some password managers do not encrypt credentials on users' devices, transmitting them in plaintext before encrypting them server-side, e.g., Google, in its default configuration. On the other hand, key-protected password managers, e.g., KeePassXC, are less commonly used, as they lack functionality for synchronizing credentials across multiple devices.

    In this paper, we establish a comprehensive set of security properties that should be guaranteed by any cloud-based password manager. We demonstrate that none of the widely deployed mainstream password managers fulfill these fundamental requirements. Nevertheless, we argue that it is feasible to design a solution that is resilient against covert adversaries while allowing users to synchronize their credentials across devices. To support our claims, we propose a password manager design that fulfills all the required properties.
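As a point of reference for the kind of client-side protection the authors call for, here is a minimal Python sketch (not the paper's design) in which credentials are encrypted on the device, under a key derived from the master password with a per-user random salt, before anything reaches the provider; the library, parameters, and field names are illustrative.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    # slow password-based KDF; the iteration count here is only an example
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(master_password)

def encrypt_entry(master_password: bytes, plaintext: bytes) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(master_password, salt)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # only salt, nonce, and ciphertext ever leave the device
    return {"salt": salt, "nonce": nonce, "ct": ciphertext}

def decrypt_entry(master_password: bytes, blob: dict) -> bytes:
    key = derive_key(master_password, blob["salt"])
    return AESGCM(key).decrypt(blob["nonce"], blob["ct"], None)

blob = encrypt_entry(b"correct horse battery staple", b"example.com: alice / hunter2")
assert decrypt_entry(b"correct horse battery staple", blob) == b"example.com: alice / hunter2"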
    Pierre Civit, Daniel Collins, Vincent Gramoli, Rachid Guerraoui, Jovan Komatovic, Manuel Vidigueira, Pouriya Zarbafian
    ePrint Report
    No $t$-resilient Byzantine Agreement (or Reliable Broadcast) protocol can guarantee agreement among $n$ correct processes in a non-synchronous network if the actual number of faulty processes $f$ is $\geq n - 2t$. This limitation highlights the need to augment such fragile protocols with mechanisms that detect safety violations, such as forensic support and accountability.

    This paper introduces simple and efficient techniques to address this challenge by proposing a new generic transformation, $\mathcal{ABC}^{++}$. The transformation leverages two key primitives: the ratifier and the propagator. By sequentially composing these primitives with any closed-box Byzantine Agreement (or Reliable Broadcast) protocol, $\mathcal{ABC}^{++}$ produces a robust counterpart that provides both (adaptively secure) forensic support and ($1$-delayed adaptively-secure) accountability. The transformation incurs a subquadratic additive communication overhead, with only $1$ round of overhead for decision and forensic support, and $2$ additional rounds for detection in case of a safety violation (or $O\big(\log(n)\big)$ additional rounds with optimized communication).

    The generality of $\mathcal{ABC}^{++}$ offers a compelling general alternative to the subquadratic forensic support solution by Sheng et al. (FC'23) tailored to HotStuff-like protocols, while being more efficient than the (strongly-adaptively-secure) quadratic $\mathcal{ABC}$ accountable transformation (IPDPS'22, JPDC'23). Moreover, it provides the first subquadratic accountable Byzantine Agreement (or Reliable Broadcast) protocols against a ($1$-delayed) adaptive adversary.

    Finally, any subquadratic accountable Reliable Broadcast protocol can be integrated into the $\tau_{scr}$ transformation (ICDCS'22) to produce an improved variant, $\tau_{scr}^{++}$. This new version compiles any deterministic (and even beyond) protocol into its accountable counterpart with subquadratic multiplicative communication overhead, significantly improving upon the original quadratic overhead in $\tau_{scr}$.

    11 July 2025

    Suvradip Chakraborty, James Hulett, Dakshita Khurana
    ePrint Report
    An $(\epsilon_\mathsf{s},\epsilon_{\mathsf{zk}})$-weak non-interactive zero knowledge (NIZK) argument has soundness error at most $\epsilon_\mathsf{s}$ and zero-knowledge error at most $\epsilon_{\mathsf{zk}}$. We show that as long as $\mathsf{NP}$ is hard in the worst case, the existence of an $(\epsilon_\mathsf{s}, \epsilon_{\mathsf{zk}})$-weak NIZK proof or argument for $\mathsf{NP}$ with $\epsilon_{\mathsf{zk}} + \sqrt{\epsilon_\mathsf{s}} < 1$ implies the existence of one-way functions. To obtain this result, we introduce and analyze a strong version of universal approximation that may be of independent interest.

    As an application, we obtain NIZK amplification theorems based on very mild worst-case complexity assumptions. Specifically, [Bitansky-Geier, CRYPTO'24] showed that $(\epsilon_\mathsf{s}, \epsilon_{\mathsf{zk}})$-weak NIZK proofs (with $\epsilon_\mathsf{s}$ and $\epsilon_{\mathsf{zk}}$ constants such that $\epsilon_\mathsf{s} + \epsilon_{\mathsf{zk}} < 1$) can be amplified to make their errors negligible, but needed to assume the existence of one-way functions. Our results can be used to remove the additional one-way function assumption and obtain NIZK amplification theorems that are (almost) unconditional; only requiring the mild worst-case assumption that if $\mathsf{NP} \subseteq \mathsf{ioP/poly}$, then $\mathsf{NP} \subseteq \mathsf{BPP}$.
    Paula Arnold, Sebastian Berndt, Thomas Eisenbarth, Sebastian Faust, Marc Gourjon, Elena Micheli, Maximilian Orlt, Pajam Pauls, Kathrin Wirschem, Liang Zhao
    ePrint Report
    Rigorous protection against physical attacks that simultaneously and adaptively combine passive side-channel observations with active fault injections is an active and recent area of research. At CRYPTO 2023, Berndt et al. presented the “LaOla” scheme for protecting arbitrary circuits against said attacks. Their constructions use polynomial masking with an optimal (least possible) number of shares and come with security proofs based on formal notions of security.

    In this work, we improve the security of this construction significantly by adapting it. We present a new refresh gadget designed specifically for combined attacks. This gadget not only counteracts passive side-channel attacks but additionally randomizes the effect of faults in a detectable but secret-independent manner. To achieve this, we introduce sufficient and attainable security definitions that are stronger than those in the work of Berndt et al. Further, we apply the principle to the LaOla construction and prove the stronger security notions for the adapted multiplication gadget, as well as the original properties of composability and strong security against adaptive attacks combining side-channels and faults.
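For context, the sketch below shows plain polynomial (Shamir-style) masking with a standard refresh that adds a random sharing of zero. It is only the baseline concept and does not capture the paper's new gadget, which additionally randomizes the effect of injected faults in a detectable but secret-independent way; the field, evaluation points, and degree are illustrative.

import random

P = 2**127 - 1          # prime field for the shares (illustrative choice)
ALPHAS = [1, 2, 3]      # public evaluation points: n = 3 shares, polynomial degree 1

def share(x, degree=1):
    # encode x as evaluations of a random degree-`degree` polynomial f with f(0) = x
    coeffs = [x] + [random.randrange(P) for _ in range(degree)]
    return [sum(c * pow(a, i, P) for i, c in enumerate(coeffs)) % P for a in ALPHAS]

def refresh(shares, degree=1):
    # re-randomize the sharing by adding a fresh random sharing of zero
    return [(s + z) % P for s, z in zip(shares, share(0, degree))]

def reconstruct(shares):
    # Lagrange interpolation at 0 recovers the masked value
    x = 0
    for i, (ai, si) in enumerate(zip(ALPHAS, shares)):
        num, den = 1, 1
        for j, aj in enumerate(ALPHAS):
            if j != i:
                num = (num * -aj) % P
                den = (den * (ai - aj)) % P
        x = (x + si * num * pow(den, -1, P)) % P
    return x

secret = 0xCAFE
assert reconstruct(refresh(share(secret))) == secret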
    Seunghu Kim, Seongbong Choi, Hyung Tae Lee
    ePrint Report
    Matrix inversion is a fundamental operation, but performing it over encrypted matrices remains a significant challenge. This is mainly due to the fact that conventional inversion algorithms—such as Gaussian elimination—depend heavily on comparison and division operations, which are computationally expensive to perform under homomorphic encryption. To mitigate this, Ahn et al. (ESORICS 2023) introduced an inversion method based on iterative matrix multiplications. However, their approach encrypts matrices entry-wise, leading to poor scalability. A key limitation of prior work stems from the absence of an efficient matrix multiplication technique for matrix-packed ciphertexts, particularly one with low multiplicative depth.

    In this paper, we present a novel homomorphic matrix multiplication algorithm optimized for matrix-packed ciphertexts, requiring only a multiplicative depth of two. Building on this foundation, we propose an efficient algorithm for homomorphic matrix inversion. Experimental results show that our method outperforms the state-of-the-art: for $8\times 8$ matrices, it achieves a $6.8\times$ speedup over the method by Ahn et al., and enables inversion of larger matrices that were previously infeasible. We further compare our homomorphic matrix multiplication technique against existing matrix-packed homomorphic matrix multiplication algorithms. When used for iterative inversion, our method consistently outperforms prior approaches. In particular, for $16\times 16$ and $32\times 32$ matrices, it achieves $1.88\times$ and $1.43\times$ speedups, respectively, over the algorithm by Aikata and Roy. Finally, we demonstrate the practical benefits of our method by applying it to privacy-preserving linear regression. For a dataset of $64$ samples with $8$ features, our approach achieves a $1.13\times$ speedup in training time compared to the state-of-the-art homomorphic matrix inversion solution.
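To illustrate why inversion by iterated matrix multiplications suits homomorphic evaluation, here is a hedged NumPy sketch of a classical multiplication-only iteration (Newton-Schulz), run in the clear; the paper's contribution is the encrypted matrix arithmetic and ciphertext packing, which this sketch does not represent, and the iteration shown is a textbook variant rather than the authors' exact recurrence.

import numpy as np

def newton_schulz_inverse(A, iterations=30):
    # approximate A^{-1} using only matrix additions and multiplications
    n = A.shape[0]
    # standard scaling of the initial guess: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iterations):
        X = X @ (2 * I - A @ X)   # X_{k+1} = X_k (2I - A X_k)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 10 * np.eye(8)   # well-conditioned test matrix
X = newton_schulz_inverse(A)
assert np.allclose(X @ A, np.eye(8), atol=1e-6)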
    Ahmet Ramazan Ağırtaş, Emircan Çelik, Oğuz Yayla
    ePrint Report
    While digital signatures serve to confirm message integrity and the identity of the signer, the inherent link between the public key and the signer’s identity can pose challenges in anonymized networks or applications focused on preserving privacy. Signatures with randomizable keys aim to disentangle the signer’s identity from their public key, thus preserving the signature’s validity. This approach ensures that the signature, even with a randomized key, maintains its verifiability without linking it to the signer’s identity. Although signatures with randomizable keys effectively maintain privacy, additional structural improvements are necessary in specialized signature schemes for complex cryptographic frameworks. Threshold structure-preserving signatures offer a way to construct modular protocols while retaining the benefits of structure-preserving properties. Thus, the randomizable key version of them is essential for a wide range of applications, making it the foundation of this work. In this study, signatures with randomizable key principles are combined with threshold structure-preserving signatures to build a strong cryptographic base for privacy-preserving applications. This foundation ensures that signatures are valid while also being modular and unlinkable. An earlier version of this work appeared in the 22nd International Conference on Security and Cryptography (SECRYPT 2025) [6]; the present article extends that study by adding the formal security proofs of the introduced protocols.
    Karthik Garimella, Austin Ebel, Brandon Reagen
    ePrint Report
    Fully Homomorphic Encryption (FHE) is an encryption scheme that allows for computation to be performed directly on encrypted data. FHE effectively closes the loop on secure and outsourced computing; data is encrypted not only at rest and in transit, but also during processing. Moreover, modern FHE schemes such as RNS-CKKS (with the canonical slot encoding) encrypt one-dimensional floating-point vectors, which makes such a scheme an ideal candidate for building private machine learning systems. However, RNS-CKKS provides a limited instruction set: SIMD addition, SIMD multiplication, and cyclic rotation of these encrypted vectors. This restriction makes performing multi-dimensional tensor operations (such as those used in machine learning) challenging. Practitioners must pack multi-dimensional tensors into 1-D vectors and map tensor operations onto this one-dimensional layout rather than their traditional nested structure. And while prior systems have made significant strides in automating this process, they often hide critical packing decisions behind layers of abstraction, making debugging, optimizing, and building on top of these systems difficult.

    In this work we ask: can we build an FHE tensor system with a straightforward and transparent packing strategy regardless of the tensor operation? We answer affirmatively and develop a packing strategy based on Einstein summation (einsum) notation. We find einsum notation to be ideal for our approach since the notation itself explicitly encodes the dimensional structure and operation directly into its syntax, naturally exposing how tensors should be packed and manipulated in FHE. We make use of einsum's explicit language to decompose einsum expressions into a fixed set of FHE-friendly operations: dimension expanding and broadcasting, element-wise multiplication, and a reduction along the contraction dimensions.
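The decomposition can be illustrated in the clear with NumPy: the sketch below evaluates the einsum expression "ij,jk->ik" using only dimension expansion/broadcasting, element-wise multiplication, and a sum over the contraction index, i.e., the three FHE-friendly steps named above. The mapping of these steps onto RNS-CKKS slots is what EinHops implements and is not shown here.

import numpy as np

def einsum_ij_jk_ik(A, B):
    # compute 'ij,jk->ik' via broadcast, multiply, reduce (no matmul call)
    expanded_A = A[:, :, np.newaxis]    # shape (i, j, 1)
    expanded_B = B[np.newaxis, :, :]    # shape (1, j, k)
    product = expanded_A * expanded_B   # broadcast to (i, j, k), element-wise multiply
    return product.sum(axis=1)          # reduce over the contraction index j

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
assert np.allclose(einsum_ij_jk_ik(A, B), np.einsum("ij,jk->ik", A, B))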

    We implement our design and present EinHops, which stands for Einsum Notation for Homomorphic Tensor Operations. EinHops is a minimalist system that factors einsum expressions into a fixed sequence of FHE operations, enabling developers to perform complex tensor operations using RNS-CKKS while maintaining full visibility into the underlying packing strategy. We evaluate EinHops on a range of tensor operations from a simple transpose to complex multi-dimensional contractions. We show that the explicit nature of einsum notation allows us to build an FHE tensor system that is simple, general, and interpretable. We open-source EinHops at the following repository: https://github.com/baahl-nyu/einhops.
    Yusuf Ozmiş
    ePrint Report
    This paper explores how zero-knowledge proofs can enhance Bitcoin's functionality and privacy. First, we consider Proof-of-Reserve schemes: by using zk-STARKs, a custodian can prove its Bitcoin holdings are more than a predefined threshold X, without revealing addresses or actual balances. We outline a STARK-based protocol for Bitcoin UTXOs and discuss its efficiency. Second, we examine ZK Light Clients, where a mobile or lightweight device verifies Bitcoin's proof-of-work chain using succinct proofs. We propose a protocol for generating and verifying a STARK-based proof of a chain of block headers, enabling trust-minimized client operation. Third, we explore Privacy-Preserving Rollups via BitVM: leveraging BitVM, we design a conceptual rollup that keeps transaction data confidential using zero-knowledge proofs. In each case, we analyze security, compare with existing approaches, and discuss implementation considerations. Our contributions include the design of concrete protocols adapted to Bitcoin's UTXO model and an assessment of their practicality. The results suggest that while ZK proofs can bring powerful features (e.g., on-chain reserve audits, trustless light clients, and private layer-2 execution) to Bitcoin, each application requires careful trade-offs in efficiency and trust assumptions.
    Nathan Maillet, Cyrius Nugier, Vincent Migliore, Jean-Christophe Deneuville
    ePrint Report
    HQC is a code-based cryptosystem that has recently been announced for standardization after the fourth round of the NIST post-quantum cryptography standardization process. During this process, NIST specifically required submitters to provide two kinds of implementation: a reference one, meant to serve readability and compliance with the specifications; and an optimized one, aimed at showing the performance of the scheme alongside other desirable properties such as resilience against implementation misuse or side-channel analysis. While most side-channel attacks on PQC candidates in this process were mounted against reference implementations, very few consider the optimized, allegedly side-channel-resistant (at least, constant-time) implementations. Unfortunately, the optimized HQC version only targets x86-64 with Single Instruction Multiple Data (SIMD) support, which reduces the code portability, especially for non-generalist computers. In this work, we present two power side-channel attacks on the optimized HQC implementation with just the SIMD support deactivated. We show that the power consumption leaks enough information to recover the private key, assuming the adversary can ask the target to replay a legitimate decryption with the same inputs. Under this assumption, we first present a key-recovery attack targeting standard Instruction Set Architectures (ARM T32, RISC-V, x86-64) and compiler optimization levels. It is based on the well-known Hamming Distance model of power consumption leakage, and exposes the key from a single oracle call. During execution on a real target, we show that a different leakage, stemming from the micro-architecture, simplifies the recovery of the private key. This more direct second attack succeeds with a 99% chance from 83 executions of the same legitimate decryption. While the weakness leveraged in this work seems quite devastating, we discuss simple yet effective and efficient countermeasures to prevent such a key recovery.
    Noor Athamnah, Noga Ron-Zewi, Ron D. Rothblum
    ePrint Report
    Interactive Oracle Proofs (IOPs) form the backbone of some of the most efficient general-purpose cryptographic proof-systems. In an IOP, the prover can interact with the verifier over multiple rounds, where in each round the prover sends a long message, from which the verifier only queries a few symbols.

    State-of-the-art IOPs achieve a linear-size prover and a poly-logarithmic verifier but require a relatively large, logarithmic, number of rounds. While the Fiat-Shamir heuristic can be used to eliminate the need for actual interaction, in modern highly-parallelizable computer architectures such as GPUs, the large number of rounds still translates into a major bottleneck for the prover, since it needs to alternate between computing the IOP messages and the Fiat-Shamir hashes. Motivated by this fact, in this work we study the round complexity of linear-prover IOPs.

    Our main result is an IOP for a large class of Boolean circuits, with only $O(\log^*(S))$ rounds, where $\log^*$ denotes the iterated logarithm function (and $S$ is the circuit size). The prover has linear size $O(S)$ and the verifier runs in time $\mathrm{polylog}(S)$ and has query complexity $O(\log^*(S))$. The protocol is both conceptually simpler, and strictly more efficient, than prior linear prover IOPs for Boolean circuits.
    Sayon Duttagupta, Arman Kolozyan, Georgio Nicolas, Bart Preneel, Dave Singelee
    ePrint Report
    The Matter protocol has emerged as a leading standard for secure IoT interoperability, backed by major vendors such as Apple, Google, Amazon, Samsung, and others. With millions of Matter-certified devices already deployed, its security assurances are critical to the safety of global IoT ecosystems. This paper presents the first in-depth security evaluation and formal analysis of Matter’s core protocols, focusing on its Passcode-Authenticated Session Establishment (PASE) and Certificate Authenticated Session Establishment (CASE) mechanisms. While these are based on the well-studied SPAKE2+ and SIGMA respectively, Matter introduces modifications that compromise the original security guarantees. Our analysis reveals multiple cryptographic design flaws, including low-entropy passcodes, static salts, and weak PBKDF2 parameters – all of which contradict Matter’s own threat model and stated security goals. We highlight cases where Matter delegates critical security decisions to vendors, rather than enforcing robust cryptographic practices in the specification, thereby making the system more fragile and susceptible to exploitation. We formally model both standard and Matter-adapted variants of these protocols in ProVerif, confirming several of Matter’s security goals, but disproving others. Our findings go as far as rendering some of Matter’s own mitigations insufficient, exposing all Matter-certified devices to threats classified as “High Risk” in their own documentation. As part of our study, we also discovered previously unknown vulnerabilities in Matter’s public codebase, which we responsibly disclosed to the developers, leading to updates in the codebase.
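To see why static salts and weak key-derivation parameters matter, consider the toy Python sketch below: with a fixed salt, an attacker can precompute a single table of derived values for the whole low-entropy passcode space and reuse it everywhere, defeating the purpose of salting. The salt, iteration count, and passcode handling are illustrative and deliberately simplified; Matter's actual session establishment wraps the derived values in SPAKE2+, which this sketch does not model.

import hashlib

STATIC_SALT = b"example-fixed-salt"   # hypothetical static salt shared across devices
ITERATIONS = 1_000                    # hypothetically weak iteration count

def derive(passcode: int) -> bytes:
    # PBKDF2-HMAC-SHA256 over the decimal passcode; all parameters are illustrative
    return hashlib.pbkdf2_hmac("sha256", f"{passcode:08d}".encode(),
                               STATIC_SALT, ITERATIONS, dklen=32)

# Because the salt never changes, one precomputed table serves every device that uses it.
# (The full 8-digit passcode space has 10^8 entries; truncated here to keep the demo fast.)
table = {derive(pc): pc for pc in range(10_000)}
assert table[derive(5_432)] == 5_432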
    Xander Pottier, Jan-Pieter D'Anvers, Thomas de Ruijter, Ingrid Verbauwhede
    ePrint Report
    (Multi-)scalar multiplication is a crucial operation in FHE-based AI applications, and its performance has a significant impact on the overall efficiency of these applications. In this paper we present SMOOTHIE: (Multi-)Scalar Multiplication Optimizations On TFHE, introducing new techniques to improve the performance of single- and multi-scalar multiplications in TFHE. We show that significant improvements can be made by adopting the bucket method known from the elliptic curve field. However, as the characteristics of TFHE and elliptic curves differ, we need to adapt this method and introduce novel optimizations. We propose a new negation-with-offset technique that eliminates direct carry propagation after ciphertext negation. Additionally, we introduce powershift aggregation and bucket merging techniques for the bucket aggregation step, which exploit TFHE properties to substantially reduce bootstrap operations. In the multi-scalar multiplication case specifically, we implement a bucket doubling method that eliminates the need for precomputation on each input ciphertext. Our implementation is integrated into the TFHE-rs library and achieves up to a ×2.05 speedup for single-scalar multiplication compared to the current state of the art, with multi-scalar multiplication improvements up to ×7.51, depending on the problem size.
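For intuition, the sketch below shows the bucket (Pippenger-style) multi-scalar multiplication method that the paper adapts, written over plain integers as the additive group so the bookkeeping is easy to follow; in SMOOTHIE the "points" are TFHE ciphertexts, the window handling differs, and the cost is dominated by bootstrapping, none of which is captured here.

def msm_bucket(scalars, points, window_bits=4):
    # compute sum(s_i * P_i) by processing the scalars window by window
    radix = 1 << window_bits
    max_bits = max(s.bit_length() for s in scalars)
    total = 0
    # process windows from most significant to least significant
    for win in reversed(range(0, max_bits, window_bits)):
        total *= radix                       # "double" the accumulator window_bits times
        buckets = [0] * radix                # one bucket per possible digit value
        for s, p in zip(scalars, points):
            digit = (s >> win) & (radix - 1)
            if digit:
                buckets[digit] += p          # only additions, no scalar multiplications
        # aggregate buckets: sum_j j * buckets[j] via a running-sum trick
        running, acc = 0, 0
        for b in reversed(buckets[1:]):
            running += b
            acc += running
        total += acc
    return total

scalars = [23, 7, 191, 64]
points = [5, 11, 2, 9]
assert msm_bucket(scalars, points) == sum(s * p for s, p in zip(scalars, points))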
    Tom Godden, Ruben De Smet, Kris Steenhaut, An Braeken
    ePrint Report
    Online services increasingly require users to verify their identity or parts of it, often by law. This verification is usually performed by processing data from official identity documents, like national identity cards. However, these documents often contain significantly more information than the verifying party needs to know, including information that should stay private. Disclosing this information is a significant privacy and security risk for the user. Traditional work has designed selective disclosure and zero-knowledge proof protocols for such use cases. However, because these require a complete reimplementation, recall and redistribution of existing identity documents, they have never been adopted on a large scale. More recent work has focused on creating zero-knowledge proofs from existing identity documents like the US passport or specific US driver licenses. In this article, we propose an R1CS protocol to efficiently parse and extract fields from existing European National Identity Cards, with an implementation for the Belgian BeID. The protocol is able to prove correct extraction of a date-of-birth field in 22 seconds on a consumer device, with verification taking 230 milliseconds. With this, we aim to provide EU citizens with a practical solution to the privacy and security risks that arise when one has to prove their authenticity or authority to a third party.
    Christina Boura, Patrick Derbez, Baptiste Germon, Rachelle Heim Boissier, María Naya-Plasencia
    ePrint Report
    Recently, two independent differential attacks on SPEEDY-7-192 were proposed by Boura et al. and by Beyne and Neyt. Both works present, for the first time, a valid differential attack on SPEEDY-7-192 with time complexities of $2^{186.36}$ and $2^{185}$ respectively. In this note, by extending the search space for 1-round trails, we propose a new differential attack on SPEEDY-7-192 with both data and time complexity of $2^{174.80}$. This improves upon both previous attacks by more than a factor of $2^{10}$.