IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
23 June 2025
Akasha Shafiq, Abhishek Kesarwani, Dimitrios Vasilopoulos, Paolo Palmieri
Differential privacy (DP) has emerged as a preferred solution for privacy-preserving data analysis, having been adopted by several leading Internet companies. DP is a privacy-preserving mechanism that protects against re-identification of individuals within aggregated datasets. The privacy budget $\varepsilon$ determines the trade-off between privacy and utility. Currently, for a given query, there is no widely accepted methodology for selecting $\varepsilon$ and choosing the DP mechanism that offers the best trade-off between privacy and utility. To address this gap, in this paper we propose a novel set of metrics and an easy-to-implement, step-by-step framework that facilitate the implementation of DP mechanisms on real-world datasets and guide the selection of $\varepsilon$ based on the desired privacy vs. utility trade-off. We perform experiments on three real-world datasets, aiming to identify the optimal $\varepsilon$ and the suitable mechanism (Laplace or Gaussian), per use case, for the commonly used count, sum and average queries on each dataset. Based on our experimental results, we observe that, using our metrics and framework, one can analyse noise distribution charts for multiple queries and choose the $\varepsilon$ and the DP mechanism that balance privacy and utility. We also show that the optimal $\varepsilon$ depends on the particular query, the desired accuracy, and the context in which DP is deployed, which suggests that an arbitrary, a-priori selection of $\varepsilon$ cannot provide adequate results. Our framework prioritises the plotting and visualisation of values and results in the DP analysis, making its adoption easy for a wider audience.
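As a quick illustration of the kind of analysis described above (a hedged sketch, not the authors' code or datasets), the following Python snippet applies the Laplace mechanism to a count query, whose sensitivity is 1, and reports the error of the noisy answer for a few hypothetical choices of $\varepsilon$:

    import math
    import random

    def laplace_noise(scale: float) -> float:
        """Sample from Laplace(0, scale) via inverse transform sampling."""
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def dp_count(records, predicate, epsilon):
        """Answer a count query with epsilon-DP; a count has sensitivity 1,
        so the Laplace noise scale is 1/epsilon."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Hypothetical dataset: ages of 1000 individuals (not from the paper).
    ages = [random.randint(18, 90) for _ in range(1000)]
    exact = sum(1 for a in ages if a >= 65)
    for eps in (0.1, 0.5, 1.0, 2.0):      # candidate privacy budgets
        noisy = dp_count(ages, lambda a: a >= 65, eps)
        print(f"epsilon={eps}: exact={exact}, noisy={noisy:.1f}, error={abs(noisy - exact):.1f}")

Smaller $\varepsilon$ means stronger privacy and larger expected error; plotting such errors across queries is exactly the kind of chart the proposed framework uses to guide the choice of $\varepsilon$.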
Yevgeniy Dodis, Bernardo Magri, Noah Stephens-Davidowitz, Yiannis Tselekounis
Secure messaging protocols allow users to communicate asynchronously over untrusted channels with strong guarantees of privacy, authenticity, forward secrecy, and post-compromise security. However, traditional security analyses of these protocols assume complete trust in the hardware and software of honest participants, overlooking a significant class of real-world threats known as subversion attacks. These attacks alter cryptographic algorithms to compromise security, exfiltrating secrets or creating vulnerabilities that often go undetected.
The notion of reverse firewalls (EC'15) aims to protect against subversion attacks by introducing a third party, called a "reverse firewall" (RF), which sits between a party and the outside world and modifies its outgoing and incoming messages in such a way that, even if the party's machine has been corrupted (in a way that maintains functionality), security is still preserved. Importantly, the firewall shares no private information with the parties, and parties put no more trust in the firewall than they do in the communication channel. In this work, we address the existing gap between secure messaging and subversion resilience by presenting several key contributions:
- We design the first subversion-resilient secure messaging protocol based on the model of RF. Our protocol is based on the Signal protocol---the current state-of-the-art in two-party secure messaging, though it lacks subversion resilience---and achieves subversion resilience with only constant overhead over Signal.
- We develop a subversion-resilient version of the X3DH protocol in the RF model. X3DH is a core component that facilitates secure initial key agreement in Signal's protocol.
- We introduce and formalize the notion of Continuous Key Agreement with Tamper Detection, an essential concept for subversion-resilient secure messaging. Our notion enables parties to continuously agree on keys, even in the presence of active adversaries capable of partially tampering with the key exchange transcript. We present a construction of our notion and prove its subversion resilience in the model of RF.
Alberto Leporati, Lorenzo Rovida, Wessel van Woerden
We suggest a generalization of homomorphic encryption (HE) schemes from a purely geometrical and lattice-based perspective. All current reference HE schemes are based on the ring version of the Learning with Errors (LWE) problem. In this proposal, we first investigate LWE-based cryptosystems from a lattice point of view and present a framework that allows one to obtain the same result, in geometrical terms, from any lattice — as long as it contains a sufficiently short trapdoor vector.
More precisely, we generalize the classical BGV (Brakerski, Gentry and Vaikuntanathan, ITCS '12) and GSW (Gentry, Sahai and Waters, CRYPTO '13) schemes to purely lattice-based variants, which we call Lattice-BGV and Lattice-GSW. By abstracting away the particular hardness assumption, our lattice framework can be instantiated with a broader range of lattices and hardness assumptions. For example, LWE gives a natural trapdoor for random $q$-ary lattices, and when plugged into our framework one obtains the original BGV and GSW schemes, while in this work we also consider an instantiation based on the Lattice Isomorphism Problem (LIP), leading to the first more advanced cryptographic scheme built from LIP$^*$. Our framework also gives a natural geometrical explanation of HE procedures and generalizes some properties, such as the ability to store many messages in a single ciphertext, one for each short trapdoor vector, without relying on any particular algebraic structure.
$^*$ In a concurrent work Branco, Malavolta and Maradni (ePrint 2025/993) propose an alternative LIP-based FHE construction.
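As background for the "short trapdoor vector" viewpoint, here is one standard way (an illustration, not the paper's exact formulation) to see an LWE instance as a $q$-ary lattice hiding a short vector. Let $A \in \mathbb{Z}_q^{m \times n}$ and $b = As + e \bmod q$ with a short error $e$, and consider the $(m+1)$-dimensional $q$-ary lattice
$\Lambda = \{ (x, c) \in \mathbb{Z}^m \times \mathbb{Z} : x \equiv A z + c\, b \pmod q \ \text{for some } z \in \mathbb{Z}^n \}.$
Taking $z = -s$ and $c = 1$ shows that $(e, 1)$ (with $e$ centered modulo $q$) lies in $\Lambda$ and has norm about $\|e\|$, much shorter than the Gaussian heuristic predicts for a random lattice with the same determinant $q^{m-n}$. Knowing the LWE secret $s$ is thus exactly knowing a short trapdoor vector of a random-looking $q$-ary lattice, which is the kind of object the framework above takes as its starting point.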
Seunghu Kim, Eymen Ünay, Ayse Yilmazer-Metin, Hyung Tae Lee
Sorting arrays encrypted under fully homomorphic encryption remains a fundamental challenge due to the high cost of private comparisons and the incompatibility of conventional sorting algorithms with the encrypted domain. Recently, Hong et al. (IEEE Transactions on Information Forensics and Security, 2021) proposed a $k$-way sorting network tailored to encrypted real numbers, but its reliance on multiple comparison stages incurs substantial multiplicative depth and significant bootstrapping overhead, even for modest array sizes.
In this work, we propose a novel rank-based sorting algorithm for encrypted real numbers that performs only a single comparison stage, thereby eliminating the need for bootstrapping operations. Our empirical evaluation demonstrates that the proposed method significantly outperforms the $k$-way approach for small to medium array sizes $(n\leq 1024)$, achieving a $46.91\times$ speedup at $n=256$ with a total runtime of $79$ seconds. Furthermore, we examine the recent matrix-based rank sort method by Mazzone et al. (USENIX Security '25) and show that integrating our optimized rank construction improves its efficiency. Specifically, we achieve $1.77\times$ and $2.43\times$ performance gains for $n=128$ and $n=512$, respectively.
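To convey the single-comparison-stage structure, here is a plaintext sketch of rank-based sorting (illustrative only; in the encrypted setting each comparison would be an approximate homomorphic sign evaluation and the final placement would be done obliviously, e.g. via one-hot indicator vectors, rather than by array indexing as below):

    def rank_sort(values):
        """Plaintext analogue of a rank sort with one all-pairs comparison stage."""
        n = len(values)
        # Stage 1 (the only comparison stage): all pairwise comparisons,
        # with ties broken by index so that ranks are distinct.
        less = [[(values[j], j) < (values[i], i) for j in range(n)] for i in range(n)]
        # Rank of element i = number of elements smaller than it.
        ranks = [sum(row) for row in less]
        # Place each element at its rank.
        out = [None] * n
        for i, r in enumerate(ranks):
            out[r] = values[i]
        return out

    print(rank_sort([3.2, -1.0, 7.5, 0.0]))   # [-1.0, 0.0, 3.2, 7.5]

Because every comparison happens in the same stage, the multiplicative depth of the comparison step is paid only once, which is what removes the need for bootstrapping in the regime reported above.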
Oleg Fomenko, Anton Levochko
In 2023, Srinath Setty, Justin Thaler, and Riad Wahby published a paper that describes a novel lookup argument with efficient verification called Lasso. We present a focused and accessible overview of the Lasso lookup argument, which serves as a foundational component of the Jolt ZK-VM. This article distills the core principles behind Lasso: the sum-check protocol, multilinear polynomials and their extensions, the Spark commitment, offline memory checking, and Surge, the evolution of Spark. By clarifying the underlying protocols and their relationship to innovations like Spark and Surge, we aim to provide researchers and engineers with practical insights into the cryptographic foundations powering both Lasso and the Jolt virtual machine.
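For readers meeting the sum-check protocol for the first time, the claim it verifies and its round structure can be summarized as follows (this is the standard textbook formulation, not anything specific to this article). Given an $n$-variate polynomial $g$ over a finite field $\mathbb{F}$, the prover claims that
$H = \sum_{x_1 \in \{0,1\}} \cdots \sum_{x_n \in \{0,1\}} g(x_1, \ldots, x_n).$
In round $i$ the prover sends the univariate polynomial
$g_i(X) = \sum_{x_{i+1}, \ldots, x_n \in \{0,1\}} g(r_1, \ldots, r_{i-1}, X, x_{i+1}, \ldots, x_n),$
the verifier checks that $g_i(0) + g_i(1)$ equals the previous round's claim ($H$ when $i = 1$) and responds with a random challenge $r_i \in \mathbb{F}$. After $n$ rounds the verifier is left with a single evaluation $g(r_1, \ldots, r_n)$, which in Lasso-style arguments is obtained from a commitment to a multilinear extension; this is where Spark and Surge come in.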
20 June 2025
Eunchan Park, Taeung Yoon, Hocheol Nam, Deepak Maram, Min Suk Kang
In timing-sensitive blockchain applications, such as decentralized finance (DeFi), achieving first-come-first-served (FCFS) transaction ordering among decentralized nodes is critical to prevent frontrunning attacks. Themis [CCS '23], a state-of-the-art decentralized FCFS ordering system, has become a key reference point for high-throughput fair ordering systems for real-world blockchain applications, such as rollup chains and decentralized sequencing, and has influenced the design of several subsequent proposals. In this paper, we critically analyze its core system property of practical batch-order fairness and evaluate the frontrunning-resistance claim of Themis. We present the Ambush attack, a new frontrunning technique that achieves nearly 100% success against the practical batch-order fair system with only a single malicious node and negligible attack costs. The attack creates a subtle, temporary information asymmetry among nodes, which is permitted by the system's heavily optimized communication model. A fundamental trade-off we identify is the challenge of balancing security and performance in these systems: enforcing timely dissemination of transaction information among nodes (to mitigate frontrunning) can easily lead to non-negligible network overheads (thus degrading overall throughput). We show that it is nevertheless possible to balance the two by delaying transaction dissemination to a level that remains tolerable for frontrunning mitigation while maintaining high throughput. Our evaluation demonstrates that the proposed delayed gossiping mechanism can be seamlessly integrated into existing systems with only minimal changes.
Mizuki Hayashi, Keita Emura
Gao et al. (IEEE Internet of Things Journal 2024) proposed public-key inverted-index keyword search with designated tester as an extension of public key encryption with keyword search (PEKS). In their scheme, due to the designated-tester setting, a server (the tester) holds a secret key and uses it to run the search algorithm. They proved that no information about the keyword is revealed from trapdoors under the decisional Diffie-Hellman (DDH) assumption. However, they also employed a symmetric pairing, which can be seen as a DDH solver. Thus, it is expected that information about the keyword is revealed from trapdoors, since the underlying complexity assumption does not hold. In this paper, we demonstrate two attacks against Gao et al.'s scheme in which information about the keyword is revealed from a trapdoor. The first attack requires only the server's secret key in addition to the challenge trapdoor, without any additional encryption/trapdoor queries. We remark that an adversary is not allowed to obtain the server's secret key in their security model, so our attack lies outside of that model; we therefore discuss the roles of the server and stress that our attack scenario is reasonable. The second attack does not employ the server's secret key and instead exploits the linkability of two trapdoors. In both attacks, the attack cost is just two pairing computations, which is clearly feasible.
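The fact that a symmetric pairing acts as a DDH solver, which underpins both attacks, is worth recalling explicitly (a standard observation, stated here for context only). Given a symmetric pairing $e : \mathbb{G} \times \mathbb{G} \rightarrow \mathbb{G}_T$ and a DDH instance $(g, g^a, g^b, g^c)$, one checks whether $e(g^a, g^b) = e(g, g^c)$. By bilinearity the left-hand side equals $e(g,g)^{ab}$ and the right-hand side equals $e(g,g)^{c}$, so the test succeeds exactly when $c \equiv ab$, deciding DDH with two pairing evaluations. Consequently, the DDH assumption cannot hold in a group equipped with an efficiently computable symmetric pairing, and any security proof relying on it in that group is vacuous.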
Giacomo Borin, Sofía Celi, Rafael del Pino, Thomas Espitau, Guilhem Niot, Thomas Prest
Threshold signatures enable multiple participants to collaboratively produce a digital signature, ensuring both fault tolerance and decentralization. As we transition to the post-quantum era, lattice-based threshold constructions have emerged as promising candidates. However, existing approaches often struggle to scale efficiently, lack robustness guarantees, or are incompatible with standard schemes — most notably, the NIST-standard ML-DSA.
In this work, we explore the design space of Fiat-Shamir-based lattice threshold signatures and introduce the two most practical schemes to date. First, we present an enhanced TRaccoon-based [DKM+24] construction that supports up to 64 participants with identifiable aborts, leveraging novel short secret-sharing techniques to achieve greater scalability than previous state-of-the-art methods. Second — and most importantly — we propose the first practical ML-DSA-compatible threshold signature scheme, supporting up to 6 users.
We provide full implementations and benchmarks of our schemes, demonstrating their practicality and efficiency for real-world deployment as protocol messages are computed in at most a few milliseconds, and communication cost ranges from 10.5 kB to 525 kB depending on the threshold.
Stefan Milius, Dominik Paulus, Dominique Schröder, Lutz Schröder, Julian Thomas
Message Authentication Codes (MACs) represent a fundamental symmetric key primitive, serving to ensure the authenticity and integrity of transmitted data.
As a building block in authenticated encryption and in numerous deployed standards, including TLS, IPsec, and SSH, MACs play a central role in practice.
Owing to their practical importance, MACs have been the subject of extensive research, leading to prominent schemes such as HMAC, CBC-MAC, and LightMAC.
Despite the existence of various MACs, there is still considerable interest in creating schemes that are more efficient, potentially parallelizable, or have specific non-cryptographic attributes, such as being patent-free.
In this context, we introduce an automated method for analyzing and synthesizing MAC schemes. To achieve this, we have constructed a framework that restricts the class of MACs so that it is sufficiently expressive to cover known constructions, yet still admits automated reasoning about the security guarantees of both known and new schemes. Our automated analysis has identified a novel category of MACs, termed "hybrid" MACs. These MACs operate by processing multiple blocks concurrently, with each block managed by a different, specified MAC scheme. A key finding is that in certain scenarios, the hybrid MAC marginally outperforms the simultaneous operation of the individual MACs. This improvement is attributed to the hybrid approach exploiting the strengths and compensating for the weaknesses of each distinct MAC scheme involved. Our implementation confirms that we have identified new schemes whose performance is comparable to state-of-the-art schemes and that, in some settings, appear to be slightly more efficient.
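To make the "hybrid" idea concrete, the following deliberately simple plaintext sketch splits the message blocks between two independently keyed MACs that could run in parallel and concatenates the two tags. The block partition, the use of two HMAC instances, and the tag combination are our own hypothetical choices for illustration; this is not one of the synthesized schemes, and no security claim is made for it:

    import hmac, hashlib

    BLOCK = 16  # bytes per block (hypothetical)

    def hybrid_mac(key1: bytes, key2: bytes, msg: bytes) -> bytes:
        """Toy hybrid MAC: even-indexed blocks go to one MAC, odd-indexed blocks
        to another; the two inner MACs are independent and parallelizable."""
        blocks = [msg[i:i + BLOCK] for i in range(0, len(msg), BLOCK)]
        even = b"".join(b for i, b in enumerate(blocks) if i % 2 == 0)
        odd = b"".join(b for i, b in enumerate(blocks) if i % 2 == 1)
        header = len(msg).to_bytes(8, "big")   # fix the block split in both inputs
        t1 = hmac.new(key1, header + even, hashlib.sha256).digest()
        t2 = hmac.new(key2, header + odd, hashlib.sha256).digest()
        return t1 + t2

    print(hybrid_mac(b"k" * 32, b"K" * 32, b"some message to authenticate").hex())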
Nick Aquina, Simon Rommel, Idelfonso Tafur Monroy
The Q-problem has been introduced as a new post-quantum hard problem. We present two man-in-the-middle and three key recovery attacks against the key exchange protocol based on the Q-problem. The man-in-the-middle attacks take negligible time and allow the attacker to recover the exchanged key. The most effective key recovery attack has a computational complexity of $2^{40}$. We also propose countermeasures against all attacks.
Alexander Bienstock, Leo de Castro, Daniel Escudero, Antigoni Polychroniadou, Akira Takahashi
A threshold signature is an advanced protocol that splits a secret signing key among multiple parties, allowing any subset above a threshold to jointly generate a signature. While post-quantum (PQ) threshold signatures are actively being studied --- especially in response to NIST's recent call for threshold schemes --- most existing solutions are tailored to specially designed, threshold-friendly signature schemes. In contrast, many real-world applications, such as distributed certificate authorities and digital currencies, require signatures that remain verifiable under the standardized verification procedures already in use. Given NIST's recent standardization of PQ signatures and ongoing industry deployment efforts, designing an efficient threshold scheme that interoperates with NIST-standardized verification remains a critical open problem.
In this work, we present the first efficient and scalable solution for multi-party generation of the module-lattice digital signature algorithm (ML-DSA), one of NIST's PQ signature standards. Our contributions are two-fold. First, we present a variant of the ML-DSA signing algorithm that is amenable to efficient multi-party computation (MPC) and prove that this variant achieves the same security as the original ML-DSA scheme. Second, we present several efficient and scalable MPC protocols to instantiate the threshold signing functionality. Our protocols can produce threshold signatures with as little as 100 KB (per party) of online communication per rejection-sampling round. In addition, we instantiate our protocols in the honest-majority setting, which allows us to avoid any additional public key assumptions.
The signatures produced by our protocols verify under the same implementation of ML-DSA verification for all three security levels. Thus, signatures and verification keys of our scheme are (naturally) the same size as that of ML-DSA; previous lattice-based threshold schemes could not match both of these sizes. Overall, our solution is the only method for producing threshold ML-DSA signatures compatible with NIST-standardized verification that scales to an arbitrary number of parties, without any new assumptions.
Dipayan Saha, Shams Tarek, Hasan Al Shaikh, Khan Thamid Hasan, Pavan Sai Nalluri, Md. Ajoad Hasan, Nashmin Alam, Jingbo Zhou, Sujan Kumar Saha, Mark Tehranipoor, Farimah Farahmandi
Ensuring the security of complex system-on-chips (SoCs) designs is a critical imperative, yet traditional verification techniques struggle to keep pace due to significant challenges in automation, scalability, comprehensiveness, and adaptability. The advent of large language models (LLMs), with their remarkable capabilities in natural language understanding, code generation, and advanced reasoning, presents a new paradigm for tackling these issues. Moving beyond monolithic models, an agentic approach allows for the creation of multi-agent systems where specialized LLMs collaborate to solve complex problems more effectively. Recognizing this opportunity, we introduce SV-LLM, a novel multi-agent assistant system designed to automate and enhance SoC security verification. By integrating specialized agents for tasks like verification question answering, security asset identification, threat modeling, test plan and property generation, vulnerability detection, and simulation-based bug validation, SV-LLM streamlines the workflow. To optimize their performance in these diverse tasks, agents leverage different learning paradigms, such as in-context learning, fine-tuning, and retrieval-augmented generation (RAG). The system aims to reduce manual intervention, improve accuracy, and accelerate security analysis, supporting proactive identification and mitigation of risks early in the design cycle. We demonstrate its potential to transform hardware security practices through illustrative case studies and experiments that showcase its applicability and efficacy.
Patrick Karl, Francesco Antognazza, Alessandro Barenghi, Gerardo Pelosi, Georg Sigl
A significant effort in designing and engineering post-quantum cryptosystems is currently ongoing, also as a result of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization process that started in 2016 and recently concluded with the selection of two Key Encapsulation Mechanisms (KEMs), CRYSTALS-Kyber and HQC, and three digital signatures, CRYSTALS-Dilithium, Falcon, and SPHINCS+, for standardization. In 2022, NIST launched another standardization effort for additional post-quantum digital signatures, preferably not based on the security assumptions of structured lattices, and with performance better than or equal to that of already standardized schemes (e.g., SPHINCS+). This initiative narrowed down the initial 40 candidates to 14 in October 2024, eliciting public scrutiny of their algorithms and technical evaluation of their performance figures. Among the schemes admitted to the second round of evaluation, the code-based CROSS signature scheme was praised for its speed and its noticeably smaller signatures compared to the standardized version of SPHINCS+. In this work, we present the first RTL hardware design of CROSS tailored for FPGA devices, delineating efficient implementation strategies for the critical components of the cryptographic scheme. Depending on the chosen security level, our design generates a key pair in 9 to 152 µs, signs a message in 404 µs to 5.89 ms, and verifies a signature in 322 µs to 4.38 ms on the NIST reference FPGA, a Xilinx Artix-7 device, proving competitive with other candidates in the on-ramp standardization effort, namely LESS, MEDS, MAYO, Raccoon and SDitH, and comparable to the current standard-selected ML-DSA, FN-DSA, and SLH-DSA in terms of efficiency.
Francesca Falzon, Harjasleen Malvai, Emanuel Opel
Authenticated dictionaries (ADs) enable secure lookups to a dictionary hosted by an untrusted server and are a key component of various real-world applications, including transparency systems and cryptocurrencies. Despite significant overlap in techniques for building ADs and related primitives, such as memory checkers and accumulators (i.e., authenticated sets), these relationships have yet to be formalized.
In this work, we give a rigorous treatment of ADs and prove their precise connection to the latter two cornerstone primitives. We start by laying out the minimal algorithms and security properties needed in practice and introduce a new security notion for ADs called write-committing, which requires update proofs to guarantee an exact count of changes.
We prove that any AD built from a black-box authenticated set (AS) makes at least $\Omega(\log n)$ AS calls per lookup and obeys a trade-off between lookups and updates. With optimal lookups, such a scheme requires at least $\Omega(\log n/\log\log n)$ AS calls per update. We also resolve the open question of constructing a secure AD from only black-box access to an AS and present two schemes adhering to the trade-off: one with optimal lookup overhead and the other with higher lookup complexity, but which only requires two AS calls for an update.
Finally, we make strides towards unifying memory checkers and ADs. To this end, we present two constructions for memory checkers with black-box access to an AD: one that incurs constant overhead (but needs write-committing) and a second that only requires the AD to be lookup-secure but incurs logarithmic overhead. We then give a simple AD construction using a memory checker as a black-box, with $\mathcal{O}(1)$ overhead.
Our results demonstrate the inherent limitations of ADs built from accumulators but lay the foundation for extending existing results on memory checkers and other primitives, such as vector commitments, to ADs.
Dan Boneh, Trisha Datta, Rex Fernando, Kamilla Nazirkhanova, Alin Tomescu
Let $p$ be a prime and consider a committed vector $\vec{v} = (v_1, \ldots, v_m) \in \mathbb{F}_p^m$.
We develop new techniques for succinctly proving in zero-knowledge that all the elements of $\vec{v}$ are in the range $\{0,1,\ldots,n\}$ for some $n < p$.
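As background on what such a range statement typically involves (and not as a description of this paper's techniques), recall the textbook bit-decomposition approach: to show that a committed value $v$ lies in $\{0, 1, \ldots, 2^k - 1\}$, the prover commits to bits $b_0, \ldots, b_{k-1}$ and proves
$b_j (b_j - 1) = 0 \ \text{for } j = 0, \ldots, k-1, \qquad v = \sum_{j=0}^{k-1} 2^j b_j,$
which costs $O(k)$ committed values and constraints per element. Applying this independently to every coordinate of $\vec{v}$ scales with $m \cdot k$, and batched range proofs aim to do substantially better.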
Robin Linus, Lukas Aumayr, Zeta Avarikioti, Matteo Maffei, Andrea Pelosi, Orfeas Thyfronitis Litos, Christos Stefo, David Tse, Alexei Zamyatin
A holy grail in blockchain infrastructure is a trustless bridge between Bitcoin and its second layers or other chains. We make progress toward this vision by introducing the first light-client based Bitcoin bridge. At the heart of its design lies BitVM2-core, a novel paradigm that enables arbitrary program execution on Bitcoin, combining Turing-complete expressiveness with the security of Bitcoin consensus. BitVM2-bridge advances prior approaches by reducing the trust assumption from an honest majority (t-of-n) to existential honesty (1-of-n) during setup. Liveness is guaranteed with only one rational operator, and any user can act as a challenger, enabling permissionless verification. A production-level implementation of BitVM2 has been developed and a full challenge verification has been executed on the Bitcoin mainnet.
Klaus Dohmen, Mandy Lange-Geisler
Based on a sharpening of Carmichael's theorem and Euler's theorem, we generalize RSA and CRT-RSA to arbitrary integer moduli $n > 1$, while restricting messages to integers $m$ that are regular modulo $n$, meaning that they have a John von Neumann pseudoinverse modulo $n$. We prove the correctness and completeness of our encryption and decryption method for this class of messages, and show that the restriction to regular integers is negligible, since under reasonable assumptions almost all integers are regular modulo $n$.
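As a small sketch of why RSA-style decryption goes through for regular messages (our own toy illustration, not the paper's algorithms), one can brute-force the Carmichael function $\lambda(n)$, pick $e$ coprime to it with $d = e^{-1} \bmod \lambda(n)$, and check that $m^{ed} \equiv m \pmod{n}$ for exactly those $m$ that have a pseudoinverse modulo $n$; the tiny modulus and exhaustive searches below are for illustration only:

    from math import gcd

    def carmichael(n: int) -> int:
        """Smallest L > 0 with a**L = 1 (mod n) for every a coprime to n (brute force)."""
        lam = 1
        for a in range(2, n):
            if gcd(a, n) == 1:
                order, x = 1, a % n
                while x != 1:
                    x = x * a % n
                    order += 1
                lam = lam * order // gcd(lam, order)
        return lam

    def is_regular(m: int, n: int) -> bool:
        # m is regular mod n iff it has a pseudoinverse x with m*x*m = m (mod n).
        return any(m * x * m % n == m % n for x in range(n))

    n = 45                    # deliberately not squarefree: 45 = 3^2 * 5
    lam = carmichael(n)       # lambda(45) = 12
    e = 5                     # gcd(e, lambda(n)) = 1
    d = pow(e, -1, lam)       # decryption exponent
    for m in range(n):
        if is_regular(m, n):
            assert pow(m, e * d, n) == m   # decryption recovers every regular message
    print(sum(is_regular(m, n) for m in range(n)), "of", n, "residues are regular mod", n)

Non-regular messages (here, multiples of 3 that are not multiples of 9) are in general not recovered, which is exactly why the scheme restricts messages to regular integers.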
Cameron Foreman, Lewis Wooltorton, Kevin Milner, Florian J. Curchod
Randomness extractors are algorithms that distill weak random sources into near-perfect random numbers. Two-source extractors enable this distillation by combining two independent weak random sources. Raz's extractor (STOC '05) was the first to achieve this in a setting where one source has linear min-entropy (i.e., proportional to its length), while the other has only logarithmic min-entropy in its length. However, Raz's original construction is impractical, as its computation time is polynomial of degree at least 4. Our work solves this problem by presenting an improved version of Raz's extractor with quasi-linear computation time, as well as a new analytic theorem with reduced entropy requirements. We provide comprehensive analytical and numerical comparisons of our construction with others in the literature, and we derive strong and quantum-proof versions of our efficient Raz extractor. Additionally, we offer an easy-to-use, open-source code implementation of the extractor and a numerical parameter calculation module.
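For readers unfamiliar with the two-source setting, the classical inner-product (Hadamard) extractor below shows the interface in a few lines. It is not Raz's construction and has much weaker parameters (it needs the two min-entropies to sum to more than the input length, rather than allowing one source to be only logarithmically random); it is included purely to illustrate what a two-source extractor computes:

    def inner_product_extractor(x: bytes, y: bytes) -> int:
        """One output bit: the inner product of x and y over GF(2)."""
        assert len(x) == len(y)
        acc = 0
        for a, b in zip(x, y):
            acc ^= a & b                   # XOR-accumulate the AND of each byte pair
        return bin(acc).count("1") & 1     # parity of the accumulated byte

    print(inner_product_extractor(b"\x5a\x13", b"\xc7\x99"))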
Andrew Mendelsohn, Charles Grover, Cong Ling
We propose a dimension-reducing transformation on Group Ring Learning with Errors (GRLWE) samples. We exhibit an efficiently computable isomorphism which takes samples defined over the group rings used in the construction of GRLWE to twice as many samples defined over matrix rings, in half the dimension. This is done by composing two maps: the first map is a transformation showing that the group rings used are orders in central simple algebras, and the second map takes the obtained central simple algebra to a matrix ring. When combined with lattice reduction on the resulting matrix samples, this gives an attack on the GRLWE problem. We extend this attack to other groups proposed for cryptographic use by the creators of GRLWE, and display some numerical results quantifying the effects of the transformation, using the `Lattice Estimator'. We then give a family of groups from which GRLWE-style group rings can be constructed which are immune to our attack, namely the generalized quaternion groups. Finally, we discuss the merits and vulnerabilities of a number of different forms of structured LWE.
Maria Corte-Real Santos, Jonathan Komada Eriksen, Antonin Leroux, Michael Meyer, Lorenz Panny
We present several new algorithms to evaluate modular polynomials of level $\ell$ modulo a prime $p$ on an input $j$.
More precisely, we introduce two new generic algorithms, sharing the following similarities: they are based on a CRT approach; they make use of supersingular curves and the Deuring correspondence; and, their memory requirements are optimal.
The first algorithm combines the ideas behind a hybrid algorithm of Sutherland from 2013 with a recent algorithm to compute modular polynomials using supersingular curves, introduced in 2023 by Leroux. The complexity of the resulting algorithm (which holds under several plausible heuristic assumptions) matches the $O(\ell^3 \log^{3} \ell + \ell \log p)$ time complexity of the best known algorithm by Sutherland, but has an optimal memory requirement. Our second algorithm is based on a sub-algorithm that can evaluate modular polynomials efficiently on supersingular $j$-invariants defined over $\mathbb{F}_p$, and achieves heuristic complexity quadratic in both $\ell$ and $\log j$, and linear in $\log p$. In particular, it is the first generic algorithm with optimal memory requirement to obtain a quadratic complexity in~$\ell$. Additionally, we show how to adapt our method to the computation of other types of modular polynomials, such as the one stemming from Weber's function. Finally, we provide an optimised implementation of the two algorithms detailed in this paper, though we emphasise that various modules in our codebase may find applications outside their use in this paper.
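For concreteness, the smallest instance of the objects being evaluated is the classical modular polynomial of level $\ell = 2$ (recalled here as standard background, independent of the algorithms above):
$\Phi_2(X, Y) = X^3 + Y^3 - X^2 Y^2 + 1488\,(X^2 Y + X Y^2) - 162000\,(X^2 + Y^2) + 40773375\, XY + 8748000000\,(X + Y) - 157464000000000.$
Already at this size the coefficients are large; in general $\Phi_\ell$ has on the order of $\ell^2$ coefficients of roughly $O(\ell \log \ell)$ bits each, which is why evaluating $\Phi_\ell(j, \cdot) \bmod p$ without ever writing down the full polynomial is the key to the optimal memory requirements claimed above.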