International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

  • via email
  • via RSS feed

31 January 2025

Reuven Yakar, Avishai Wool, Eyal Ronen
ePrint Report
Overclocking is a supported functionality of Nvidia GPUs and a common performance-enhancement practice. However, overclocking poses a danger for cryptographic applications. As the temperature in the overclocked GPU increases, spurious computation faults occur. Coupled with well-known fault attacks against RSA implementations, one can expect such faults to allow compromising RSA private keys during decryption or signing.

We first validate this hypothesis: we evaluate two commercial-grade GPU-based implementations of RSA within OpenSSL (called RNS and MP) under a wide range of overclocking levels and temperatures, and demonstrate that both implementations are vulnerable.

However, and more importantly, we show for the first time that even if the GPU is benignly overclocked to a seemingly "safe" rate, a successful attack can still be mounted, over the network, by simply sending requests at an aggressive rate to increase the temperature. Hence, setting any level of overclocking on the GPU is risky.

Moreover, we observe a huge difference in the implementations' vulnerability: the rate of RSA breaks for RNS is 4 orders of magnitude higher than that of MP. We attribute this difference to the implementations' memory usage patterns: RNS makes heavy use of the GPU's global memory, which is accessed via both the Unified (L1) cache and the L2 cache; MP primarily uses "shared" on-chip memory, which is local to each GPU Streaming MultiProcessor (SM) and is uncached, utilizing the memory banks used for the L1 cache. We believe that the computation faults are caused by reads from the global memory, which under a combination of overclocking, high temperature and high memory contention, occasionally return stale values.
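For context, the well-known fault attack referenced above is the Boneh-DeMillo-Lipton ("Bellcore") attack on RSA-CRT: a single faulty signature suffices to factor the modulus. A minimal sketch (the function name and interface are ours, purely illustrative):

    from math import gcd

    def factor_from_faulty_crt_signature(n, e, msg, faulty_sig):
        # In RSA-CRT, a fault in exactly one of the two half-exponentiations
        # yields a signature that is correct modulo one prime factor of n but
        # wrong modulo the other, so gcd(sig^e - msg, n) exposes that factor.
        p = gcd(pow(faulty_sig, e, n) - msg, n)
        return p if 1 < p < n else None  # None: the fault did not split n

Here msg is the (padded) message representative that was signed.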
George Kadianakis, Arantxa Zapico, Hossein Hafezi, Benedikt Bunz
ePrint Report
Accumulation schemes are powerful primitives that enable distributed and incremental verifiable computation with less overhead than recursive SNARKs. However, existing schemes with constant-size accumulation verifiers suffer from linear-sized accumulators and deciders, leading to linear-sized proofs that are unsuitable in distributed settings. Motivated by the need for bandwidth-efficient accountable voting protocols, (I) we introduce KZH, a novel polynomial commitment scheme, and (II) KZH-fold, the first sublinear accumulation scheme, in which the verifier performs only $3$ group scalar multiplications while the accumulator size and decider time are $O(n^{1/2})$. Our scheme generalizes to achieve accumulator and decider complexity of $k \cdot n^{1/k}$ with verifier complexity $k$. Using the BCLMS compiler, (III) we build an IVC/PCD scheme with sublinear proofs and decider. (IV) Next, we propose a new approach to non-uniform IVC, where the cost of proving a step is proportional only to the size of the step's instruction circuit and, unlike previous approaches, the witness size is not linear in the number of instructions. (V) Leveraging these advancements, we demonstrate the power of KZH-fold by implementing an accountable voting scheme using a novel signature aggregation protocol supporting millions of participants, significantly reducing communication overhead and verifier time compared to BLS-based aggregation. We implemented and benchmarked our protocols; KZH-fold achieves a 2000x reduction in communication and a 50x improvement in decider time over Nova when proving 2000 Poseidon hashes, at the cost of 3x the prover time.
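To unpack the stated trade-off (a direct consequence of the bounds above, not an additional claim): setting $k=2$ gives the headline $O(n^{1/2})$ accumulator size and decider time with a constant-size verifier, while $k=\log_2 n$ gives

$$k \cdot n^{1/k} \;=\; \log_2 n \cdot n^{1/\log_2 n} \;=\; 2\log_2 n,$$

i.e., logarithmic accumulators at the cost of a logarithmic-cost verifier.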
Simon Holmgaard Kamp
ePrint Report
We translate the expand-and-extract framework by Fitzi, Liu-Zhang, and Loss (PODC 21) to the asynchronous setting. While they use it to obtain a synchronous BA with $2^{-\lambda}$ error probability in $\lambda+1$ rounds, we make it work in asynchrony in $\lambda+3$ rounds. At the heart of their solution is a generalization of crusader agreement and graded agreement to any number of grades called proxcensus. They achieve graded consensus with $2^r+1$ grades in $r$ rounds by reducing proxcensus with $2s-1$ grades to proxcensus with $s$ grades in one round. The expand-and-extract paradigm uses proxcensus to expand binary inputs to $2^\lambda+1$ grades in $\lambda$ rounds before extracting a binary output by partitioning the grades using a $\lambda$ bit common coin. However, the proxcensus protocol by Fitzi et al. does not translate to the asynchronous setting without lowering the corruption threshold or using more rounds in each recursive step.
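The round count follows from a one-line recurrence: writing $g_r$ for the number of grades reachable after $r$ rounds, the one-round step from $s$ grades to $2s-1$ grades gives

$$g_r = 2g_{r-1} - 1, \qquad g_0 = 2 \quad\Longrightarrow\quad g_r = 2^r + 1,$$

matching the $2^\lambda + 1$ grades expanded in $\lambda$ rounds before the coin-based extraction.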

This is resolved by attaching justifiers to all messages: forcing the adversary to choose between being ignored by the honest parties, or sending messages with certain validity properties. Using these we define validated proxcensus and show that it can be instantiated in asynchrony with the same recursive structure and round complexity as synchronous proxcensus. In asynchrony the extraction phase incurs a security loss of one bit which is recovered by expanding to twice as many grades using an extra round of communication. This results in a $\lambda+2$ round VABA and a $\lambda+3$ round BA, both with $2^{-\lambda}$ error probability and communication complexity matching Fitzi et al.
Karthikeyan Bhargavan, Maxime Buyse, Lucas Franceschino, Lasse Letager Hansen, Franziskus Kiefer, Jonas Schneider-Bensch, Bas Spitters
ePrint Report
We present hax, a verification toolchain for Rust targeted at security-critical software such as cryptographic libraries, protocol implementations, authentication and authorization mechanisms, and parsing and sanitization code. The key idea behind hax is the pragmatic observation that different verification tools are better at handling different kinds of verification goals. Consequently, hax supports multiple proof backends, including domain-specific security analysis tools like ProVerif and SSProve, as well as general proof assistants like Coq and F*. In this paper, we present the hax toolchain and show how we use it to translate Rust code to the input languages of different provers. We describe how we systematically test our translated models and our models of the Rust system libraries to gain confidence in their correctness. Finally, we briefly overview various ongoing verification projects that rely on hax.
Nico Döttling, Jesko Dujmovic, Antoine Joux
ePrint Report
Timed cryptography has initiated a paradigm shift in the design of cryptographic protocols: Using timed cryptography we can realize tasks fairly, which is provably out of range of standard cryptographic concepts. To a certain degree, the success of timed cryptography is rooted in the existence of efficient protocols based on the sequential squaring assumption.

In this work, we consider space analogues of timed cryptographic primitives, which we refer to as space-hard primitives. Roughly speaking, these notions require honest protocol parties to invest a certain amount of space and provide security against space-constrained adversaries. While inefficient generic constructions of timed primitives from strong assumptions such as indistinguishability obfuscation can be adapted to the space-hard setting, we currently lack concrete and versatile algebraically structured assumptions for space-hard cryptography. We therefore initiate the study of space-hard primitives from concrete algebraic assumptions relating to the problem of root-finding of sparse polynomials. Our motivation to study this problem is a candidate construction of VDFs by Boneh et al. (CRYPTO 2018) which is based on the hardness of inverting permutation polynomials. Somewhat anticlimactically, our first contribution is a full break of this candidate. However, we then revise this hardness assumption by dropping the permutation requirement and considering arbitrary sparse high-degree polynomials. We argue that this type of assumption is much better suited to space-hardness than to timed cryptography. We then proceed to construct both space-lock puzzles and verifiable space-hard functions from this assumption.
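To see why sparse high-degree polynomials are a natural fit here: honest evaluation is cheap in both time and space, even at astronomical degree, whereas root-finding is conjectured to require large space. A toy illustration (the prime and polynomial are ours, purely illustrative):

    # A t-sparse polynomial over F_p is just t (coefficient, exponent) pairs;
    # evaluating it costs t modular exponentiations and essentially no memory.
    p = 2**127 - 1                                      # toy prime
    f = [(3, 2**40 + 1), (7, 2**20), (11, 5), (1, 0)]   # sparse, degree 2^40+1

    def eval_sparse(f, x, p):
        return sum(c * pow(x, e, p) for c, e in f) % p

    y = eval_sparse(f, 1234567, p)  # fast forward direction; recovering x
                                    # from y is the conjectured hard direction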
Yevgeniy Dodis, Jiaxin Guan, Peter Hall, Alison Lin
ePrint Report
Everlasting (EL) privacy offers an attractive solution to the Store-Now-Decrypt-Later (SNDL) problem, where future increases in the attacker's capability could break systems which are believed to be secure today. Instead of requiring full information-theoretic security, everlasting privacy allows computationally-secure transmissions of ephemeral secrets, which are only "effective" for a limited period of time, after which their compromise is provably useless for the SNDL attacker.

In this work we revisit the everlasting privacy model of Dodis and Yeo (ITC'21), which we call Hypervisor EverLasting Privacy (HELP). HELP is a novel architecture for generating shared randomness using a network of semi-trusted servers (or "hypervisors"), trading the need to store/distribute large shared secrets for the assumptions that it is hard to: (a) simultaneously compromise too many publicly accessible ad-hoc servers; and (b) break a computationally-secure encryption scheme very quickly. While Dodis and Yeo presented good HELP solutions in the asymptotic sense, their solutions were concretely expensive and used heavy tools (like large finite fields or gigantic Toeplitz matrices).

We abstract and generalize the HELP architecture to allow for more efficient instantiations, and construct several concretely efficient HELP solutions. Our solutions use elementary cryptographic operations, such as hashing and message authentication. We also prove a very strong composition theorem showing that our EL architecture can use any message transmission method which is computationally-secure in the Universal Composability (UC) framework. This is the first positive composition result for everlasting privacy, which was otherwise known to suffer from many "non-composition" results (Müller-Quade and Unruh; J of Cryptology'10).
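As a rough illustration of the flavor of these elementary operations (our own toy sketch, not the paper's construction): a client can hash together ephemeral secrets fetched from several hypervisors, so that an SNDL attacker learns nothing unless it compromised every server, or broke the transport encryption before the secrets expired.

    import hashlib, secrets

    def combine_shares(shares):
        # Derive a key from per-server ephemeral secrets; any single
        # uncompromised, already-deleted share keeps the output unpredictable.
        h = hashlib.sha256()
        for s in shares:
            h.update(s)
        return h.digest()

    shares = [secrets.token_bytes(32) for _ in range(5)]  # one per hypervisor
    key = combine_shares(shares)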

29 January 2025

Munich, Germany, 25 June -
Event Calendar
Event date: 25 June to
Submission deadline: 31 March 2025
Notification: 30 April 2025
Cambridge, USA, 18 April 2025
Event Calendar
Event date: 18 April 2025
Submission deadline: 10 February 2025
INSA Lyon, CITI Lab (Villeurbanne, France)
Job Posting

The CITI Lab at INSA Lyon, France, is seeking a motivated PhD student to engage in pioneering research in frugal cryptography.

The research project focuses on designing and analyzing cryptographic primitives, evaluating their energy consumption in various contexts such as Internet communication and Machine Learning. The PhD candidate will also develop generic tools and methodologies to assess the energy impact of cryptographic implementations. The work aims to create secure and efficient cryptographic solutions adapted to the needs of a digital and sustainable future.

This fully funded position has a 3-year duration, with a negotiable start date.

Responsibilities

  • Collaborate with faculty and researchers to design innovative cryptographic protocols.
  • Publish research findings in leading computer science conferences and journals.
  • Participate in academic activities, including seminars, workshops, and conferences, to stay updated on advancements in the field.
  • Potentially assist in teaching duties as a teaching assistant (TA).

Requirements

  • A strong background in cryptography, with an MSc in Computer Science, Engineering, Mathematics, or a related discipline (preferred but not mandatory).
  • Excellent communication and interpersonal skills, with the ability to thrive in a collaborative research environment.
  • Strong organizational and time-management abilities to balance research, coursework, and teaching responsibilities.
  • Critical thinking and analytical skills, with fluency in technical English.
  • Proficiency in programming.

To Apply: Please submit your CV along with transcripts from both your Bachelor’s and Master’s degrees.

Closing date for applications:

Contact: Clementine Gritti (clementine.gritt(at)insa-lyon.fr)


28 January 2025

Rabiah Alnashwan, Benjamin Dowling, Bhagya Wimalasiri
ePrint Report
The rise of 5G and IoT has shifted secure communication from centralized and homogeneous to a landscape of heterogeneous mobile devices constantly travelling between myriad networks. In such environments, it is desirable for devices to securely extend their connection from one network to another, a process often referred to as a handover. In this work, we introduce the first cryptographic formalisation of secure handover schemes. We leverage our formalisation to propose path privacy, a novel security property for handovers that has hitherto remained unexplored. We further develop a syntax for secure handovers and identify security properties appropriate for secure handover schemes. Finally, we introduce a generic handover scheme that captures all the strong notions of security we have identified, combining our novel path privacy concept with other security properties characteristic of existing handover schemes, demonstrating the robustness and versatility of our framework.
Jeremiah Blocki, Seunghoon Lee
ePrint Report
In modern cryptography, relatively few instantiations of foundational cryptographic primitives are used across most cryptographic protocols. For example, elliptic curve groups are typically instantiated using P-256, P-384, Curve25519, or Curve448, while block ciphers are commonly instantiated with AES, and hash functions with SHA-2, SHA-3, or SHAKE. This limited diversity raises concerns that an adversary with nation-state-level resources could perform a preprocessing attack, generating a hint that might later be exploited to break protocols built on these primitives. It is often notoriously challenging to analyze and upper bound the advantage of a preprocessing attacker even if we assume that we have idealized instantiations of our cryptographic primitives (ideal permutations, ideal ciphers, random oracles, generic groups). Coretti et al. (CRYPTO/EUROCRYPT'18) demonstrated a powerful framework to simplify the analysis of preprocessing attacks, but they only proved that their framework applies when the cryptographic protocol uses a single idealized primitive. In practice, however, cryptographic constructions often utilize multiple different cryptographic primitives.

We verify that the framework of Coretti et al. (CRYPTO/EUROCRYPT'18) extends to settings with multiple idealized primitives, and we apply this framework to analyze the multi-user security of (short) Schnorr Signatures and the CCA-security of PSEC-KEM against preprocessing attackers in the Random Oracle Model (ROM) plus the Generic Group Model (GGM). Prior work of Blocki and Lee (EUROCRYPT'22) used complicated compression arguments to analyze the security of key-prefixed short Schnorr signatures, where the random oracle is salted with the user's public key. However, the security analysis did not extend to standardized implementations of Schnorr Signatures (e.g., BSI TR-03111 or ISO/IEC 14888-3) which do not adopt key-prefixing, but take other measures to protect against preprocessing attacks by disallowing signatures that use a preimage of $0$. Blocki and Lee (EUROCRYPT'22) left the (in)security of such "nonzero Schnorr Signature" constructions as an open question. We fully resolve this open question, demonstrating that (short) nonzero Schnorr Signatures are also secure against preprocessing attacks. We also analyze PSEC-KEM in the ROM+GGM, demonstrating that this Key Encapsulation Mechanism (KEM) is CPA-secure against preprocessing attacks.
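To make the two countermeasures concrete, here is a toy Schnorr signer over a tiny group (the parameters, names, and hash truncation are illustrative only, not a standardized implementation): key-prefixing salts the challenge hash with the public key, whereas the nonzero variant instead refuses challenges that hash to $0$.

    import hashlib, secrets

    P, Q, G = 23, 11, 2   # toy group: G has prime order Q modulo P

    def challenge(R, msg, prefix=b""):
        # Short challenge hash; key-prefixed variants set prefix to the
        # signer's public key, nonzero variants leave it empty.
        digest = hashlib.sha256(prefix + R + msg).digest()
        return int.from_bytes(digest, "big") % Q

    def sign_nonzero(x, msg):
        # Nonzero variant: no key prefix, but a zero challenge is resampled.
        while True:
            k = secrets.randbelow(Q - 1) + 1
            R = pow(G, k, P)
            e = challenge(str(R).encode(), msg)
            if e != 0:                        # the "nonzero" guard
                return e, (k + e * x) % Q

    def verify(pk, msg, sig):
        e, s = sig
        R = pow(G, s, P) * pow(pk, -e, P) % P   # equals G^k for a valid sig
        return e != 0 and e == challenge(str(R).encode(), msg)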
Jonas Bertels, Hilder V. L. Pereira, Ingrid Verbauwhede
ePrint Report
This work showcases Quatorze-bis, a state-of-the-art Number Theoretic Transform circuit for TFHE-like cryptosystems on FPGAs. It contains a novel design for modular multiplication by a constant modulo a constant modulus. This design requires no DSP units or dedicated multiplier units, nor any extra logic when compared to state-of-the-art modular multipliers. Furthermore, we present an implementation of a constant-multiplier Number Theoretic Transform design for TFHE-like schemes. Lastly, we use this Number Theoretic Transform design to implement a FINAL hardware accelerator for the AMD Alveo U55C, which improves the throughput of TFHE-like cryptosystems on FPGAs by a factor of 9.28x over Li et al.'s NFP accelerator (CHES 2024) and by 10-25% over the absolute state-of-the-art design FPT, while using one third of FPT's DSPs.
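For readers unfamiliar with the accelerated primitive: the Number Theoretic Transform is an FFT over $\mathbb{Z}_p$. A reference radix-2 version in software (a toy for exposition, in no way the paper's FPGA circuit):

    def ntt(a, p, w):
        # Cooley-Tukey radix-2 NTT over Z_p; len(a) must be a power of two
        # and w a primitive len(a)-th root of unity modulo p.
        n = len(a)
        if n == 1:
            return a
        even = ntt(a[0::2], p, w * w % p)
        odd = ntt(a[1::2], p, w * w % p)
        out, t = [0] * n, 1
        for i in range(n // 2):
            out[i] = (even[i] + t * odd[i]) % p
            out[i + n // 2] = (even[i] - t * odd[i]) % p
            t = t * w % p
        return out

    print(ntt([1, 2, 3, 4], 17, 4))  # 4 is a primitive 4th root of 1 mod 17

When the transform size and modulus are fixed, the twiddle factors t are compile-time constants, which is what makes constant-multiplier designs attractive in hardware.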
Pierrick Gaudry, Julien Soumier, Pierre-Jean Spaenlehauer
ePrint Report
We investigate the algorithmic problem of computing isomorphisms between products of supersingular elliptic curves, given their endomorphism rings. This computational problem seems to be difficult when the domain and codomain are fixed, whereas we provide efficient algorithms to compute isomorphisms when part of the codomain is built during the construction. We propose an authentication protocol whose security relies on this asymmetry. Its most prominent feature is that the endomorphism rings of the elliptic curves are not hidden. Furthermore, it does not require a trusted setup.
Andrea Basso, Giacomo Borin, Wouter Castryck, Maria Corte-Real Santos, Riccardo Invernizzi, Antonin Leroux, Luciano Maino, Frederik Vercauteren, Benjamin Wesolowski
ePrint Report
The problem of computing an isogeny of large prime degree from a supersingular elliptic curve of unknown endomorphism ring is assumed to be hard both for classical as well as quantum computers. In this work, we first build a two-round identification protocol whose security reduces to this problem. The challenge consists of a random large prime $q$ and the prover simply replies with an efficient representation of an isogeny of degree $q$ from its public key. Using the hash-and-sign paradigm, we then derive a signature scheme with a very simple and flexible signing procedure and prove its security in the standard model. Our optimized C implementation of the signature scheme shows that signing is roughly $1.8\times$ faster than all SQIsign variants, whereas verification is $1.4\times$ slower. The sizes of the public key and signature are comparable to existing schemes.
Minghang Li, Qianhong Wu, Zhipeng Wang, Bo Qin, Bohang Wei, Hang Ruan, Shihong Xiong, Zhenyang Ding
ePrint Report
BFT protocols usually suffer a waterfall-like performance degradation in the face of crash faults. Some BFT protocols avoid this sudden degradation, but only at the expense of increased communication and round complexity in fault-free scenarios. In a nutshell, existing protocols lack the adaptability needed to perform optimally under varying conditions.

We propose TockOwl, the first asynchronous consensus protocol with fault adaptability. TockOwl features quadratic communication and constant round complexity, allowing it to remain efficient in fault-free scenarios. TockOwl also possesses crash robustness, enabling it to maintain stable performance when facing crash faults. These properties collectively ensure the fault adaptability of TockOwl.

Furthermore, we propose TockOwl+ that has network adaptability. TockOwl+ incorporates both fast and slow tracks and employs hedging delays, allowing it to achieve low latency comparable to partially synchronous protocols without waiting for timeouts in asynchronous environments. Compared to the latest dual-track protocols, the slow track of TockOwl+ is simpler, implying shorter latency in fully asynchronous environments.
Henry Bambury, Phong Q. Nguyen
ePrint Report
We present a key-recovery attack on DEFI, an efficient signature scheme proposed recently by Feussner and Semaev, based on isotropic quadratic forms and borrowing from both multivariate and lattice cryptography. Our lattice-based attack is partially heuristic, but works on all proposed parameters: experimentally, it recovers the secret key in a few minutes, using fewer than ten (message, signature) pairs.
Ryan Lehmkuhl, Alexandra Henzinger, Henry Corrigan-Gibbs
ePrint Report
A private-information-retrieval (PIR) scheme lets a client fetch a record from a remote database without revealing which record it fetched. Classic PIR schemes treat all database records the same but, in practice, some database records are much more popular (i.e., commonly fetched) than others. We introduce distributional private information retrieval, a new type of PIR that can run faster than classic PIR, both asymptotically and concretely, when the popularity distribution is heavily skewed. Distributional PIR provides exactly the same cryptographic privacy as classic PIR. The speedup comes from a relaxed form of correctness: distributional PIR guarantees that in-distribution queries succeed with good probability, while out-of-distribution queries succeed with lower probability.

We construct a distributional-PIR scheme that makes black-box use of classic PIR protocols, and prove a lower bound on the server-runtime of a large class of distributional-PIR schemes. On two real-world popularity distributions, our distributional-PIR construction reduces compute costs by $5$-$77\times$ compared to existing techniques. Finally, we build CrowdSurf, an end-to-end system for privately fetching tweets, and show that distributional-PIR reduces the end-to-end server cost by $8\times$.
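One degenerate but instructive instance of this relaxed correctness (our own toy, not the paper's construction): run a classic PIR black box over only the most popular records. Every query goes to the same small database, so privacy is exactly that of the underlying PIR, while only hot-set queries succeed.

    def build_hot_pir(records, popularity, k, classic_pir_setup):
        # Keep the k most popular records; the server now scans k records
        # per query instead of len(records).
        hot = sorted(range(len(records)), key=lambda i: -popularity[i])[:k]
        position = {rec: pos for pos, rec in enumerate(hot)}  # client-side map
        server = classic_pir_setup([records[i] for i in hot])
        return server, position  # queries outside the hot set simply fail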
Xavier Bultel, Charles Olivier-Anclin
ePrint Report
Security models provide a way of formalising security properties in a rigorous way, but it is sometimes difficult to ensure that the model really fits the concept we are trying to formalise. In this paper, we illustrate this fact by showing the discrepancies between the security model of anonymity of linkable ring signatures and the security that is actually expected for this kind of signature. These signatures allow a user to sign anonymously within an ad hoc group generated from the public keys of the group members, but all their signatures can be linked together. Reading the related literature, it seems obvious that users' identities must remain hidden even when their signatures are linked, but we show that, surprisingly, almost none of the surveyed works adopts a security model that guarantees it. We illustrate this by presenting two counter-examples which are secure in most anonymity models of linkable ring signatures, but which trivially leak a signer's identity after only two signatures.

We propose a natural fix to this model, already introduced in some previous work, together with a corruption model in which the attacker can generate the keys of certain users itself; the latter seems much more coherent in a context where the group of users can be constructed in an ad hoc way at the time of signing. We believe that these two changes make the security model more realistic. Indeed, within the framework of this model, our counter-examples become insecure. Furthermore, we show that most of the schemes in the literature we surveyed appear to have been designed to achieve the security guaranteed by the corrected model, which reinforces the idea that this model is closer to the informal intuition of what anonymity should be in linkable ring signatures.
Neekon Vafa, Vinod Vaikuntanathan
ePrint Report
The symmetric binary perceptron ($\mathrm{SBP}_{\kappa}$) problem with parameter $\kappa : \mathbb{R}_{\geq1} \to [0,1]$ is an average-case search problem defined as follows: given a random Gaussian matrix $\mathbf{A} \sim \mathcal{N}(0,1)^{n \times m}$ as input where $m \geq n$, output a vector $\mathbf{x} \in \{-1,1\}^m$ such that $$|| \mathbf{A} \mathbf{x} ||_{\infty} \leq \kappa(m/n) \cdot \sqrt{m}~.$$ The number partitioning problem ($\mathrm{NPP}_{\kappa}$) corresponds to the special case of setting $n=1$. There is considerable evidence that both problems exhibit large computational-statistical gaps. In this work, we show (nearly) tight average-case hardness for these problems, assuming the worst-case hardness of standard approximate shortest vector problems on lattices.
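Spelling out the $n=1$ specialization: the input is a single Gaussian vector $\mathbf{a} \in \mathbb{R}^m$, and the task is the classical number-partitioning one,

$$\text{find } \mathbf{x} \in \{-1,1\}^m \text{ such that } \Big|\sum_{i=1}^{m} a_i x_i\Big| \leq \kappa(m)\cdot\sqrt{m}.$$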

For $\mathrm{SBP}_\kappa$, statistically, solutions exist with $\kappa(x) = 2^{-\Theta(x)}$ (Aubin, Perkins and Zdeborova, Journal of Physics 2019). For large $n$, the best that efficient algorithms have been able to achieve is a far cry from the statistical bound, namely $\kappa(x) = \Theta(1/\sqrt{x})$ (Bansal and Spencer, Random Structures and Algorithms 2020). The problem has been extensively studied in the TCS and statistics communities, and Gamarnik, Kizildag, Perkins and Xu (FOCS 2022) conjecture that Bansal-Spencer is tight: namely, $\kappa(x) = \widetilde{\Theta}(1/\sqrt{x})$ is the optimal value achieved by computationally efficient algorithms. We prove their conjecture assuming the worst-case hardness of approximating the shortest vector problem on lattices.

For $\mathrm{NPP}_\kappa$, statistically, solutions exist with $\kappa(m) = \Theta(2^{-m})$ (Karmarkar, Karp, Lueker and Odlyzko, Journal of Applied Probability 1986). Karmarkar and Karp's classical differencing algorithm achieves $\kappa(m) = 2^{-O(\log^2 m)}~.$ We prove that Karmarkar-Karp is nearly tight: namely, no polynomial-time algorithm can achieve $\kappa(m) = 2^{-\Omega(\log^3 m)}$, once again assuming the worst-case subexponential hardness of approximating the shortest vector problem on lattices to within a subexponential factor.

Our hardness results are versatile, and hold with respect to different distributions of the matrix $\mathbf{A}$ (e.g., i.i.d. uniform entries from $[0,1]$) and weaker requirements on the solution vector $\mathbf{x}$.
Benedikt Bünz, Tushar Mopuri, Alireza Shirzad, Sriram Sridhar
ePrint Report
We construct the first polynomial commitment scheme (PCS) that has a transparent setup, quasi-linear prover time, $\log N$ verifier time, and $\log \log N$ proof size, for multilinear polynomials of size $N$. Concretely, we achieve the smallest proof size among transparent PCSs, at less than $4.5$KB for $N\leq 2^{30}$. We prove that our scheme is secure entirely under falsifiable assumptions about groups of unknown order. The scheme significantly improves on the prior work of Dew (PKC 2023), which has super-cubic prover time and relies on the Generic Group Model (a non-falsifiable assumption). Along the way, we make several contributions that are of independent interest: PoKEMath, a protocol for efficiently proving that an arbitrary predicate over committed integer vectors holds; SIPA, a bulletproofs-style inner product argument in groups of unknown order; and a distillation of what prior work required from the Generic Group Model, framed as a falsifiable assumption.