International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

16 February 2024

Yilei Chen, Xinyu Mao
ePrint Report
Universal computational extractors (UCEs), introduced by Bellare, Hoang, and Keelveedhi [BHK13], can securely replace random oracles in various applications, including KDM-secure encryption, deterministic encryption, RSA-OAEP, etc. Despite their usefulness, constructing UCE in the standard model is challenging. The only known positive result is given by Brzuska and Mittelbach [BM14], who construct UCE with strongly computationally unpredictable one-query sources assuming indistinguishability obfuscation (iO) and the existence of point obfuscators with auxiliary input (AIPO); they also construct UCE with $q$-query sources assuming iO and composable AIPO. On the other hand, Brzuska, Farshim, and Mittelbach [BFM14] show that the most potent version of UCE does not exist, assuming the existence of iO.

In this paper, we construct UCE with strongly computationally unpredictable one-query sources from lattice assumptions based on the GGH15 encodings [GGH15], without using iO. Security is proven under the following assumptions: (1) LWE with subexponential hardness; (2) evasive LWE, which is a new assumption proposed by Wee [Wee22]; (3) the existence of AIPO in NC1. Our UCE directly implies a universal hardcore function that outputs a polynomial number of bits, giving the first lattice-based universal hardcore function without using iO. We also put forth a new primitive called obliviously programmable function as an intermediate abstraction; it makes our analysis more modular and could be of independent interest.
Nir Bitansky, Nathan Geier
ePrint Report
In an (α,β)-weak non-interactive zero knowledge (NIZK), the soundness error is at most α and the zero-knowledge error is at most β. Goyal, Jain, and Sahai (CRYPTO 2019) show that if α+β<1 for some constants α,β, then (α,β)-weak NIZK can be turned into fully-secure NIZK, assuming sub-exponentially-secure public-key encryption. We revisit the problem of NIZK amplification:

– We amplify NIZK arguments assuming only polynomially-secure public-key encryption, for any constants α+β<1.

– We amplify NIZK proofs assuming only one-way functions, for any constants α+β<1.

– When the soundness error α is negligible to begin with, we can also amplify NIZK arguments assuming only one-way functions.

Our results are based on the hidden-bits paradigm, and can be viewed as a reduction from NIZK amplification to the better understood problem of pseudorandomness amplification.
Sri Aravinda Krishnan Thyagarajan, Ke Wu, Pratik Soni
ePrint Report
Cleve's celebrated result (STOC'86) showed that a strongly fair multi-party coin-toss is impossible in the presence of majority-sized coalitions. Recently, however, a fascinating line of work studied a relaxed fairness notion called \emph{game-theoretic fairness}, which guarantees that no coalition should be incentivized to deviate from the prescribed protocol. A sequence of works has explored the feasibility of game-theoretic fairness for \emph{two-sided} coin-toss, and indeed demonstrated feasibility in the dishonest majority setting under standard cryptographic assumptions. In fact, the recent work of Wu, Asharov, and Shi (EUROCRYPT'22) completely characterized the regime where game-theoretic fairness is feasible. However, this line of work is largely restricted to two-sided coin-toss, and more precisely to a \emph{uniform} coin-toss (i.e., Bernoulli with parameter $1/2$). The only exceptions are the works on game-theoretically fair leader election, which can be viewed as a special case of uniform $n$-sided coin-toss where $n$ is the number of parties.

In this work, we \emph{initiate} the comprehensive study of game-theoretic fairness for multi-party \emph{sampling from general distributions}. In particular, for the case of $m$-sided \emph{uniform} coin-toss we give a nearly complete characterization of the regime in which game-theoretic fairness is feasible. Interestingly, contrary to standard fairness notions in cryptography, the composition of game-theoretically fair two-sided coin-toss protocols does not necessarily yield game-theoretically fair multi-sided coins. To circumvent this, we introduce new techniques compatible with game-theoretic fairness. In particular, we give the following results:

- We give a protocol from standard cryptographic assumptions that achieves game-theoretic fairness for uniform $m$-sided coin-toss against half- or more-sized adversarial coalitions.

- To complement our protocol, we give a general impossibility result that establishes the optimality of our protocol for a broad range of parameters modulo an additive constant. Even in the worst case, the gap between our protocol and our impossibility result is only a small constant multiplicative factor.

- We also present a game-theoretically fair protocol for \emph{any} efficiently sampleable $m$-outcome distribution in the dishonest majority setting. For instance, even for the case of $m=2$ (i.e., two-sided coin-toss), our result implies a game-theoretically fair protocol for an \emph{arbitrary} Bernoulli coin. In contrast, the work of Wu, Asharov, and Shi only focussed on a Bernoulli coin with parameter $1/2$.
Clémence Chevignard, Pierre-Alain Fouque, André Schrottenloher
ePrint Report
This paper focuses on the optimization of the number of logical qubits in Shor's quantum factoring algorithm. As in previous works, we target the implementation of the modular exponentiation, which is the most costly component of the algorithm, both in qubits and operations.

In this paper, we show that using only $o(n)$ work qubits, one can obtain the first bit of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to obtain a quantum factoring algorithm requiring only $n/2 + o(n)$ qubits in the case of an $n$-bit RSA modulus, while current envisioned implementations require about $2n$ qubits.

Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. Since it is completely classical, we have implemented and tested it. Among possible trade-offs, we can reach a gate count $\mathcal{O}(n^3)$ for a depth $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). Preliminary logical resource estimates suggest that this circuit could be engineered to use fewer than 1700 qubits and $2^{36}$ Toffoli gates, and require 60 independent runs to factor an RSA-2048 instance.
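For a rough sense of scale, the headline figures above can be sanity-checked with elementary arithmetic for the RSA-2048 case; the Python sketch below only restates numbers quoted in the abstract and is not a resource estimate of its own.

    n = 2048                 # bit length of the RSA modulus
    print(2 * n)             # 4096: roughly the qubit count of currently envisioned implementations
    print(n // 2)            # 1024: the leading n/2 term of the new n/2 + o(n) qubit count
    print(2 ** 36)           # 68719476736: the quoted Toffoli-gate budget, about 6.9e10 gates
    # The reported circuit fits in fewer than 1700 qubits, i.e. the o(n) overhead
    # accounts for the gap between 1024 and ~1700 at this modulus size.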
Dimitris Mouris, Christopher Patton, Hannah Davis, Pratik Sarkar, Nektarios Georgios Tsoutsos
ePrint Report
Insight into user experience and behavior is critical to the success of large software systems and web services. Yet gaining such insights, while preserving user privacy, is a significant challenge. Recent advancements in multi-party computation have made it practical to compute verifiable aggregates over secret shared data. One important use case for these protocols is heavy hitters, where the servers compute the most popular inputs held by the users without learning the inputs themselves. The Poplar protocol (IEEE S&P 2021) focuses on this use case, but cannot support other aggregation tasks. Another such protocol, Prio (NSDI 2017), supports a wider variety of statistics but is unsuitable for heavy hitters.

We introduce Mastic, a flexible protocol for private and verifiable general-purpose statistics based on function secret sharing and zero-knowledge proofs on secret shared data. Mastic is the first to solve the more general problem of weighted heavy-hitters, enabling new use cases not supported by Prio or Poplar. In addition, Mastic allows grouping general-purpose metrics by user attributes, such as their geographic location or browser version, without sacrificing privacy or incurring high performance costs, which is a major improvement over Prio. We demonstrate Mastic's benefits with two real-world applications, private network error logging and browser telemetry, and compare our protocol with Prio and Poplar on a wide area network. Overall, we report over an order of magnitude performance improvement over Poplar for heavy hitters and a $1.5-2\times$ improvement over Prio for attribute-based metrics.
John Preuß Mattsson
ePrint Report
One-way key chains or ratchets play a vital role in numerous important security protocols such as TLS 1.3, QUIC, Signal, MLS, EDHOC, and OSCORE. Despite the crucial role they play, very little is known about their security properties. This paper categorizes and examines different key chain constructions, offering a comprehensive overview of their security. Our analysis reveals notable distinctions among the number of collisions occurring within chains, between chains, and between a chain and a random set. Notably, the type of key chain used in protocols such as TLS 1.3 and Signal exhibits a significant number of weak keys, an unexpectedly high rate of key collisions surpassing birthday attack expectations, and a predictable shrinking key space susceptible to novel Time-Memory Trade-Off (TMTO) attacks with complexity $\le N^{1/3}$, which is well within the capabilities of current supercomputers and distributed systems. Consequently, the security level provided by, e.g., TLS 1.3 is significantly lower than anticipated based on key sizes. To address these concerns, we analyze the aforementioned protocols and provide numerous concrete recommendations for enhancing their security, as well as guidance for future security protocol design.
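For readers unfamiliar with the object being analyzed, the sketch below shows the simplest form of a one-way key chain: each key is obtained by applying a one-way function to the previous one. It is only an illustrative toy (SHA-256 stands in for the protocol-specific key derivation functions, and the labels are made up), not the exact constructions analyzed in the paper, but it exhibits the core effect studied above: iterating a non-bijective function shrinks the set of reachable keys as the chain grows.

    import hashlib

    def next_chain_key(key: bytes) -> bytes:
        # One step of a hash-based one-way key chain: the next key is a
        # one-way function of the current key, so a later key does not
        # reveal earlier ones.
        return hashlib.sha256(b"next-key" + key).digest()

    def chain_key(seed: bytes, steps: int) -> bytes:
        # Walk the chain forward from the seed. Because the step function is
        # not a permutation, long chains concentrate on a shrinking key space,
        # which is what enables the collision and TMTO observations above.
        k = seed
        for _ in range(steps):
            k = next_chain_key(k)
        return k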
Pierre Pébereau
ePrint Report
In this work, we study the singular locus of the varieties defined by the public keys of UOV and VOX, two multivariate quadratic signature schemes submitted to the additional NIST call for signature schemes. Singular points do not exist for generic quadratic systems, which enables us to introduce two new algebraic attacks against UOV-based schemes. We show that they can be seen as an algebraic variant of the Kipnis-Shamir attack, which can be obtained in our framework as an enumerative approach to solving a bihomogeneous modeling of the computation of singular points. This allows us to highlight some heuristics implicitly relied on by the Kipnis-Shamir attack.
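As background only (this formulation is not taken from the paper): under the simplifying assumption that the $m$ public polynomials $p_1, \dots, p_m$ cut out a variety of codimension $m$, a point $x$ of that variety is a singular point exactly when the Jacobian matrix drops rank there, i.e. $$p_1(x) = \cdots = p_m(x) = 0 \quad \text{and} \quad \mathrm{rank}\left(\frac{\partial p_i}{\partial x_j}(x)\right)_{i,j} < m.$$ For a generic quadratic system this locus is empty, which is why the presence of singular points on UOV-shaped public keys gives the attacker something to target.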

We give new attacks for UOV$^{\hat +}$ and VOX targeting singular points of the public key equations. Our attacks lower the security of the schemes, both asymptotically and in the number of gates, showing in particular that the parameter sets proposed for these schemes do not meet the NIST security requirements. More precisely, we show that the security of UOV$^{\hat +}$ was overestimated by factors $2^{22}, 2^{36}, 2^{59}$ for security levels $I, III, V$ respectively. We conclude the attack on VOX by showing that an attacker can perform a full key recovery from one vector obtained in the previous attacks.
Mustafa Khairallah, Srinivasan Yadhunathan, Shivam Bhasin
ePrint Report
In this paper, we propose a leakage-resilient pseudo-random number generator (PRNG) design that leverages the rekeying techniques of the PSV-Enc encryption scheme and the superposition property of the Superposition-Tweak-Key (STK) framework. The random seed of the PRNG is divided into two parts: one part is used as an ephemeral key that changes every two calls to a tweakable block cipher (TBC), and the other part is used as a static long-term key. Using the superposition property, we show that it is possible to eliminate observable leakage by only masking the static key. Thus, our proposal itself can be seen as a superposition of masking and rekeying. We show that our observations can be used to design an unpredictable-with-leakage PRNG as long as the static key is protected and the ephemeral key cannot be attacked with 2 traces. Our construction enjoys better theoretical security arguments than PSV-Enc: a better time-data trade-off and better leakage assumptions, based on the recently popularized notion of unpredictability with leakage. We verify our proposal by performing Test Vector Leakage Assessment (TVLA) on an STK-based TBC (Deoxys) operated with a fixed key and a dynamic random tweak. Our results show that while the protection of the static key is non-trivial, it only requires $\approx 10\%$ overhead for first-order protection in the most conservative setting, unlike traditional masking, which may require significant overheads of $300\%$ or more.
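To make the split-seed structure above concrete, here is a deliberately simplified sketch. It is a structural illustration only: HMAC-SHA256 stands in for the tweakable block cipher, and the exact tweak schedule and the rule for refreshing the ephemeral key every two calls are illustrative guesses, not the construction, masking, or STK superposition mechanism from the paper.

    import hmac, hashlib

    def tbc(key: bytes, tweak: bytes, block: bytes) -> bytes:
        # Stand-in for a tweakable block cipher, used only so the sketch runs.
        return hmac.new(key, tweak + block, hashlib.sha256).digest()

    class SplitSeedPRNG:
        def __init__(self, seed: bytes):
            assert len(seed) >= 64
            self.static_key = seed[:32]       # long-term part of the seed (to be protected)
            self.ephemeral_key = seed[32:64]  # short-lived part, refreshed as the PRNG runs
            self.calls = 0

        def next_block(self) -> bytes:
            out = tbc(self.ephemeral_key, self.calls.to_bytes(8, "big"), b"out")
            self.calls += 1
            if self.calls % 2 == 0:
                # Illustrative rekeying: after every two calls, derive a fresh
                # ephemeral key with the help of the static key.
                self.ephemeral_key = tbc(self.static_key,
                                         self.calls.to_bytes(8, "big"), b"rekey")
            return out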
David Du Pont, Jonas Bertels, Furkan Turan, Michiel Van Beirendonck, Ingrid Verbauwhede
ePrint Report
Fully Homomorphic Encryption (FHE) enables computation on encrypted data, holding immense potential for enhancing data privacy and security in various applications. Presently, FHE adoption is hindered by slow computation times, caused by data being encrypted into large polynomials. Optimized FHE libraries and hardware acceleration are emerging to tackle this performance bottleneck. Often, these libraries implement the Number Theoretic Transform (NTT) algorithm for efficient polynomial multiplication. Existing implementations mostly focus on the case where the polynomials are defined over a power-of-two cyclotomic ring, which makes it possible to use the simpler Cooley-Tukey NTT. However, generalized cyclotomics have several benefits in the BGV FHE scheme, including more SIMD plaintext slots and a simpler bootstrapping algorithm.

We present a hardware architecture for the NTT targeting generalized cyclotomics within the context of the BGV FHE scheme. We explore different non-power-of-two NTT algorithms, including the Prime-Factor, Rader, and Bluestein NTTs. Our most efficient architecture targets the 21845-th cyclotomic polynomial --- a practical parameter for BGV --- with ideal properties for use with a combination of the Prime-Factor and Rader algorithms. The design achieves high throughput with optimized resource utilization, by leveraging parallel processing, pipelining, and reusing processing elements. Compared to Wu et al.'s VLSI architecture of the Bluestein NTT, our approach showcases 2$\times$ to 5$\times$ improved throughput and area efficiency. Simulation and implementation results on an AMD Alveo U250 FPGA demonstrate the feasibility of the proposed hardware design for FHE.
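The parameter choice mentioned above can be checked with plain arithmetic (this check is an illustration, not text from the paper): the index 21845 splits into pairwise coprime prime factors, which is exactly the setting in which the Prime-Factor algorithm decomposes one long transform into shorter coprime-length ones.

    m = 21845
    assert m == 5 * 17 * 257              # three distinct (Fermat) primes, pairwise coprime
    phi = (5 - 1) * (17 - 1) * (257 - 1)  # Euler totient of m
    print(phi)                            # 16384 == 2**14, the degree of the 21845-th cyclotomic polynomial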
Pedro Branco, Nico Döttling, Akshayaram Srinivasan, Riccardo Zanotto
ePrint Report
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest.

Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that the $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$ and in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE).

In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Alexander Bienstock, Sarvar Patel, Joon Young Seo, Kevin Yeo
ePrint Report
In this paper, we study two problems: oblivious compression and decompression of ciphertexts. In oblivious compression, a server holds a set of ciphertexts with a subset of encryptions of zeroes whose positions are known only to the client. The goal is for the server to effectively compress the ciphertexts obliviously, while preserving the non-zero plaintexts and without learning the plaintext values. For oblivious decompression, the client instead succinctly encodes a sequence of plaintexts such that the server may decode encryptions of all plaintext values, but the zeroes may be replaced with arbitrary values. We present solutions to both problems that achieve lossless compression only 5% larger than the optimal minimum, using only additive homomorphism. The crux of both algorithms involves embedding ciphertexts as random linear systems that are efficiently solvable.

Using our compression schemes, we obtain state-of-the-art schemes for batch private information retrieval (PIR) where a client wishes to privately retrieve multiple entries from a server-held database in one query. We show that our compression schemes may be used to reduce communication by up to 30% for batch PIR in both the single- and two-server settings.

Additionally, we study labeled private set intersection (PSI) in the unbalanced setting where one party's set is significantly smaller than the other party's set and each entry has associated data. By utilizing our novel compression algorithm, we present a protocol with a 65-88% reduction in communication and comparable computation compared to prior works.
Michele Battagliola, Andrea Flamini
ePrint Report
The recent surge of distributed technologies has caused increasing interest in threshold signature protocols, which peaked with the recent NIST First Call for Multi-Party Threshold Schemes.

Since its introduction, the Fiat-Shamir Transform has been the most popular way to design standard digital signature schemes. In this work, we translate the Fiat-Shamir Transform into a multi-party setting, building a framework that seeks to be an alternative, easier way to design threshold digital signatures. We do so by introducing the concepts of threshold identification scheme and threshold sigma protocol, and by showing necessary and sufficient conditions to prove the security of the threshold signature schemes derived from them.

Lastly, we show a practical application of our framework providing an alternative security proof for Sparkle, a recent threshold Schnorr signature. In particular, we consider the threshold identification scheme underlying Sparkle and prove the security of the signature derived from it.

We show that using our framework, the effort required to prove the security of threshold signatures might be drastically lowered. In fact, instead of explicitly reducing its security to the security of a hard problem, it is enough to prove some properties of the underlying threshold sigma protocol and threshold identification scheme. Then, by applying the results proven in this paper, the derived threshold signature is guaranteed to be secure.
Charlotte Lefevre
ePrint Report
This note examines a nuance in the methods employed for counting the adversarial online complexity in the security proofs of duplex-based modes, with a focus on authenticated encryption. A recent study by Gilbert et al. reveals an attack on a broad class of duplex-based authenticated encryption modes. In particular, their approach to quantifying the adversarial online complexity, which captures realistic attack scenarios, includes certain queries in the count that are not counted in the security proofs. This note analyzes these differences and concludes that the attack of Gilbert et al., for certain parameter choices, matches the security bound.
Elijah Pelofske
ePrint Report
Quantum devices offer a highly useful function: generating random numbers non-deterministically, since the measurement of a quantum state is not deterministic. This means that quantum devices can be constructed that prepare qubits in a uniform superposition and then measure the state of those qubits. If the preparation of the qubits in a uniform superposition is unbiased, then quantum computers can be used to create high-entropy, secure random numbers. Typically, preparing and measuring such quantum systems requires more time compared to classical pseudo-random number generators (PRNGs), which are inherently deterministic algorithms. Therefore, the typical use of quantum random number generators (QRNGs) is to provide high-entropy secure seeds for PRNGs. Quantum annealing (QA) is a type of analog quantum computation that is a relaxed form of adiabatic quantum computation and uses quantum fluctuations in order to search for ground state solutions of a programmable Ising model. Here we present extensive experimental random number results from a D-Wave 2000Q quantum annealer, totaling over 20 billion bits of QA measurements, which is significantly more than previous D-Wave QA random number generator studies. Current quantum annealers are susceptible to noise from environmental sources and calibration errors, and are not in general unbiased samplers. Therefore, it is of interest to quantify whether noisy quantum annealers can effectively function as unbiased QRNGs. The amount of data collected from the quantum annealer allows a comprehensive analysis of the random bits to be performed using the NIST SP 800-22 Rev 1a test suite, as well as min-entropy estimates from NIST SP 800-90B. The randomness tests show that the generated random bits from the D-Wave 2000Q are biased and are not unpredictable random bit sequences. With no server-side sampling post-processing, the $1$ microsecond annealing time measurements had a min-entropy of $0.824$.
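To put the reported figure in perspective: min-entropy per bit translates directly into the probability of the most likely outcome via $H_{\min} = -\log_2 p_{\max}$. The short computation below simply applies that standard definition to the number quoted above; it adds no new result.

    h_min = 0.824               # reported min-entropy per bit (1 microsecond annealing time)
    p_max = 2 ** (-h_min)       # probability of the most likely bit value implied by H_min
    print(round(p_max, 3))      # ~0.565, noticeably above the 0.5 of an ideal unbiased source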
Tao Zhang, Shang Shi, Md Habibur Rahman, Nitin Varshney, Akshay Kulkarni, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
Battery-operated applications have become ubiquitous worldwide, ranging from power-intensive electric cars down to low-power smart terminals and embedded devices. Meanwhile, serious incidents around batteries such as swelling, fire, and explosion have been witnessed, resulting in enormous financial losses and even loss of life. Such incidents used to be attributed to unintentional design mistakes or insufficient quality inspection by the original battery manufacturers. However, this attribution is no longer fair given the convoluted battery supply chain and the extended cyber-physical attack surface of battery management systems (BMS). In this paper, we focus on the authenticity and assurance of prevalent (Li-ion) battery instances. We examine battery authenticity by modeling the contemporary battery supply chain and discussing practical concerns, such as rewrapping and recycling, in depth at each stage. As for battery assurance, we consider emerging attack vectors that can compromise the confidentiality, integrity, and availability of the microelectronic BMS. Real-world attack examples are highlighted to reflect the capabilities of advanced adversaries. Moreover, promising countermeasures for detecting and avoiding threats to both battery authenticity and assurance are presented, so that researchers gain insight into how the problem could be addressed or alleviated. We also provide our perspectives on the vulnerabilities of battery systems and their consequent impacts, as well as our view of potential countermeasure techniques.

13 February 2024

Warsaw, Poland, 20 June - 21 June 2024
Event Calendar
Event date: 20 June to 21 June 2024
Submission deadline: 8 April 2024
Notification: 6 May 2024
IFT | Logos | Vac
Job Posting
Vac builds public good protocols for the decentralized web. We do applied research, on the basis of which we build protocols, libraries, and publications.

This role is within the Vac Nescience unit, which develops Nescience, a zkVM leveraging hiding properties.

The role

In this role, you will be responsible for implementing and analysing components of zero-knowledge argument systems and architectures for private computation. The ideal candidate should be well-versed in zero-knowledge circuits written in Rust, with the ability to adapt to evolving research needs.

Your responsibilities will include implementing zero-knowledge circuits and writing comprehensive specifications. Additionally, your role will involve measuring the performance of circuits, while also possessing the skills to debug and optimize as needed.

Join us in pushing the boundaries of private computation technology and contribute to groundbreaking advancements in the field of zkVMs.

Closing date for applications:

Contact: Maya

More information: https://grnh.se/0ee4300f1us

SandboxAQ
Job Posting

We have postdoc and PhD residency positions available at SandboxAQ [1]. We seek people interested in doing research in the areas of post-quantum cryptography, privacy, and machine learning applied to cybersecurity. The positions are remote, but allow for travel to collaborate with team members. The postdoc residencies are initially for two years, with the option to extend to up to three years by mutual agreement. PhD residencies are up to one year.

We are open to strong candidates who reinforce the team's existing expertise as well as candidates who extend it.

We are committed to creating an inclusive culture where we have zero tolerance for discrimination. We invest in our employees' personal and professional growth.

Learn more about what we’ve been doing so far by checking out our publications page [2] or the individual DBLP pages of our permanent researchers listed below for each of the teams associated with these residencies.

[1] www.sandboxaq.com [2] pub.sandboxaq.com

Rolling deadline. More information: https://www.sandboxaq.com/careers.

PQC members’ information:
  • Carlos Aguilar Melchor: https://dblp.org/pid/71/4606.html
  • Martin Albrecht: https://dblp.org/pid/92/7397.html
  • Nina Bindel: https://dblp.org/pid/167/3021.html
  • James Howe: https://dblp.org/pid/163/8680.html
  • Andreas Hülsing: https://dblp.org/pid/27/1744.html
Privacy members’ information:
  • Nicolas Gama: https://dblp.org/pid/49/4575.html
  • Sandra Guasch Castelló: https://dblp.org/pid/86/8292.html
ML for Cybersecurity contacts:
  • Raphael Labaca-Castro: raphael.labaca@sandboxaq.com
  • Parth Mishra: parth.mishra@sandboxaq.com

Closing date for applications:

Contact:

  • PQC: martin.albrecht@sandboxaq.com and james.howe@sandboxaq.com
  • Privacy: nicolas.gama@sandboxaq.com and sandra.guasch@sandboxaq.com
  • ML: raphael.labaca@sandboxaq.com and parth.mishra@sandboxaq.com

More information: https://www.sandboxaq.com/careers

University of Wollongong, Australia
Job Posting
We are looking for a self-motivated postdoc in key-evolving cryptography and blockchain, supported by an ARC Discovery project. The project is planned to start in July 2024 and run for four years. The candidate is required to have a PhD in a relevant research field such as cryptography, cybersecurity, computer science, or blockchain. Please send your CV and related documents to the contact below.

Closing date for applications:

Contact: Dr Yannan Li (first_name@uow.edu.au)

University of Wollongong, Australia
Job Posting
We are looking for self-motivated students in blockchain, especially in blockchain regulation. The position is planned to start in July 2024 and run for a 3-year term (which can be extended). Applicants should have a background in computer science or cryptography, hold a Master's degree in computer science or equivalent, and have excellent English, communication, and teamwork skills. Experience in research and knowledge of zero-knowledge proofs will be a big plus. If you are interested in this position, please don't hesitate to send your CV and transcript to the contact below.

Closing date for applications:

Contact: Dr Yannan Li (first_name@uow.edu.au)
