International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

17 November 2023

Marshall Ball, Yevgeniy Dodis, Eli Goldin
ePrint Report
A backdoored Pseudorandom Generator (PRG) is a PRG which looks pseudorandom to the outside world, but a saboteur can break PRG security by planting a backdoor into a seemingly honest choice of public parameters, $pk$, for the system. Backdoored PRGs became increasingly important due to revelations about NIST’s backdoored Dual EC PRG, and later results about its practical exploitability.

Motivated by this, at Eurocrypt'15 Dodis et al. [21] initiated the question of immunizing backdoored PRGs. A $k$-immunization scheme repeatedly applies a post-processing function to the output of $k$ backdoored PRGs, to render any (unknown) backdoors provably useless. For $k=1$, [21] showed that no deterministic immunization is possible, but then constructed a "seeded" $1$-immunizer either in the random oracle model or under strong non-falsifiable assumptions. As our first result, we show that no seeded $1$-immunization scheme can be black-box reduced to any efficiently falsifiable assumption.

This motivates studying $k$-immunizers for $k\ge 2$, which have the additional advantage of being deterministic (i.e., "seedless"). Indeed, prior work at CCS'17 [37] and CRYPTO'18 [7] gave supporting evidence that simple $k$-immunizers might exist, albeit in slightly different settings. Unfortunately, we show that the simple standard-model proposals of [37, 7] (including the XOR function [7]) provably do not work in our setting. On the positive side, we confirm the intuition of [37] that a (seedless) random oracle is a provably secure $2$-immunizer. On the negative side, no (seedless) $2$-immunization scheme can be black-box reduced to any efficiently falsifiable assumption, at least for a large class of natural $2$-immunizers which includes all "cryptographic hash functions."

In summary, our results show that $k$-immunizers occupy a peculiar place in the cryptographic world. While they likely exist, and can be made practical and efficient, it is unlikely one can reduce their security to a "clean" standard-model assumption.
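For illustration, here is a minimal Python sketch of the hash-based 2-immunizer idea discussed above: hash the outputs of the two candidate PRGs together, modelling the hash as a (seedless) random oracle. The PRG output blocks below are arbitrary placeholders, not the backdoored constructions studied in the paper, and, as the abstract notes, no standard-model proof should be expected for such a hash-based immunizer.

    import hashlib

    def immunize2(prg_out_1: bytes, prg_out_2: bytes) -> bytes:
        """Seedless 2-immunizer sketch: hash the two PRG outputs together.
        Modelling SHA-256 as a random oracle, the combined output remains
        pseudorandom even if one (unknown) PRG is backdoored; plain XOR of
        the two outputs would not suffice in this setting."""
        return hashlib.sha256(prg_out_1 + prg_out_2).digest()

    # Illustrative use with two arbitrary 32-byte output blocks.
    block_a = bytes(32)           # stand-in for an output block of PRG 1
    block_b = bytes(range(32))    # stand-in for an output block of PRG 2
    assert len(immunize2(block_a, block_b)) == 32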
Jelle Vos, Mauro Conti, Zekeriya Erkin
ePrint Report
Private set intersection protocols allow two parties with private sets of data to compute the intersection between them without leaking other information about their sets. These protocols have been studied for almost 20 years, and have been significantly improved over time, reducing both their computation and communication costs. However, when more than two parties want to compute a private set intersection, these protocols are no longer applicable. While extensions exist to the multi-party case, these protocols are significantly less efficient than the two-party case. It remains an open question to design collusion-resistant multi-party private set intersection (MPSI) protocols that come close to the efficiency of two-party protocols. This work is made more difficult by the immense variety in the proposed schemes and the lack of systematization. Moreover, each new work only considers a small subset of previously proposed protocols, leaving out important developments from older works. Finally, MPSI protocols rely on many possible constructions and building blocks that have not been summarized. This work aims to point protocol designers to gaps in research and promising directions, pointing out common security flaws and sketching a frame of reference. To this end, we focus on the semi-honest model. We conclude that current MPSI protocols are not a one-size-fits-all solution, and instead there exist many protocols that each prevail in their own application setting.
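As a point of reference for the two-party baseline mentioned above, here is a toy Python sketch of the classic semi-honest Diffie-Hellman-style PSI (one of the oldest two-party constructions, not any specific protocol from this systematization). The modulus and items are purely illustrative and not secure; a real instantiation would use a proper prime-order group such as an elliptic curve.

    import hashlib, math, secrets

    p = 2**127 - 1                       # Mersenne prime, demo modulus only

    def hash_to_group(item: str) -> int:
        return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % p or 1

    def key() -> int:                    # exponent invertible modulo p - 1
        while True:
            k = secrets.randbelow(p - 2) + 1
            if math.gcd(k, p - 1) == 1:
                return k

    a, b = key(), key()                  # Alice's and Bob's secret exponents
    alice_items = ["apple", "pear", "plum"]
    bob_items = ["pear", "plum", "kiwi"]

    # Alice -> Bob: her hashed items blinded with a.
    a_blind = [pow(hash_to_group(x), a, p) for x in alice_items]
    # Bob -> Alice: Alice's values re-blinded with b (same order), plus his
    # own hashed items blinded with b.
    a_double = [pow(v, b, p) for v in a_blind]
    b_blind = [pow(hash_to_group(y), b, p) for y in bob_items]
    # Alice: raise Bob's values to a; equal double-blinded values mark the
    # intersection without revealing Bob's remaining items.
    b_double = {pow(v, a, p) for v in b_blind}
    print({x for x, v in zip(alice_items, a_double) if v in b_double})  # {'pear', 'plum'}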
Hanlin Zhang, Benjamin L. Edelman, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, Boaz Barak
ePrint Report
Watermarking generative models consists of planting a statistical signal (watermark) in a model’s output so that it can be later verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes. We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used. Our attack is based on two assumptions: (1) The attacker has access to a “quality oracle” that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) The attacker has access to a “perturbation oracle” which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only be easier to satisfy over time as models grow in capabilities and modalities. We demonstrate the feasibility of our attack by instantiating it to attack three existing watermarking schemes for large language models: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.
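The generic attack can be summarised in a few lines. The Python sketch below is only a schematic rendering of the random walk described in the abstract; the two oracles (perturb and is_high_quality) and the step count are assumed inputs, for example a weaker open model used for span rewriting and a reward model used for quality scoring.

    def remove_watermark(text, perturb, is_high_quality, steps=2000):
        """Random-walk watermark scrubbing: repeatedly apply the perturbation
        oracle and keep only candidates the quality oracle still accepts.
        `perturb` and `is_high_quality` are caller-supplied oracles; nothing
        about the watermarking scheme or its key is needed."""
        current = text
        for _ in range(steps):
            candidate = perturb(current)      # e.g. rewrite a random span
            if is_high_quality(candidate):    # stay on high-quality outputs
                current = candidate
        return current  # after enough mixing, the watermark signal is gone w.h.p.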
Shiyu Li, Yuan Zhang, Yaqing Song, Hongbo Liu, Nan Cheng, Hongwei Li, Dahai Tao, Kan Yang
ePrint Report
Timed data delivery is a critical service for time-sensitive applications: it allows a sender to deliver data to a recipient such that the data only becomes accessible at a specific future time. This service is typically accomplished by employing a set of mailmen to complete the delivery mission. While this approach is commonly used, it is vulnerable to attacks from realistic adversaries, such as a greedy sender (who accesses the delivery service without paying the service charge) and malicious mailmen (who release the data prematurely without being detected). Although some research works have addressed these adversaries, most of them fail to achieve fairness.

In this paper, we formally define the fairness requirement for mailmen-assisted timed data delivery and propose a practical scheme, dubbed DataUber, to achieve fairness. DataUber ensures that honest mailmen receive the service charge, lazy mailmen do not receive the service charge, and malicious mailmen are punished. Specifically, DataUber consists of two key techniques: 1) a new cryptographic primitive, i.e., Oblivious and Verifiable Threshold Secret Sharing (OVTSS), enabling a dealer to distribute a secret among multiple participants in a threshold and verifiable way without knowing any one of the shares, and 2) a smart-contract-based complaint mechanism, allowing anyone to become a reporter to complain about a mailman's misbehavior to a smart contract and receive a reward. Furthermore, we formally prove the security of DataUber and demonstrate its practicality through a prototype implementation.
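For readers unfamiliar with the baseline primitive, the sketch below shows plain Shamir t-of-n secret sharing in Python. It is only the starting point that OVTSS strengthens: here the dealer sees every share and nothing is verifiable, which is exactly what the paper's oblivious and verifiable variant is designed to avoid.

    import secrets

    P = 2**127 - 1  # Mersenne prime; all arithmetic is over GF(P)

    def share(secret: int, n: int, t: int):
        """Plain Shamir t-of-n sharing: evaluate a random degree-(t-1) polynomial."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 from any t shares."""
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = share(123456789, n=5, t=3)
    assert reconstruct(shares[:3]) == 123456789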
Uddipana Dowerah, Aikaterini Mitrokotsa
ePrint Report
As various industries and government agencies increasingly seek to build quantum computers, the development of post-quantum constructions for different primitives becomes crucial. Lattice-based cryptography is one of the top candidates for constructing quantum-resistant primitives. In this paper, we propose a decentralized Private Stream Aggregation (PSA) protocol based on the Learning with Errors (LWE) problem. PSA allows secure aggregation of time-series data over multiple users without compromising the privacy of the individual data. In almost all previous constructions, a trusted entity is used for the generation of keys. We consider a scenario where the users do not want to rely on a trusted authority. We, therefore, propose a decentralized PSA (DPSA) scheme where each user generates their own keys without the need for a trusted setup. We give a concrete construction based on the hardness of the LWE problem both in the random oracle model and in the standard model.
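To make the aggregation goal concrete, here is a toy Python sketch of decentralized aggregation with pairwise masks that cancel in the sum. It illustrates only the general PSA functionality; the mask derivation from public indices is a placeholder (a real scheme would derive masks from secret pairwise keys, and the paper's construction instead bases its masking and noise on LWE).

    import hashlib

    M = 2**32  # values are aggregated modulo M

    def pair_mask(i: int, j: int, epoch: int) -> int:
        # Placeholder derivation; NOT secure, since the seed is public.
        seed = f"{min(i, j)}-{max(i, j)}-{epoch}".encode()
        return int.from_bytes(hashlib.sha256(seed).digest()[:4], "big")

    def masked_report(i: int, value: int, users, epoch: int) -> int:
        out = value
        for j in users:
            if j != i:
                m = pair_mask(i, j, epoch)
                out += m if i < j else -m   # opposite signs cancel pairwise
        return out % M

    users = [0, 1, 2, 3]
    data = {0: 5, 1: 7, 2: 11, 3: 2}
    reports = [masked_report(i, data[i], users, epoch=1) for i in users]
    assert sum(reports) % M == sum(data.values()) % M  # only the sum survives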
Hanwen Feng, Tiancheng Mai, Qiang Tang
ePrint Report
The classical distributed key generation protocols (DKG) are resurging due to their widespread applications in blockchain. While efforts have been made to improve DKG communication, practical large-scale deployments are yet to come, due to various challenges including broadcast channel scalability and the worst-case complaint phase. In this paper, we propose a practical DKG for DL-based cryptosystems, with only (quasi-)linear computation/communication cost per participant, with the help of a public ledger and a beacon; notably, our DKG only incurs constant-size blockchain storage cost for broadcast, even in the face of worst-case complaints. Moreover, our protocol satisfies adaptive security. The key to our improvements lies in delegating the most costly operations to an Any-Trust group. This group is randomly sampled and consists of a small number of individuals. The population only trusts that at least one member of the group is honest, without knowing which one. Additionally, we introduce an extended broadcast channel based on a blockchain and a data dispersal network (such as IPFS), enabling reliable broadcasting of arbitrary-size messages at the cost of constant-size blockchain storage, which may be of independent interest.

Our DKG leads to a fully practical instantiation of Filecoin's checkpointing mechanism, in which all validators of a Proof-of-Stake (PoS) blockchain periodically run DKG and threshold signing to create checkpoints on Bitcoin, thereby enhancing the security of the PoS chain. In comparison with the checkpointing approach of Babylon (Oakland, 2023), ours enjoys a significantly smaller monetary cost of Bitcoin transaction fees. For a PoS chain with $2^{12}$ validators, our cost is merely 0.6% of that incurred by Babylon's approach.
Taiga Hiroka, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
ePrint Report
A robust combiner combines many candidates for a cryptographic primitive and generates a new candidate for the same primitive. Its correctness and security hold as long as one of the original candidates satisfies correctness and security. A universal construction is a closely related notion to a robust combiner. A universal construction for a primitive is an explicit construction of the primitive that is correct and secure as long as the primitive exists. It is known that a universal construction for a primitive can be constructed from a robust combiner for the primitive in many cases.

Although robust combiners and universal constructions for classical cryptography are widely studied, robust combiners and universal constructions for quantum cryptography have not been explored so far. In this work, we define robust combiners and universal constructions for several quantum cryptographic primitives including one-way state generators, public-key quantum money, quantum bit commitments, and unclonable encryption, and provide constructions of them.

On a different note, it was an open problem how to expand the plaintext length of unclonable encryption. In one of our universal constructions for unclonable encryption, we can expand the plaintext length, which resolves the open problem.
Zhengjun Cao
ePrint Report
We show that the Nikooghadam-Shahriari-Saeidi authentication and key agreement scheme [J. Inf. Secur. Appl., 76, 103523 (2023)] cannot resist impersonation attacks, contrary to what is claimed. An adversary can impersonate the RFID reader to cheat the RFID tag. The drawback results from its simple secret-key invoking mechanism. We also find that it seems difficult to revise the scheme due to this inherent flaw.
Horia Druliac, Matthew Bardsley, Chris Riches, Christian Dunn, Luke Harrison, Bimal Roy, Feng Hao
ePrint Report
India is the largest democracy by population and has one of the largest deployments of e-voting in the world for national elections. However, the e-voting machines used in India are not end-to-end (E2E) verifiable. The inability to verify the tallying integrity of an election by the public leaves the outcome open to disputes. E2E verifiable e-voting systems are commonly regarded as the most promising solution to address this problem, but they had not been implemented or trialed in India. It was unclear whether such systems would be usable and practical to the Indian people. Previous works such as Helios require a set of tallying authorities (TAs) to perform the decryption and tallying operations, but finding and managing TAs can prove difficult. This paper presents a TA-free E2E verifiable online voting system based on the DRE-ip protocol. In collaboration with the local authority of New Town, Kolkata, India, we conducted an online voting trial as part of the 2022 Durga Puja festival celebration, during which residents of New Town were invited to use mobile phones to vote for their favourite pujas (festival decorations) in an E2E verifiable manner. 543 participants attended the Durga Puja trial and 95 of them provided feedback by filling in an anonymous survey after voting. Based on the voter feedback, participants generally found the system easy to use. This was the first time that an E2E online voting system had been built and tested in India, suggesting its feasibility for non-statutory voting scenarios.
Amit Mazumder Shuvo, Tao Zhang, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
Non-invasive fault injection attacks have emerged as significant threats to a spectrum of microelectronic systems ranging from commodity devices to high-end customized processors. Unlike their invasive counterparts, these attacks are more affordable and can exploit system vulnerabilities without altering the hardware physically. Furthermore, certain non-invasive fault injection strategies allow for remote vulnerability exploitation without the requirement of physical proximity. However, existing studies lack extensive investigation into these attacks across diverse target platforms, threat models, emerging attack strategies, assessment frameworks, and mitigation approaches. In this paper, we provide a comprehensive overview of contemporary research on non-invasive fault injection attacks. Our objective is to consolidate and scrutinize the various techniques, methodologies, target systems susceptible to the attacks, and existing mitigation mechanisms advanced by the research community. Besides, we categorize attack strategies based on several aspects, present a detailed comparison among the categories, and highlight research challenges with future direction. By underlining and discussing the landscape of cutting-edge, non-invasive fault injection, we hope more researchers, designers, and security professionals examine the attacks further and take such threats into consideration while developing effective countermeasures.
Randy Kuang, Maria Perepechaenko, Mahmoud Sayed, Dafu Lou
ePrint Report
In their 2022 study, Kuang et al. introduced the Multivariable Polynomial Public Key (MPPK) cryptography, a quantum-safe public key cryptosystem leveraging the mutual inversion relationship between multiplication and division. MPPK employs multiplication for key pair construction and division for decryption, generating public multivariate polynomials. Kuang and Perepechaenko expanded the cryptosystem into the Homomorphic Polynomial Public Key (HPPK), transforming product polynomials over large hidden rings using homomorphic encryption through modular multiplications. Initially designed for key encapsulation (KEM), HPPK ensures security through homomorphic encryption of public polynomials over concealed rings. This paper extends its application to a digital signature scheme. The framework of HPPK KEM cannot be directly applied to digital signatures due to the different nature of the verification procedure compared to the decryption procedure. Thus, in order to use the core ideas of the HPPK KEM scheme in the framework of digital signatures, the authors introduce an extension of the Barrett reduction algorithm. This extension transforms modular multiplications over hidden rings into divisions in the verification equation, conducted over a prime field. The extended algorithm non-linearly embeds the signature into the public polynomial coefficients, employing the floor function of big integer divisions. This innovative approach overcomes vulnerabilities associated with the linear relationships of earlier MPPK DS schemes. The security analysis reveals exponential complexity for both private key recovery and forged signature attacks, taking into account that the bit length of the rings is twice that of the prime field size. The effectiveness of the proposed Homomorphic Polynomial Public Key Digital Signature (HPPK DS) scheme is illustrated through a practical toy example, showcasing its intricate functionality and enhanced security features.
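Since the scheme's verification builds on an extension of Barrett reduction, it may help to recall the standard algorithm itself; the Python sketch below shows only the classical textbook Barrett reduction, not the paper's hidden-ring extension.

    def barrett_setup(n: int):
        k = n.bit_length()
        mu = (1 << (2 * k)) // n            # precomputed floor(2^(2k) / n)
        return k, mu

    def barrett_reduce(x: int, n: int, k: int, mu: int) -> int:
        """Compute x mod n for 0 <= x < n**2 using multiplications and shifts
        instead of a division by n."""
        q = ((x >> (k - 1)) * mu) >> (k + 1)   # estimate of floor(x / n)
        r = x - q * n
        while r >= n:                          # at most two corrections
            r -= n
        return r

    n = 65537
    k, mu = barrett_setup(n)
    assert barrett_reduce(123456789, n, k, mu) == 123456789 % n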
Patrick Karl, Jonas Schupp, Georg Sigl
ePrint Report
SPHINCS+ is a signature scheme included in the first NIST post-quantum standard that bases its security on the underlying hash primitive. As most of the runtime of SPHINCS+ is caused by the evaluation of several hash and pseudo-random functions, instantiated via the hash primitive, offloading this computation to dedicated hardware accelerators is a natural step. In this work, we evaluate different architectures for hardware acceleration of such a hash primitive with respect to its use case and evaluate them in the context of SPHINCS+. We attach hardware accelerators for different hash primitives (SHAKE256 and Asconxof, for both full and round-reduced versions) to CPU interfaces having different transfer speeds. We show that, for most use cases, data transfer determines the overall performance if accelerators are equipped with FIFOs.
Aurel Page, Damien Robert
ePrint Report
In this short note, we present a simplified (but slower) version Clapoti of Clapotis, whose full description will appear later. Let $E/\mathbb{F}_q$ be an elliptic curve with an effective primitive orientation by a quadratic imaginary order $R \subset \mathrm{End}(E)$. Let $\mathfrak{a}$ be an invertible ideal in $R$. Clapoti is a randomized polynomial time algorithm in $O((\log \Delta_R + \log q)^{O(1)})$ operations to compute the class group action $E \mapsto E_{\mathfrak{a}} \simeq E/E[\mathfrak{a}]$.
Noam Mazor, Rafael Pass
ePrint Report
The Perebor (Russian for “brute-force search”) conjectures, which date back to the 1950s and 1960s, are some of the oldest conjectures in complexity theory. The conjectures are a stronger form of the NP $\neq$ P conjecture (which they predate) and state that for “meta-complexity” problems, such as the Time-Bounded Kolmogorov Complexity problem and the Minimum Circuit Size Problem, there are no better algorithms than brute-force search.

In this paper, we disprove the non-uniform version of the Perebor conjecture for the Time-Bounded Kolmogorov complexity problem. We demonstrate that for every polynomial $t(\cdot)$, there exists a circuit of size $2^{4n/5+o(n)}$ that solves the $t(\cdot)$-bounded Kolmogorov complexity problem on every instance.
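For contrast with the brute-force baseline the conjecture refers to, the following Python sketch spells out what exhaustive search for t-bounded Kolmogorov complexity looks like; the universal machine run(prog, steps) is a hypothetical simulator supplied by the caller, and the result above says a circuit of size $2^{4n/5+o(n)}$ can beat this roughly $2^n$-time search.

    from itertools import product

    def Kt_bruteforce(x: bytes, t: int, run) -> int:
        """Exhaustive search: try every program of length up to ~|x| bits on a
        (hypothetical) universal machine `run` and return the length of the
        shortest one that prints x within t steps."""
        n = 8 * len(x)
        for length in range(1, n + 2):
            for bits in product("01", repeat=length):
                if run("".join(bits), steps=t) == x:
                    return length
        return n + 1  # some program of length ~|x| + O(1) always prints x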

Our algorithm is black-box in the description of the Universal Turing Machine employed in the definition of Kolmogorov Complexity, and leverages the characterization of one-way functions through the hardness of the time-bounded Kolmogorov complexity problem of Liu and Pass (FOCS’20), and the time-space trade-off for one-way functions of Fiat and Naor (STOC’91). We additionally demonstrate that no such black-box algorithm can have sub-exponential circuit size.

Along the way (and of independent interest), we extend the result of Fiat and Naor and demonstrate that any efficiently computable function can be inverted (with probability 1) by a circuit of size $2^{4n/5+o(n)}$; as far as we know, this yields the first formal proof that a non-trivial circuit can invert any efficient function.
Yu Wei, Jingyu Jia, Yuduo Wu, Changhui Hu, Changyu Dong, Zheli Liu, Xiaofeng Chen, Yun Peng, Shaowei Wang
ePrint Report
How to achieve distributed differential privacy (DP) without a trusted central party is of great interest in both theory and practice. Recently, the shuffle model has attracted much attention. Unlike the local DP model, in which the users send randomized data directly to the data collector/analyzer, in the shuffle model an intermediate untrusted shuffler is introduced to randomly permute the data, which have already been randomized by the users, before they reach the analyzer. The most appealing aspect is that while shuffling does not explicitly add more noise to the data, it amplifies privacy. As a consequence, this privacy amplification effect means the users need to add less noise to the data than in the local DP model, yet achieve the same level of differential privacy. Thus, protocols in the shuffle model can provide better accuracy than those in the local DP model. What looks interesting to us is that the architecture of the shuffle model is similar to private aggregation, which has been studied for more than a decade. In private aggregation, locally randomized user data are aggregated by an intermediate untrusted aggregator. Thus, our question is whether aggregation also exhibits some sort of privacy amplification effect, and if so, how good this ``aggregation model'' is in comparison with the shuffle model. We conducted the first comparative study between the two, covering privacy amplification, functionalities, protocol accuracy, and practicality. The results so far suggest that the new shuffle model does not have obvious advantages over the old aggregation model. On the contrary, protocols in the aggregation model outperform those in the shuffle model, sometimes significantly, in many aspects.
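As a minimal illustration of the shuffle-model pipeline described above (with binary randomized response as the local randomizer), consider the Python sketch below. The parameters are arbitrary, and the contrasting aggregation model is not shown: there, the untrusted party would learn only the sum of the locally randomized reports, for instance via masking or secret sharing.

    import random

    def local_randomize(bit: int, p_keep: float = 0.75) -> int:
        # Randomized response: keep the true bit with probability p_keep.
        return bit if random.random() < p_keep else 1 - bit

    def shuffle_model_round(true_bits):
        reports = [local_randomize(b) for b in true_bits]  # done by each user
        random.shuffle(reports)                            # untrusted shuffler
        return reports      # the analyzer sees an unordered multiset of reports

    def estimate_sum(reports, p_keep: float = 0.75):
        # Debias: E[report] = (2*p_keep - 1)*bit + (1 - p_keep).
        n = len(reports)
        return (sum(reports) - (1 - p_keep) * n) / (2 * p_keep - 1)

    true_bits = [1, 0, 1, 1, 0, 1, 0, 1] * 100   # 800 users, true sum = 500
    print(estimate_sum(shuffle_model_round(true_bits)))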
Mu Yuan, Lan Zhang, Xiang-Yang Li
ePrint Report
We present a three-party protocol that can protect both Transformer parameters and user data during the inference phase. For each feedforward inference process, our protocol only introduces permutation computation of input and output data on the user side. Our protocol, Secure Transformer Inference Protocol (STIP), can be applied to real-world services like ChatGPT.
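The following NumPy sketch shows one algebraic property that permutation-based protection can exploit, namely that a position-wise feedforward block commutes with a permutation of the token dimension, so a user can permute its input, let an untrusted party run the block, and undo the permutation on the result. This is an illustrative property only and is not claimed to be the STIP construction itself.

    import numpy as np

    rng = np.random.default_rng(0)
    tokens, d_model, d_ff = 5, 8, 16
    X  = rng.normal(size=(tokens, d_model))
    W1 = rng.normal(size=(d_model, d_ff))
    W2 = rng.normal(size=(d_ff, d_model))

    def feedforward(X):
        return np.maximum(X @ W1, 0.0) @ W2    # token-wise ReLU MLP block

    perm = rng.permutation(tokens)
    inv = np.argsort(perm)                     # inverse permutation
    assert np.allclose(feedforward(X), feedforward(X[perm])[inv])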
Daniel Luick, John Kolesar, Timos Antonopoulos, William R. Harris, James Parker, Ruzica Piskac, Eran Tromer, Xiao Wang, Ning Luo
ePrint Report
Verification of program safety is often reducible to proving the unsatisfiability (i.e., validity) of a formula in Satisfiability Modulo Theories (SMT): Boolean logic combined with theories that formalize arbitrary first-order fragments. Zero-knowledge (ZK) proofs allow SMT formulas to be validated without revealing the underlying formulas or their proofs to other parties, which is a crucial building block for proving the safety of proprietary programs. Recently, Luo et al. (CCS 2022) studied the simpler problem of proving the unsatisfiability of pure Boolean formulas, but their approach does not support safety proofs generated by SMT solvers. This work presents ZKSMT, a novel framework for proving the validity of SMT formulas in ZK. We design a virtual machine (VM) tailored to efficiently represent the verification process of SMT validity proofs in ZK. Our VM can support the vast majority of popular theories when proving program safety while being complete and sound. To demonstrate this, we instantiate the commonly used theories of equality and linear integer arithmetic in our VM with theory-specific optimizations for proving them in ZK. ZKSMT achieves high practicality even when running on realistic SMT formulas generated by Boogie, a common tool for software verification. It achieves a three-order-of-magnitude improvement compared to a baseline that executes the proof verification code in a general ZK system.
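The reduction from validity to unsatisfiability mentioned at the start of the abstract is easy to see in a conventional (non-ZK) SMT solver. The Python snippet below uses the z3-solver package and a made-up toy formula purely as an illustration; it has no connection to the ZKSMT system itself.

    from z3 import Ints, Implies, And, Not, Solver, unsat

    x, y = Ints("x y")
    claim = Implies(And(x > 0, y > x), y > 1)   # a toy fact in linear integer arithmetic

    s = Solver()
    s.add(Not(claim))
    assert s.check() == unsat   # the negation is unsatisfiable, so the claim is valid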
Elsie Mestl Fondevik, Britta Hale, Xisen Tian
ePrint Report
Post-compromise security (PCS) has been a core goal of end-to-end encrypted messaging applications for many years, both in one-to-one continuous key agreement (CKA) and for groups (CGKA). At its essence, PCS relies on a compromised party performing a key update in order to `self-heal'. However, due to bandwidth constraints, receive-only mode, and various other environmental demands of the growing number of use cases for such CGKA protocols, a group member may not be able to issue such updates. In this work, we address the issue of devices functioning in limited mode through the introduction of guardianship, where a designated guardian can perform key updates on behalf of its paired edge device. We introduce a Guardianship PCS (GPCS) security notion and provide an associated security experiment. We investigate various architectural designs in the pursuit of GPCS, provide constructions and security analyses, and describe trade-offs.
Luk Bettale, Delaram Kahrobaei, Ludovic Perret, Javier Verbel
ePrint Report
This paper describes Biscuit, a new multivariate-based signature scheme derived using the MPCitH approach. The security of Biscuit is related to the problem of solving a set of quadratic structured systems of algebraic equations. These equations are highly compact and can be evaluated using very few multiplications. The core of Biscuit is a rather simple MPC protocol which consists of the parallel execution of a few secure multiplications using standard optimized multiplicative triples. This paper also includes several improvements with respect to the Biscuit submission to the recent NIST PQC standardization process for additional signature schemes. Notably, we introduce a new hypercube variant of Biscuit, refine the security analysis with recent third-party attacks, and present a new AVX2 implementation of Biscuit.

14 November 2023

Gongxian Zeng, Junzuo Lai, Zhengan Huang, Linru Zhang, Xiangning Wang, Kwok-Yan Lam, Huaxiong Wang, Jian Weng
ePrint Report
In this paper, we consider generalizing NIZK by empowering a prover to share a witness in a fine-grained manner with verifiers. Roughly, the prover is able to authorize a verifier to obtain extra information about the witness, i.e., besides verifying the truth of the statement, the verifier can additionally obtain a certain function of the witness from the accepting proof, using a secret functional key provided by the prover.

To fulfill these requirements, we introduce a new primitive called \emph{non-interactive zero-knowledge functional proofs (fNIZKs)}, and formalize its security notions. We provide a generic construction of fNIZK for any $\textsf{NP}$ relation $\mathcal{R}$, which enables the prover to share any function of the witness with a verifier. For a widely-used relation about set membership proof (implying range proof), we construct a concrete and efficient fNIZK, through new building blocks (set membership encryption and dual inner-product encryption), which might be of independent interest.