International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

25 February 2022

Mark Zhandry
ePrint Report
Generic groups are an important tool for analyzing the feasibility and infeasibility of group-based cryptosystems. There are two distinct widespread versions of generic groups, Shoup's and Maurer's, the main difference being whether or not group elements are given explicit labels. The two models are often treated as equivalent. In this work, however, we demonstrate that the models are in fact quite different, and care is needed when stating generic group results:

- We show that numerous textbook constructions are *not* captured by Maurer, but are captured by Shoup. In the other direction, any construction captured by Maurer *is* captured by Shoup.

- For constructions that exist in both models, we show that security is equivalent for "single stage" games, but Shoup security is strictly stronger than Maurer security for some "multi-stage" games.

- The existing generic group un-instantiability results do not apply to Maurer. We fill this gap with a new un-instantiability result.

- We explain how the known black box separations between generic groups and identity-based encryption do not fully apply to Shoup, and resolve this by providing such a separation.

- We give a new un-instantiability result for the *algebraic* group model.
Monika Henzinger, Jalaj Upadhyay
ePrint Report
We study fine-grained error bounds for differentially private algorithms for averaging and counting in the continual observation model. For this, we use the completely bounded spectral norm (cb norm) from operator algebra. For a matrix $W$, its cb norm is defined as \[ \|{W}\|_{\mathsf{cb}} = \max_{Q} \left\{ \frac{\|{Q \bullet W}\|}{\|{Q}\|} \right\}, \] where $Q \bullet W$ denotes the Schur product and $\|{\cdot}\|$ denotes the spectral norm. We bound the cb norm of two fundamental matrices studied in differential privacy under the continual observation model: the counting matrix $M_{\mathsf{counting}}$ and the averaging matrix $M_{\mathsf{average}}$. For $M_{\mathsf{counting}}$, we give lower and upper bounds whose additive gap is $1 + \frac{1}{\pi}$. Our factorization also has two desirable properties sufficient for the streaming setting: the factorization consists of lower-triangular matrices, and the number of distinct entries in the factorization is exactly $T$. This allows us to compute the factorization on the fly while requiring the curator to store only a $T$-dimensional vector. For $M_{\mathsf{average}}$, we show an additive gap between the lower and upper bound of $\approx 0.64$.
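As a small concrete illustration (the variable names and the choice of test matrices are ours, not the paper's), the counting and averaging matrices can be written down directly, and any fixed matrix $Q$ certifies a lower bound on the cb norm via the Schur-product ratio from the definition above:

```python
import numpy as np

T = 8
# Counting matrix under continual observation: output t is the prefix
# sum of the first t inputs, i.e. the lower-triangular all-ones matrix.
M_counting = np.tril(np.ones((T, T)))
# Averaging matrix: row t is the running average over the first t inputs.
M_average = M_counting / np.arange(1, T + 1)[:, None]

def schur_lower_bound(W, Q):
    """||Q . W|| / ||Q||: a lower bound on ||W||_cb for any single Q,
    since the cb norm maximizes this ratio over all Q."""
    return np.linalg.norm(Q * W, 2) / np.linalg.norm(Q, 2)

# A few candidate matrices Q; the best of them is still only a
# lower bound on the cb norm of the counting matrix.
candidates = [np.eye(T), np.ones((T, T)), np.tril(np.ones((T, T)))]
bound = max(schur_lower_bound(M_counting, Q) for Q in candidates)
```

Each candidate $Q$ only certifies a lower bound; the paper's contribution is a matching upper bound via an explicit streaming-friendly factorization.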
Daniel Rausch, Ralf Kuesters, Céline Chevalier
ePrint Report
Universal Composability is a widely used concept for the design and analysis of protocols. Since Canetti's original UC model and the model by Pfitzmann and Waidner, several different models for universal composability have been proposed, including, for example, the IITM model, GNUC, and CC, but also extensions and restrictions of the UC model, such as JUC, GUC, and SUC. These were motivated by the lack of expressivity of existing models, ease of use, or flaws in previous models. Cryptographers choose between these models based on their needs at hand (e.g., support for joint state and global state) or simply their familiarity with a specific model. While all models follow the same basic idea, there are huge conceptual differences, which raises fundamental and practical questions: (How) do the concepts and results proven in one model relate to those in another model? Do the different models and the security notions formulated therein capture the same classes of attacks? Most importantly, can cryptographers re-use results proven in one model in another model, and if so, how?

In this paper, we initiate a line of research with the aim to address this lack of understanding, consolidate the space of models, and enable cryptographers to re-use results proven in other models. As a start, here we focus on Canetti's prominent UC model and the IITM model proposed by Kuesters et al. The latter is an interesting candidate for comparison with the UC model since it has been used to analyze a wide variety of protocols, supports a very general protocol class and provides, among others, seamless treatment of protocols with shared state, including joint and global state. Our main technical contribution is an embedding of the UC model into the IITM model showing that all UC protocols, security and composition results carry over to the IITM model. Hence, protocol designers can profit from the features of the IITM model while being able to use all their results proven in the UC model. We also show that, in general, one cannot embed the full IITM model into the UC model.
Thibauld Feneuil, Jules Maire, Matthieu Rivain, Damien Vergnaud
ePrint Report
We propose (honest verifier) zero-knowledge arguments for the modular subset sum problem. Given a set of integers, this problem asks whether a subset adds up to a given integer t modulo a given integer q. This NP-complete problem has been considered since the 1980s as an interesting alternative in cryptography to hardness assumptions based on number theory, and it is in particular believed to provide post-quantum security. Previous combinatorial approaches, notably one due to Shamir, yield arguments with cubic communication complexity (in the security parameter). More recent methods, based on the MPC-in-the-head technique, also produce arguments with cubic communication complexity and only for prime modulus q.

We improve this approach by using a secret-sharing over small integers (rather than modulo q) to reduce the size of the arguments and remove the prime modulus restriction. Since this sharing may reveal information on the secret subset, we introduce the idea of rejection to the MPC-in-the-head paradigm. Special care has to be taken to balance completeness and soundness and to preserve the zero-knowledge property of our arguments. We combine this idea with two techniques to prove that the secret vector (which selects the subset) is indeed composed of binary coordinates. Our new techniques have the significant advantage of resulting in arguments of size independent of the modulus q.

Our new protocols for the subset sum problem achieve an asymptotic improvement by producing arguments of quadratic size (against cubic size for previous proposals). This improvement is also practical: for a 256-bit modulus q, the best variant of our protocols yields 13KB arguments, compared to 1180KB for the best previous general protocol and 122KB for the best protocol restricted to prime moduli. Our techniques can also be applied to vectorial variants of the subset sum problem and in particular the inhomogeneous short integer solution (ISIS) problem, for which they provide competitive alternatives to state-of-the-art protocols. We also show the application of our protocol to build efficient zero-knowledge arguments of plaintext and/or key knowledge in the context of fully-homomorphic encryption. When applied to the TFHE scheme, the obtained arguments are more than 20 times smaller than those obtained with previous protocols.
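For readers unfamiliar with the underlying relation, the statement being proven in zero knowledge is simply knowledge of a binary vector selecting a subset with the right modular sum. A brute-force sketch (toy parameters of our own choosing; the actual arguments never reveal the witness and scale to 256-bit moduli) makes the relation explicit:

```python
from itertools import combinations

def modular_subset_sum(weights, t, q):
    """Brute-force search for a binary selection vector x with
    <weights, x> = t (mod q). Exponential time: this only illustrates
    the NP relation, not the paper's argument system."""
    n = len(weights)
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            if sum(weights[i] for i in idx) % q == t:
                return [1 if i in idx else 0 for i in range(n)]
    return None  # no subset attains t modulo q

# Toy instance: modulus q = 97, arbitrarily chosen weights.
w = [12, 45, 7, 81, 33]
witness = modular_subset_sum(w, 52, 97)
```

The zero-knowledge argument convinces a verifier that such a witness exists without revealing which coordinates are set.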
Yanbo Chen, Yunlei Zhao
ePrint Report
An aggregate signature (AS) scheme allows an unspecified aggregator to compress many signatures into a short aggregation. AS schemes can save storage costs and accelerate verification. They are desirable for applications where many signatures need to be stored, transferred, or verified together, like blockchain systems, network routing, e-voting, and certificate chains. However, constructing AS schemes based on general groups, only requiring the hardness of the discrete logarithm problem, is quite tricky and has been a long-standing research question. Recently, Chalkias et al. (CT-RSA 2021) proposed a half-aggregate scheme for Schnorr signatures. We observe that the scheme lacks a tight security proof and does not well support incremental aggregation, i.e., adding more signatures into a pre-existing aggregation. Chalkias et al. also presented an aggregate scheme for Schnorr signatures whose security can be tightly reduced to the security of Schnorr signatures in the random oracle model (ROM). However, the scheme is rather expensive and does not achieve half-aggregation. Whether half-aggregation of Schnorr signatures with a tight reduction in the ROM exists is a fundamental question of both theoretical and practical interest.

This work's contributions are threefold. We first give a tight security proof for the scheme in CT-RSA 2021 in the ROM and the algebraic group model (AGM). Second, we provide a new half-aggregate scheme for Schnorr signatures that perfectly supports incremental aggregation, whose security also tightly reduces to Schnorr's security in the AGM+ROM. Third, we present a Schnorr-based sequential aggregate signature (SAS) scheme that is as tightly secure as the Schnorr signature scheme in the ROM (without the AGM). Our work may pave the way for applying Schnorr aggregation in real-world cryptographic applications.
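For context, plain Schnorr signatures, the object being aggregated, can be sketched in a few lines. The toy group parameters and deterministic nonce below are our own illustrative choices and are far too small for real use; the aggregation schemes discussed above operate on signatures of exactly this shape, i.e. (r, s) pairs:

```python
import hashlib

# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup.
# These parameters are illustrative only -- far too small for security.
p, q, g = 2039, 1019, 4

def H(*parts):
    # Hash to Z_q, modeling the random oracle.
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen(x):
    return x % q, pow(g, x, p)          # (secret key, public key)

def sign(x, m):
    k = H(x, m) or 1                    # deterministic nonce (toy choice)
    r = pow(g, k, p)                    # commitment
    e = H(r, m)                         # challenge
    s = (k + e * x) % q                 # response
    return r, s

def verify(y, m, sig):
    r, s = sig
    e = H(r, m)
    return pow(g, s, p) == (r * pow(y, e, p)) % p
```

Half-aggregation keeps all the commitments r but compresses the responses s, which is why its tightness and incrementality hinge on how the challenges bind the r values together.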
Onur Gunlu, Matthieu Bloch, Rafael F. Schaefer, Aylin Yener
ePrint Report
This work considers mitigation of information leakage between communication and sensing operations in joint communication and sensing systems. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to simultaneously achieve reliable communication and channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to a part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a characteristic of the receiver, e.g., its location. For independent identically distributed (i.i.d.) states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with a binary joint communication and sensing model.
Keita Emura, Shiho Moriai, Takuma Nakajima, Masato Yoshimi
ePrint Report
Cache systems are crucial for reducing communication overhead on the Internet. The importance of communication privacy is being increasingly and widely recognized; therefore, we anticipate that nearly all end-to-end communication will be encrypted via secure sockets layer/transport layer security (SSL/TLS) in the near future. Herein we consider a catch-22 situation, wherein the cache server checks whether content has been cached or not, i.e., the cache server needs to observe it, thereby violating end-to-end encryption. We avoid this catch-22 situation by proposing an encrypted cache system which we call Cache-22. To maximize its deployability, we avoid heavy, advanced cryptographic tools, and instead base our Cache-22 system purely on traditional SSL/TLS communication. It employs tags for searching, and its design concept enables the service provider to decide, e.g., via an authentication process, whether or not a particular user should be allowed to access particular content. We provide a prototype implementation of the proposed system using the color-based cooperative cache proposed by Nakajima et al. (IEICE Trans. 2017) under several ciphersuites containing post-quantum key exchanges in addition to ECDHE (Elliptic Curve-based). We consider NIST Post-Quantum Cryptography round 3 finalists and alternate candidates: lattice-based (Kyber, SABER, NTRU), code-based (BIKE), and isogeny-based (SIKE). Compared to direct HTTPS communication between a service provider and a user, our Cache-22 system has the merit of drastically reducing communication between the cache server and the service provider (by approximately 95%), which is effective in a hierarchical network with a cost disparity.
Hanyu Jia, Xiangxue Li
ePrint Report
We consider private function evaluation (PFE) in the malicious adversary model. The current state-of-the-art in PFE from Valiant's universal circuits (Liu, Yu, et al., CRYPTO 2021) does not avoid the logarithmic factor in circuit size. In constructing linear active PFE, one essential building block is to prove the correctness of an extended permutation (EP, Mohassel and Sadeghian at EUROCRYPT 2013) by zero-knowledge protocols with linear complexity. The linear instantiation $\mathcal{ZK}_{EP}$ by Mohassel, Sadeghian, and Smart (ASIACRYPT 2014) is a three-phase protocol, and each phase (dummy placement, replication, and permutation) is of size $2g$. Its overhead is thus substantial, limiting its practicality. We present in this paper a novel and efficient framework $\mathcal{ZK}_{DS}$ for proving the correct EP. We show that \underline{d}ouble \underline{s}huffles suffice for EP (exponentiations and communication overheads are about 27% and 31% of $\mathcal{ZK}_{EP}$, respectively). The data owner(s) generate the randomness for the first shuffle, whose outputs determine the outgoing wires of the circuit defined by the function. The function owner reuses and extends the randomness in the second shuffle, whose outputs determine the incoming wires. From $\mathcal{ZK}_{DS}$, we build an online/offline PFE framework with linear active security. The online phase could be instantiated by any well-studied secure function evaluation (SFE) with linear active security (e.g., Tiny-OT at CRYPTO 2012). The offline phase depends only on the private function $f$ and uses $\mathcal{ZK}_{DS}$ to prove the EP relationship between outgoing wires and incoming wires in the circuit $\mathcal{C}_f$ derived from $f$.
Per Austrin, Hao Chung, Kai-Min Chung, Shiuan Fu, Yao-Ting Lin, Mohammad Mahmoody
ePrint Report
We study the following question, first publicly posed by Hosoyamada and Yamakawa in 2018. Can parties Alice and Bob with quantum computing power and classical communication rely only on a random oracle (that can be queried in quantum superposition) to agree on a key that is private from eavesdroppers? We make the first progress on the question above and prove the following.

When only one of the parties is classical and the other party is quantum powered, as long as they ask a total of $d$ oracle queries and agree on a key with probability $1$, then there is always a way to break the key agreement by asking $O(d^2)$ classical oracle queries. When both parties can make quantum queries to the random oracle, we introduce a natural conjecture, which if true would imply attacks with $poly(d)$ classical queries to the random oracle. Our conjecture, roughly speaking, states that the product of any two degree-$d$ real-valued polynomials over the Boolean hypercube of influence at most $1/poly(d)$ is nonzero. We then prove our conjecture for exponentially small influences, which leads to an (unconditional) classical $2^{O(md)}$-query attack on any such key agreement protocol, where $m$ is the oracle's output length.

Since our attacks are classical, we then ask whether it is always possible to find classical attacks on key agreements with imperfect completeness in the quantum random oracle model. We prove a barrier for this approach, by showing that if the folklore “Simulation Conjecture” (first formally stated by Aaronson and Ambainis in 2009) about the possibility of simulating efficient-query quantum algorithms using efficient-query classical algorithms is false, then there is in fact a secure key agreement in the quantum random oracle model that cannot be broken classically.
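The classical baseline behind these bounds is Merkle's puzzles: key agreement from a random oracle with a quadratic gap between the honest parties' work and the eavesdropper's. The sketch below is our own heavily simplified version (function names are illustrative, and the round where Bob identifies which puzzle he solved is elided); honest parties make on the order of d oracle queries each, while the exhaustive attacker needs O(d^2):

```python
import hashlib
import random

def oracle(x):
    # The shared random oracle, modeled here by a hash function.
    return hashlib.sha256(f"ro|{x}".encode()).hexdigest()

def merkle_key_agreement(d, domain, rng):
    # Alice queries d random points and publishes only their images.
    alice = rng.sample(domain, d)
    published = {oracle(x) for x in alice}
    # Bob probes fresh random points until one matches a published
    # image; with |domain| ~ d^2 this takes ~d queries in expectation.
    for x in rng.sample(domain, len(domain)):
        if oracle(x) in published:
            return published, x  # (public transcript, shared key)

def eve(published, domain):
    # Exhaustive classical attack: O(|domain|) = O(d^2) oracle queries
    # recover every candidate key, Bob's included.
    return [x for x in domain if oracle(x) in published]
```

The question studied above is whether quantum-powered honest parties can widen this gap beyond polynomial against classical (or quantum-query-simulating) attackers.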
Luke Beckwith, Duc Tri Nguyen, Kris Gaj
ePrint Report
Many currently deployed public-key cryptosystems are based on the difficulty of the discrete logarithm and integer factorization problems. However, given an adequately sized quantum computer, these problems can be solved in polynomial time as a function of the key size. Due to the future threat of quantum computing to current cryptographic standards, alternative algorithms that remain secure under quantum computing are being evaluated for future use. As a part of this evaluation, high-performance implementations of these candidate algorithms must be investigated. This work presents a high-performance implementation of all operations of CRYSTALS-Dilithium and one operation of FALCON (signature verification) targeting FPGAs. In particular, we present a Dilithium design that achieves the best latency for an FPGA implementation to date and, to the best of our knowledge, the first FALCON hardware implementation to date. We compare our results with the hardware implementations of all viable NIST Round 3 post-quantum digital signature candidates.
Nishanth Chandran, Bhavana Kanukurthi, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar
ePrint Report
Leakage resilient secret sharing (LRSS) allows a dealer to share a secret amongst $n$ parties such that any authorized subset of the parties can recover the secret from their shares, while an adversary that obtains shares of any unauthorized subset of parties along with bounded leakage from the other shares learns no information about the secret. Non-malleable secret sharing (NMSS) provides a guarantee that even shares that are tampered by an adversary will reconstruct to either the original message or something independent of it.

The most important parameter of LRSS and NMSS schemes is the size of each share. For LRSS, in the "local leakage model" (i.e., when the leakage functions on each share are independent of each other and bounded), Srinivasan and Vasudevan (CRYPTO 2019) gave a scheme for threshold access structures with a share size of approximately $3 \cdot$ (message length) $+ \mu$, where $\mu$ is the number of bits of leakage tolerated from every share. For the case of NMSS, the best known result (again due to the above work) has a share size of $11 \cdot$ (message length).

In this work, we build LRSS and NMSS schemes with much improved share sizes. Additionally, our LRSS scheme obtains optimal share and leakage size. In particular, we get the following results:

- We build an information-theoretic LRSS scheme for threshold access structures with a share size of (message length) $+ \mu$.

- As an application of the above result, we obtain an NMSS with a share size of $4 \cdot$ (message length). Further, for the special case of sharing random messages, we obtain a share size of $2 \cdot$ (message length).
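As a baseline for these share-size figures (our own illustrative code, not the paper's construction), plain Shamir sharing already achieves a share size equal to the message length, but offers no leakage resilience or non-malleability; the schemes above add those guarantees with only a small constant-factor overhead:

```python
import random

P = 2**61 - 1  # Mersenne prime; shares live in the field Z_P

def share(secret, t, n, rng):
    """Shamir t-out-of-n sharing: shares are evaluations of a random
    degree-(t-1) polynomial whose constant term is the secret."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 over Z_P (inverse via Fermat).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t shares reconstruct the secret; an adversary in the LRSS model additionally sees bounded independent leakage from the remaining shares, which plain Shamir does not protect against.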
Ky Nguyen, Duong Hieu Phan, David Pointcheval
ePrint Report
Multi-Client Functional Encryption ($\mathsf{MCFE}$) has been considered an important primitive for making functional encryption useful in practice. It covers the ability to compute joint functions over data from multiple parties, similar to Multi-Input Functional Encryption ($\mathsf{MIFE}$), but it handles information leakage better than $\mathsf{MIFE}$. Both the $\mathsf{MCFE}$ and $\mathsf{MIFE}$ primitives are aimed at applications in multi-user settings where decryption can be correctly output for legitimate users only. In such a setting, the problem of dealing with access control in a fine-grained manner is particularly relevant. In this paper, we introduce a framework for $\mathsf{MCFE}$ with fine-grained access control and propose constructions for both single-client and multi-client settings, with selective and adaptive security. The only known work that combines functional encryption in the multi-user setting with access control was proposed by Abdalla $\mathit{et al.}$ (Asiacrypt '20); it relies on a generic transformation from single-client schemes to obtain $\mathsf{MIFE}$ schemes that suffer a quadratic factor of $n$ (where $n$ denotes the number of clients) in the ciphertext size. We present a duplicate-and-compress technique to transform the single-client scheme and obtain an $\mathsf{MCFE}$ scheme with fine-grained access control with only a linear factor of $n$ in the ciphertext size. Our final scheme thus outperforms Abdalla $\mathit{et al.}$'s scheme by a factor of $n$, while $\mathsf{MCFE}$ is more difficult to achieve than $\mathsf{MIFE}$ (one can obtain $\mathsf{MIFE}$ from $\mathsf{MCFE}$ by making all the labels in $\mathsf{MCFE}$ a fixed public constant).
Ward Beullens
ePrint Report
This work introduces new key recovery attacks against the Rainbow signature scheme, which is one of the three finalist signature schemes still in the NIST Post-Quantum Cryptography standardization project. The new attacks outperform previously known attacks for all the parameter sets submitted to NIST and make a key-recovery practical for the SL 1 parameters. Concretely, given a Rainbow public key for the SL 1 parameters of the second-round submission, our attack returns the corresponding secret key after on average 53 hours (one weekend) of computation time on a standard laptop.
Jan Bobolz, Fabian Eidens, Stephan Krenn, Sebastian Ramacher, Kai Samelin
ePrint Report
Attribute-based credential systems enable users to authenticate in a privacy-preserving manner. However, in such schemes verifying a user's credential requires knowledge of the issuer's public key, which by itself might already reveal private information about the user.

In this paper, we tackle this problem by introducing the notion of issuer-hiding attribute-based credential systems. In such a system, the verifier can define a set of acceptable issuers in an ad-hoc manner, and the user can then prove that her credential was issued by one of the accepted issuers -- without revealing which one.

We then provide a generic construction, as well as a concrete instantiation based on Groth's structure-preserving signature scheme (ASIACRYPT'15) and simulation-sound extractable NIZKs, for which we also provide concrete benchmarks to demonstrate its practicality.

The online complexity of all constructions is independent of the number of acceptable issuers, which also makes them suitable for highly federated scenarios.
Boyue Fang, Weize Wang, Yunlei Zhao
ePrint Report
Kyber is a candidate in the third round of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) Standardization. However, because of the protocol's independence assumption, the bound on the decapsulation failure probability resulting from the original analysis is not tight. In this work, we give a rigorous mathematical analysis of the actual failure probability calculation and provide a security estimate for Kyber in reality rather than only in a statistical sense. Our analysis does not make independence assumptions on errors and is carried out with respect to concrete public keys. Through sampling tests and experiments, we also illustrate the difference between the actual failure probability and the result given in the Kyber proposal. The experiments show that, for Kyber-512 and Kyber-768, the failure probability resulting from the original paper is relatively conservative, but for Kyber-1024, the failure probability of some public keys is worse than claimed. This failure probability calculation for concrete public keys can also guide the selection of public keys in actual application scenarios. Moreover, we measure the gap between the upper bound of the failure probability and the actual failure probability, and then give a tight estimate. Our work can also be used to re-evaluate the traditional $1-\delta$ correctness notion in the literature, which will help re-evaluate the security of some candidates in the NIST post-quantum cryptographic standardization.
Gweonho Jeong, Nuri Lee, Jihye Kim, Hyunok Oh
ePrint Report
With the rapid growth of the blockchain market, privacy and security issues for digital assets are becoming more and more important. In the most widely used public blockchains, such as Bitcoin and Ethereum, all activities on user accounts are publicly disclosed, which can also violate privacy regulations such as the EU GDPR. Encryption of accounts and transactions may protect privacy, but it also raises issues of validity and transparency: encrypted information alone cannot verify the validity of a transaction and makes it difficult to meet anti-money-laundering requirements, i.e., auditability.

To solve the above problem, we propose an auditable zero-knowledge transfer framework called Azeroth. Azeroth attaches a zero-knowledge proof to an encrypted transaction, enabling validity checks while protecting privacy. Azeroth also allows authorized auditors to audit transactions. Azeroth is designed as a smart contract for flexible deployment on top of an existing blockchain. According to our experiments, the additional time required to generate a proof is about 901 ms. The security of Azeroth is formally proven under the cryptographic assumptions.
Vienna, Austria, 23 August - 26 August 2022
Event Calendar
Event date: 23 August to 26 August 2022
Submission deadline: 6 March 2022
Notification: 16 May 2022
-
Event Calendar
Submission deadline: 23 November 2022
Notification: 23 January 2023
-
Event Calendar
Submission deadline: 1 September 2022
Notification: 1 November 2022
-
Event Calendar
Submission deadline: 1 June 2022
Notification: 1 August 2022