International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

22 September 2021

Yevgeniy Dodis, Willy Quach, Daniel Wichs
ePrint Report
The goal of the bounded storage model (BSM) is to construct unconditionally secure cryptographic protocols by restricting only the storage capacity of the adversary, while otherwise giving it unbounded computational power. Here, we consider a streaming variant of the BSM, where honest parties can stream huge amounts of data to each other so as to overwhelm the adversary's storage, even while their own storage capacity is significantly smaller than that of the adversary. Prior works showed several impressive results in this model, including key agreement and oblivious transfer, but only as long as the adversary's storage $m = O(n^2)$ is at most quadratically larger than the honest users' storage $n$. Moreover, the work of Dziembowski and Maurer (DM) gave a seemingly matching lower bound, showing that key agreement in the BSM is impossible when $m > n^2$.

In this work, we observe that the DM lower bound only applies to a significantly more restricted version of the BSM, and does not apply to the streaming variant. Surprisingly, we show that it is possible to construct key agreement and oblivious transfer protocols in the streaming BSM, where the adversary's storage can be significantly larger, and even exponential $m = 2^{O(n)}$. The only price of accommodating larger values of $m$ is that the round and communication complexities of our protocols grow accordingly, and we provide lower bounds to show that an increase in rounds and communication is necessary.

As an added benefit of our work, we also show that our oblivious transfer (OT) protocol in the BSM satisfies a simulation-based notion of security. In contrast, even for the restricted case of $m = O(n^2)$, prior solutions only satisfied a weaker indistinguishability-based definition. As an application of our OT protocol, we get general multiparty computation (MPC) in the BSM that allows for up to exponentially large gaps between $m$ and $n$, while also achieving simulation-based security.
Antonio Faonio
ePrint Report
A randomness encoder is a generalization of encoding schemes with an efficient procedure for encoding \emph{uniformly random strings}. In this paper we continue the study of randomness encoders that additionally have the property of being continuously non-malleable. The beautiful notion of non-malleability for encoding schemes, introduced by Dziembowski, Pietrzak and Wichs (ICS'10), states that tampering with the codeword can either keep the encoded message identical or produce an uncorrelated message. Continuous non-malleability extends the security notion to a setting where the adversary can tamper with the codeword polynomially many times and where we assume a self-destruction mechanism is in place in case of decoding errors. Our contributions are: (1) two practical constructions of continuous non-malleable randomness encoders in the random oracle model, (2) a new compiler from continuous non-malleable randomness encoders to continuous non-malleable codes, and (3) a study of lower bounds for continuous non-malleability in the random oracle model.
Junzuo Lai, Rupeng Yang, Zhengan Huang, Jian Weng
ePrint Report
Selective opening attacks (SOA), for public-key encryption (PKE), concern a multi-user scenario in which an adversary adaptively corrupts some fraction of the users to break into a subset of honestly created ciphertexts, and tries to learn information about the messages of some unopened (but potentially related) ciphertexts. Until now, the notion of selective opening attacks has only been considered in two settings: sender selective opening (SSO), where some of the senders are corrupted and messages together with the randomness used for encryption are revealed; and receiver selective opening (RSO), where some of the receivers are corrupted and messages together with the secret keys used for decryption are revealed.

In this paper, we consider a more natural and general setting for selective opening security. In this setting, the adversary may adaptively corrupt some senders and receivers \emph{simultaneously}, obtaining the plaintext messages together with the internal randomness for encryption and the secret keys for decryption, while it is hoped that the messages of uncorrupted parties remain protected. We call this Bi-SO security, since it is reminiscent of Bi-Deniability for PKE.

We first formalize the requirement of Bi-SO security in the simulation-based (SIM) style, and prove that some practical PKE schemes achieve SIM-Bi-$\text{SO}$-CCA security in the random oracle model. Then, we suggest a weaker model of Bi-SO security, denoted SIM-wBi-$\text{SO}$-CCA security, and argue that it is still meaningful and useful. We propose a generic construction of PKE schemes that achieve SIM-wBi-$\text{SO}$-CCA security in the standard model and instantiate it from various standard assumptions. Our generic construction is built on a newly introduced primitive, namely a universal$_{\kappa}$ hash proof system with key equivocability, which may be of independent interest.
Jan Czajkowski
ePrint Report
We prove classical and quantum indifferentiability of a rate-1/3 compression function introduced by Shrimpton and Stam (ICALP '08). This construction was one of the first based on three random functions that achieved optimal collision resistance. We also prove that our result is tight: we define classical and quantum attackers that match the indifferentiability security level. Our tight indifferentiability results provide a negative answer regarding the optimality of the security of the Shrimpton-Stam construction: the security level for the stronger notion of indifferentiability is below that of collision resistance.

To arrive at these results, we generalize the results of Czajkowski, Majenz, Schaffner, and Zur (arXiv '19). Our generalization allows us to analyze the quantum security of constructions based on multiple independent random functions, something that was not possible before.
Zhiqiang Wu, Jin Wang, Keqin Li
ePrint Report
Many recent studies focus on dynamic searchable encryption (DSE), which provides efficient data-search and data-update services directly on outsourced private data. Most encryption schemes are not optimized for update-intensive cases, in which the same data record is frequently added to and deleted from the database. How to build an efficient and secure DSE scheme for update-intensive data remains challenging. We propose UI-SE, the first DSE scheme that achieves single-round-trip interaction, near-zero client storage, and backward privacy without leaking insertion patterns. UI-SE involves a new tree data structure, named OU-tree, which supports oblivious data updates without any access-pattern leakage. We formally prove that UI-SE is adaptively secure under Type-1$^-$ backward privacy, which is stronger than the backward privacy proposed by Bost et al. at CCS 2017. Experimental data also demonstrate that UI-SE has low computational overhead, low local disk usage, and high update performance on scalable datasets.
Douglas Wikström
ePrint Report
We generalize the knowledge extractor for constant-round special-sound protocols presented by Wikström (2018) to a knowledge extractor for the corresponding non-interactive Fiat-Shamir proofs in the random oracle model and give an exact analysis of the extraction error and running time.

Relative to the interactive case, the extraction error increases by a factor of $\ell$ and the running time increases by a factor of $O(\ell)$, where $\ell$ is the number of oracle queries made by the prover.

Through carefully chosen notation and concepts, and a technical lemma, we effectively recast the extraction problem for the notoriously complex non-interactive case as that of the interactive case. Thus, our approach may be of independent interest.
Prastudy Fauzi, Helger Lipmaa, Janno Siim, Michal Zajac, Arne Tobias Odegaard
ePrint Report
An extractable one-way function (EOWF), introduced by Canetti and Dakdouk (ICALP 2008) and generalized by Bitansky et al. (SIAM Journal on Computing vol. 45), is an OWF that allows for efficient extraction of a preimage for the function. We study (generalized) EOWFs that have a public image verification algorithm. We call such OWFs verifiably-extractable and show that several previously known constructions satisfy this notion. We study how such OWFs relate to subversion zero-knowledge (Sub-ZK) NIZKs by using them to generically construct a Sub-ZK NIZK from a NIZK satisfying certain additional properties, and conversely show how to obtain them from any Sub-ZK NIZK. Prior to our work, the Sub-ZK property of NIZKs was achieved using concrete knowledge assumptions.
Ioanna Tzialla, Abhiram Kothapalli, Bryan Parno, Srinath Setty
ePrint Report
This paper introduces Verdict, a transparency dictionary, in which an untrusted service maintains a label-value map that clients can query and update (foundational infrastructure for end-to-end encryption and other applications). To prevent unauthorized modifications to the dictionary, for example by a malicious or compromised service provider, Verdict produces publicly verifiable cryptographic proofs that it correctly executes both reads and authorized updates. A key advance over prior work is that Verdict produces efficiently verifiable proofs while incurring modest proving overheads. Verdict accomplishes this by composing indexed Merkle trees (a new SNARK-friendly data structure) with Phalanx (a new SNARK that supports amortized constant-sized proofs and leverages particular workload characteristics to speed up the prover). Our experimental evaluation demonstrates that Verdict scales to dictionaries with millions of labels while imposing modest overheads on the service and clients.
Suvradip Chakraborty, Chaya Ganesh, Mahak Pancholi, Pratik Sarkar
ePrint Report
We study multi-party computation (MPC) in the setting of subversion, where the adversary tampers with the machines of honest parties. Our goal is to construct actively secure MPC protocols where parties are corrupted adaptively by an adversary (as in the standard adaptive security setting) and, in addition, honest parties' machines are compromised.

The idea of reverse firewalls (RF) was introduced at EUROCRYPT'15 by Mironov and Stephens-Davidowitz as an approach to protecting protocols against corruption of honest parties' devices. Intuitively, an RF for a party $\mathcal{P}$ is an external entity that sits between $\mathcal{P}$ and the outside world and whose role is to sanitize $\mathcal{P}$'s incoming and outgoing messages in the face of subversion of $\mathcal{P}$'s machine. Mironov and Stephens-Davidowitz constructed a protocol for passively secure two-party computation. At CRYPTO'20, Chakraborty, Dziembowski and Nielsen constructed a protocol for secure computation with firewalls that improved on this result, both by extending it to multi-party computation and by considering active security in the presence of static corruptions.

In this paper, we initiate the study of RF for MPC in the adaptive setting. We put forward a definition for adaptively secure MPC in the reverse firewall setting, explore relationships among the security notions, and then construct reverse firewalls for MPC in this stronger setting of adaptive security. We also resolve the open question of Chakraborty, Dziembowski and Nielsen by removing the need for a trusted setup in constructing RF for MPC. Towards this end, we construct reverse firewalls for adaptively secure augmented coin tossing and adaptively secure zero-knowledge protocols and obtain a constant-round adaptively secure MPC protocol in the reverse firewall setting without setup. Along the way, we propose a new multi-party adaptively secure coin tossing protocol in the plain model, which is of independent interest.

21 September 2021

Yi Wang, Rongmao Chen, Xinyi Huang, Jianting Ning, Baosheng Wang, Moti Yung
ePrint Report
Our context is anonymous encryption schemes hiding their receiver, but in a setting that allows authorities to reveal the receiver when needed. While anonymous Identity-Based Encryption (IBE) is a natural candidate for such fair anonymity (it gives trusted-authority access by design), the de facto security standard (a.k.a. IND-ID-CCA) is incompatible with the ciphertext rerandomizability which is crucial to anonymous communication. Thus, we seek to extend IND-ID-CCA security for IBE to a notion that can be meaningfully relaxed for rerandomizability while still protecting against active adversaries.

To this end, inspired by the notion of replayable adaptive chosen-ciphertext attack (RCCA) security (Canetti et al., Crypto'03), we formalize a new security notion called Anonymous Identity-Based RCCA (ANON-ID-RCCA) security for rerandomizable IBE and propose the first construction with a rigorous security analysis. The core of our scheme is a novel extension of the double-strand paradigm, which was originally proposed by Golle et al. (CT-RSA'04) and later extended by Prabhakaran and Rosulek (Crypto'07), to the well-known Gentry IBE (Eurocrypt'06). Notably, our scheme is the first IBE to date that simultaneously satisfies adaptive security, rerandomizability, and recipient-anonymity.

As an application of our new notion, we design a new universal mixnet in the identity-based setting that does not require public key distribution (with fair anonymity). More generally, our new notion is also applicable to most existing rerandomizable RCCA-secure applications to eliminate the need for a public key distribution infrastructure while allowing fairness.
Jelle Vos, Zekeriya Erkin, Christian Doerr
ePrint Report
In their pursuit to maximize their return on investment, cybercriminals will likely reuse as much as possible between their campaigns. Not only will the same phishing mail be sent to tens of thousands of targets, but reuse of tools and infrastructure across attempts lowers their cost of doing business. This reuse, however, creates an effective angle for mitigation, as defenders can recognize domain names, attachments, tools, or systems used in a previous compromise attempt, significantly increasing the cost to the adversary, who would then need to recreate the attack infrastructure each time.

However, generating such cyber threat intelligence (CTI) is resource-intensive, so organizations often turn to CTI providers that commercially sell feeds with such indicators. As providers have different sources and methods for obtaining their data, the coverage and relevance of feeds will vary between them. To cover the multitude of threats an organization faces, it is best served by obtaining feeds from multiple providers. However, these feeds may overlap, causing an organization to pay for indicators it already obtained through another provider.

This paper presents a privacy-preserving protocol that allows an organization to query the databases of multiple data providers to obtain an estimate of their total coverage without revealing the data they store. In this way, a customer can make a more informed decision on their choice of CTI providers. We implement this protocol in Rust to validate its performance experimentally: When performed between three CTI providers who collectively have 20,000 unique indicators, our protocol takes less than 6 seconds to execute. The code for our experiments is freely available.
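
For concreteness, the quantity the protocol estimates is the size of the union of the providers' indicator sets (and hence the overlap an organization would otherwise pay for twice). Below is a plaintext, non-private strawman of that computation with made-up indicators; the point of the paper is to obtain this estimate without the providers revealing their sets to anyone, which this toy sketch does not do.

```python
# Toy, non-private strawman: the paper's protocol estimates this union size
# WITHOUT the providers disclosing their indicator sets. Indicators are made up.
provider_feeds = {
    "provider_a": {"evil.example", "198.51.100.7", "bad-attachment.doc"},
    "provider_b": {"evil.example", "203.0.113.9"},
    "provider_c": {"198.51.100.7", "phish.example", "203.0.113.9"},
}

total_coverage = len(set().union(*provider_feeds.values()))
overlap = sum(len(s) for s in provider_feeds.values()) - total_coverage
print(f"unique indicators: {total_coverage}, overlapping entries: {overlap}")
```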
Thomas Attema, Serge Fehr
ePrint Report
On many occasions, the knowledge error $\kappa$ of an interactive proof is not small enough and thus needs to be reduced. This can be done generically by repeating the interactive proof in parallel. While there have been many works studying the effect of parallel repetition on the soundness error of interactive proofs and arguments, the effect of parallel repetition on the knowledge error has largely remained unstudied. Only recently was it shown that the $t$-fold parallel repetition of any interactive protocol reduces the knowledge error from $\kappa$ down to $\kappa^t + \nu$ for any non-negligible term $\nu$. This generic result is suboptimal in that it does not give the knowledge error $\kappa^t$ that one would expect for typical protocols, and, worse, the knowledge error remains non-negligible.

In this work we show that indeed the $t$-fold parallel repetition of any $(k_1,\dots,k_{\mu})$-special-sound multi-round public-coin interactive proof optimally reduces the knowledge error from $\kappa$ down to $\kappa^t$. At the core of our results is an alternative, in some sense more fine-grained, measure of the quality of a dishonest prover than its success probability, which we show characterizes when knowledge extraction is possible. This new measure then turns out to be very convenient when analyzing the parallel repetition of such interactive proofs.

While parallel repetition reduces the knowledge error, it is easily seen to increase the completeness error. For this reason, we generalize our result to the case of $s$-out-of-$t$ threshold parallel repetition, where the verifier accepts a claim if $s$-out-of-$t$ of the parallel instances are accepting. An appropriately chosen threshold $s$ allows both the knowledge error and completeness error to be reduced simultaneously.
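
As a sanity check on the trade-off described above (the $\kappa \to \kappa^t$ claim is the one from the abstract; the completeness bounds are the standard binomial estimates, not results from the paper): if a single execution has completeness error $\rho$ and knowledge error $\kappa$, then

$$t\text{-fold repetition (all must accept):}\qquad \kappa \;\longrightarrow\; \kappa^t, \qquad \rho \;\longrightarrow\; 1-(1-\rho)^t \;\le\; t\rho,$$

$$s\text{-out-of-}t\text{ threshold:}\qquad \rho \;\longrightarrow\; \Pr\!\big[\mathrm{Bin}(t,\rho) > t-s\big],$$

so a threshold $s$ chosen somewhat below $t$ keeps the completeness error small while still reducing the knowledge error; the exact knowledge error of the threshold variant is the subject of the paper and is not reproduced here.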
Shun Watanabe, Kenji Yasunaga
ePrint Report
We introduce a novel framework for quantifying the bit security of security games. Our notion has the operational meaning that a $\lambda$-bit secure game requires a total computational cost of $2^\lambda$ to win the game with high probability, e.g., 0.99. We define bit security both for search-type and decision-type games. Since we identify that these two types of games are structurally different, we treat them differently, but define the bit security using a unified framework to guarantee the same operational interpretation.

The key novelty of our notion of bit security is to employ two types of adversaries: an inner adversary and an outer adversary. While the inner adversary plays a ``usual'' security game, the outer adversary invokes the inner adversary many times to amplify the winning probability for the security game. We find from our framework that the bit security for decision games can be characterized by the information measure called the R\'enyi divergence of order $1/2$ of the inner adversary. The conventional ``advantage,'' defined as the probability of winning the game, characterizes our bit security for search-type games.

We present several security reductions in our framework to justify our notion of bit security. Many of our results quantitatively match the results for the bit security notion proposed by Micciancio and Walter in 2018. In this sense, our bit security strengthens the previous notion of bit security by adding an operational meaning. A difference from their work is that, in our framework, the Goldreich-Levin theorem gives an optimal reduction only for ``balanced'' adversaries who output binary values in a balanced manner.
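
For reference, the order-$1/2$ R\'enyi divergence invoked above is the standard information measure (the paper's normalization or logarithm base may differ):

$$D_{1/2}(P\|Q) \;=\; -2\log \sum_{x}\sqrt{P(x)\,Q(x)},$$

i.e., minus twice the logarithm of the Bhattacharyya coefficient of the two distributions.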
S. Dov Gordon, Jonathan Katz, Mingyu Liang, Jiayu Xu
ePrint Report
In the shuffle model for differential privacy, users locally randomize their data and then submit the results to a trusted ``shuffler'' who mixes the responses before sending them to a server for analysis. This is a promising model for real-world applications of differential privacy, as a series of recent results have shown that, in some cases, the shuffle model offers a strictly better privacy/utility tradeoff than what is possible in a purely local model. The recent ``privacy blanket'' notion provides a simple method for analyzing differentially private protocols in the shuffle model.

A downside of the shuffle model is its reliance on a trusted shuffling mechanism, and it is natural to try to replace this with a distributed shuffling protocol run by the users themselves. Unfortunately, with only one exception, existing fully secure shuffling protocols require $\Omega(n^2)$ communication. In this work, we put forth a notion of differential obliviousness for shuffling protocols, and prove that this notion provides the necessary guarantees for the privacy blanket, without requiring a trusted shuffler. We also show a differentially oblivious shuffling protocol based on onion routing that can tolerate any constant fraction of corrupted users and requires only $O(n \log n)$ communication.
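
To make the trust assumption concrete, here is a toy (non-cryptographic) sketch of the shuffle model itself: users apply binary randomized response locally and a shuffler forwards a random permutation of the reports to the server. This only illustrates the model; the paper's contribution is a differentially oblivious shuffling protocol (based on onion routing) that removes the need for the trusted shuffler simulated below, and none of the names here come from the paper.

```python
import math
import random

def local_randomize(bit: bool, epsilon: float) -> bool:
    """Binary randomized response: keep the true bit w.p. e^eps / (e^eps + 1)."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_true else not bit

def shuffle_reports(reports):
    """Trusted-shuffler step: a uniformly random permutation that breaks
    the link between a user and their report."""
    shuffled = list(reports)
    random.shuffle(shuffled)
    return shuffled

def estimate_count(shuffled, epsilon: float) -> float:
    """Server-side debiasing of the randomized-response counts."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    n = len(shuffled)
    ones = sum(shuffled)
    # E[ones] = p * true_count + (1 - p) * (n - true_count)
    return (ones - (1 - p) * n) / (2 * p - 1)

true_bits = [random.random() < 0.3 for _ in range(10_000)]
reports = [local_randomize(b, epsilon=1.0) for b in true_bits]
received = shuffle_reports(reports)  # the server only sees the shuffled multiset
print("true count:", sum(true_bits), "estimate:", round(estimate_count(received, 1.0)))
```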
Zeyu Liu, Eran Tromer
ePrint Report
Anonymous message delivery systems, such as private messaging services and privacy-preserving payment systems, need a mechanism for recipients to retrieve the messages addressed to them, without leaking metadata or letting their messages be linked. Recipients could download all posted messages and scan for those addressed to them, but communication and computation costs are excessive at scale.

We show how untrusted servers can detect messages on behalf of recipients, and summarize these into a compact encrypted digest that recipients can easily decrypt. These servers operate obliviously and do not learn anything about which messages are addressed to which recipients. Privacy, soundness, and completeness hold even if everyone but the recipient is adversarial and colluding (unlike in prior schemes), and all guarantees are post-quantum secure.

Our starting point is an asymptotically-efficient approach, using Fully Homomorphic Encryption and homomorphically-encoded Sparse Random Linear Codes. We then address the concrete performance using a bespoke tailoring of lattice-based cryptographic components, alongside various algebraic and algorithmic optimizations. This reduces the digest size to a few bits per message scanned. Concretely, the servers' cost is a couple of USD per million messages scanned, and the resulting digests can be decoded by recipients in under 20ms. Our schemes can thus practically attain the strongest form of receiver privacy for current applications such as privacy-preserving cryptocurrencies.
Elena Kirshanova, Alexander May
ePrint Report
Let $As = b + e \bmod q$ be an LWE-instance with ternary keys $s,e \in \{0, \pm 1\}^n$. Let $s$ be taken from a search space of size $\mathcal{S}$. A standard Meet-in-the-Middle attack recovers $s$ in time $\mathcal{S}^{0.5}$. Using the representation technique, a recent improvement of May shows that this can be lowered to approximately $\mathcal{S}^{0.25}$ by guessing a sub-linear number of $\Theta(\frac{n}{\log n})$ coordinates from $e$. While guessing such an amount of $e$ can asymptotically be neglected, for concrete instantiations of e.g. NTRU, BLISS or GLP the additional cost of guessing leads to complexities around $\mathcal{S}^{0.3}$.

We introduce a locality-sensitive hashing (LSH) technique based on Odlyzko's work that avoids any guessing of $e$'s coordinates. This LSH technique involves a comparatively small cost, such that we can significantly improve on previous results, pushing complexities towards the asymptotic bound $\mathcal{S}^{0.25}$. Concretely, using LSH we lower the MitM complexity estimates for the currently suggested NTRU and NTRU Prime instantiations by a factor in the range $2^{20}$ to $2^{49}$, and for BLISS and GLP parameters by a factor in the range $2^{18}$ to $2^{41}$.
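
To put these exponents into perspective (simple arithmetic on the quantities quoted above, not figures from the paper): for an unrestricted ternary secret $s \in \{0,\pm 1\}^n$ the search space satisfies

$$\mathcal{S} \le 3^n \approx 2^{1.585\,n}, \qquad \mathcal{S}^{0.5} \approx 2^{0.79\,n}, \qquad \mathcal{S}^{0.3} \approx 2^{0.48\,n}, \qquad \mathcal{S}^{0.25} \approx 2^{0.40\,n},$$

so moving from $\mathcal{S}^{0.3}$-type estimates towards the $\mathcal{S}^{0.25}$ bound saves roughly a factor $2^{0.08\,n}$; concrete schemes use sparse or fixed-weight ternary keys, so their actual $\mathcal{S}$ is smaller than $3^n$.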
Chris Peikert, Zachary Pepin, Chad Sharp
ePrint Report
Vector commitment (VC) schemes allow one to commit concisely to an ordered sequence of values, so that the values at desired positions can later be proved concisely. In addition, a VC can be statelessly updatable, meaning that commitments and proofs can be updated to reflect changes to individual entries, using knowledge of just those changes (and not the entire vector). VCs have found important applications in verifiable outsourced databases, cryptographic accumulators, and cryptocurrencies. However, to date there have been relatively few post-quantum constructions, i.e., ones that are plausibly secure against quantum attacks.
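
For readers unfamiliar with the interface, a minimal hash-based (Merkle-tree) sketch of the commit/open/verify pattern of a vector commitment is shown below. This is purely illustrative: it is not the paper's lattice-based construction, it omits stateless updates, and all names are made up.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def commit(values):
    """Commit to an ordered vector of byte strings (padded to a power of two)."""
    leaves = [H(b"leaf", v) for v in values]
    while len(leaves) & (len(leaves) - 1):   # pad to a power-of-two length
        leaves.append(H(b"pad"))
    layers = [leaves]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([H(b"node", prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers[-1][0], layers             # root = concise commitment

def open_position(layers, i):
    """Produce a concise proof (sibling path) for position i."""
    path = []
    for layer in layers[:-1]:
        path.append(layer[i ^ 1])            # sibling at this level
        i //= 2
    return path

def verify(root, i, value, path):
    node = H(b"leaf", value)
    for sibling in path:
        node = H(b"node", node, sibling) if i % 2 == 0 else H(b"node", sibling, node)
        i //= 2
    return node == root

values = [b"a", b"b", b"c", b"d"]
root, layers = commit(values)
proof = open_position(layers, 2)
print(verify(root, 2, b"c", proof))          # True
```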

More generally, functional commitment (FC) schemes allow one to concisely and verifiably reveal various functions of committed data, such as linear functions (i.e., inner products, including evaluations of a committed polynomial). Under falsifiable assumptions, all known functional commitment schemes have been limited to ``linearizable'' functions, and there are no known post-quantum FC schemes beyond ordinary VCs.

In this work we give post-quantum constructions of vector and functional commitments based on the standard Short Integer Solution lattice problem (appropriately parameterized):

First, we present new statelessly updatable VCs with significantly shorter proofs than (and efficiency otherwise similar to) the only prior post-quantum, statelessly updatable construction (Papamanthou et al., EUROCRYPT 2013). Our constructions use private-key setup, in which an authority generates public parameters and then goes offline.

Second, we construct functional commitments for \emph{arbitrary (bounded) Boolean circuits} and branching programs. Under falsifiable assumptions, this is the first post-quantum FC scheme beyond ordinary VCs, and the first FC scheme of any kind that goes beyond linearizable functions. Our construction works in a new model involving an authority that generates the public parameters and remains online to provide public, reusable ``opening keys'' for desired functions of committed messages.
Manuel Barbosa, Gilles Barthe, Xiong Fan, Benjamin Grégoire, Shih-Han Hung, Jonathan Katz, Pierre-Yves Strub, Xiaodi Wu, Li Zhou
ePrint Report
EasyCrypt is a formal verification tool used extensively for formalizing concrete security proofs of cryptographic constructions. However, the EasyCrypt formal logics consider only classical attackers, which means that post-quantum security proofs cannot be formalized and machine-checked with this tool. In this paper we prove that a natural extension of the EasyCrypt core logics permits capturing a wide class of post-quantum cryptography proofs, settling a question raised by (Unruh, POPL 2019). Leveraging our positive result, we implement EasyPQC, an extension of EasyCrypt for post-quantum security proofs, and use EasyPQC to verify post-quantum security of three classic constructions: PRF-based MAC, Full Domain Hash and GPV08 identity-based encryption.

20 September 2021

Award
The deadline for nominating IACR members for the 2021 IACR Fellows class is November 15th.

The IACR Fellows Program recognizes outstanding IACR members for technical and professional contributions to the field of cryptology.

Information about nominating a Fellow is available here.
Andrea Caforio, Fatih Balli, Subhadeep Banik
ePrint Report
Public knowledge of the structure of a cryptographic system is a standard assumption in the literature, and algorithms are expected to guarantee security in a setting where only the encryption key is kept secret. Nevertheless, undisclosed proprietary cryptographic algorithms still find widespread use in applications in both the civil and military domains. Even though side-channel-based reverse-engineering attacks that recover the hidden components of custom cryptosystems have been demonstrated for a wide range of constructions, the complete and practical reverse engineering of AES-128-like ciphers had remained unattempted. In this work, we close this gap and propose the first practical reverse engineering of AES-128-like custom ciphers, i.e., algorithms that deploy undisclosed SubBytes, ShiftRows and MixColumns functions. By performing a side-channel-assisted differential power analysis, we show that the number of traces required to fully recover the undisclosed components is relatively small, hence a side-channel attack remains a practical threat. The results apply to both 8-bit and 32-bit architectures and were validated on two common microcontroller platforms.