International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

28 January 2020

Weikeng Chen, Raluca Ada Popa
ePrint Report
File-sharing systems like Dropbox offer insufficient privacy because a compromised server can see the file contents in the clear. Although encryption can hide such contents from the servers, metadata leakage remains significant. The goal of our work is to develop a file-sharing system that hides metadata, including user identities and file access patterns.

Metal is the first file-sharing system that hides such metadata from malicious users while achieving a latency of only a few seconds. The core of Metal consists of a new two-server multi-user oblivious RAM (ORAM) scheme that is secure against malicious users, a metadata-hiding access control protocol, and a capability-sharing protocol. Compared with the state-of-the-art malicious-user file-sharing scheme PIR-MCORAM (Maffei et al., 2017), which does not hide user identities, Metal hides user identities and is 500x faster in terms of amortized latency, or 10^5x faster in terms of worst-case latency.

Anand Aiyer, Xiao Liang, Nilu Nalini, Omkant Pandey
ePrint Report
The established bounds on the round-complexity of (black-box) concurrent zero-knowledge (cZK) consider adversarial verifiers with complete control over the scheduling of messages of different sessions. Consequently, such bounds only represent a \emph{worst-case} study of concurrent schedules, forcing $\widetilde{\Omega}(\log n)$ rounds for \emph{all} protocol sessions. What happens in ``average'' cases against random schedules? Must all sessions still suffer a large number of rounds?

Rosen and Shelat were the first to consider this possibility, and constructed a cZK protocol that adjusts its round-complexity based on existing network conditions. While they provide experimental evidence for its average-case performance, no provable guarantees are known.

In general, a proper framework for studying and understanding the average-case schedules for cZK is missing. We present the first theoretical framework for performing such average-case studies. Our framework models the network as a stochastic process where a new session is opened with probability $p$ or an existing session receives the next message with probability $1-p$; the existing session can be chosen either in a first-in-first-out (FIFO) or last-in-first-out (LIFO) order. These two orders are fundamental and serve as good upper and lower bounds for other simple variations.
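
As a rough illustration (not from the paper), this scheduling model can be simulated directly; the per-session message budget below is a hypothetical parameter added only for the sketch.

    import random
    from collections import deque

    def simulate_schedule(p, order="FIFO", steps=10_000, msgs_per_session=10, seed=0):
        """With probability p open a new session; otherwise deliver the next message
        of an existing open session, chosen in FIFO or LIFO order."""
        rng = random.Random(seed)
        open_sessions = deque()   # sessions that still have pending messages
        remaining = {}            # session id -> messages left to deliver
        trace, next_id = [], 0
        for _ in range(steps):
            if not open_sessions or rng.random() < p:
                sid, next_id = next_id, next_id + 1
                remaining[sid] = msgs_per_session
                open_sessions.append(sid)
                trace.append(("open", sid))
            else:
                sid = open_sessions[0] if order == "FIFO" else open_sessions[-1]
                remaining[sid] -= 1
                trace.append(("msg", sid))
                if remaining[sid] == 0:
                    open_sessions.remove(sid)
        return trace

    # Compare the two fundamental orders for a moderate session-arrival rate.
    for order in ("FIFO", "LIFO"):
        trace = simulate_schedule(p=0.1, order=order)
        opened = sum(1 for ev, _ in trace if ev == "open")
        print(order, "opened", opened, "sessions in", len(trace), "steps")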

We also develop methods for establishing provable average-case bounds for cZK in these models. The bounds in these models turn out to be intimately connected to various properties of one-dimensional random walks that reflect at the origin. Consequently, we establish new and tight asymptotic bounds for such random walks, including: expected rate of return-to-origin, changes of direction, and concentration of ``positive'' movements. These results may be of independent interest.

Our analysis shows that the Rosen-Shelat protocol is highly sensitive to even moderate network conditions, resulting in a large fraction of non-optimal sessions. We construct a more robust protocol by generalizing the ``footer-free'' condition of Rosen-Shelat, which leads to significant improvements in both the FIFO and LIFO models.

Justin Drake, Ariel Gabizon
ePrint Report
We present an enhanced version of the Kate, Zaverucha and Goldberg polynomial commitment scheme [KZG, ASIACRYPT 2010] where a single group element can serve as an opening proof for multiple polynomials, each evaluated at a different arbitrary subset of points. As a sample application, we ``plug'' this scheme into the PLONK proving system [GWC, 2019] to obtain improved proof size and prover run time at the expense of additional verifier ${\mathbb{G}}_2$ operations and pairings, and additional ${\mathbb{G}}_2$ SRS elements.
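
For context (the notation below is ours, not taken from the abstract), the standard single-polynomial, single-point KZG opening check that this work batches looks as follows: the SRS contains powers of a secret $\tau$ in both groups, a commitment to $\phi$ is $C = g_1^{\phi(\tau)}$, and an opening at $z$ to the value $v = \phi(z)$ is the single group element
\[
  \pi = g_1^{q(\tau)}, \qquad q(X) = \frac{\phi(X) - v}{X - z},
\]
which the verifier accepts iff
\[
  e\!\left(C \cdot g_1^{-v},\, g_2\right) = e\!\left(\pi,\, g_2^{\tau - z}\right).
\]
The paper's scheme keeps the proof a single group element while opening several polynomials, each at its own set of points.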

We also present a second scheme where the proof consists of two group elements and the verifier complexity is better than previously known batched verification methods for [KZG].

Benny Applebaum, Amos Beimel, Oded Nir, Naty Peter
ePrint Report
A secret-sharing scheme allows a dealer to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$. The collection of authorized sets is called the access structure. For over 30 years, it has been known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size $2^{n-o(n)}$, and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) reduced the share size to $2^{0.994n+o(n)}$, which was later improved to $2^{0.892n+o(n)}$ by Applebaum et al. (EUROCRYPT 2019).

In this paper we improve the exponent of general secret-sharing down to $0.637$. For the special case of linear secret-sharing schemes, we get an exponent of $0.762$ (compared to $0.942$ of Applebaum et al.).

As our main building block, we introduce a new \emph{robust} variant of conditional disclosure of secrets (robust CDS) that achieves unconditional security even under a limited form of re-usability. We show that the problem of general secret-sharing reduces to robust CDS with sub-exponential overhead, and derive our main result by implementing robust CDS with a non-trivial exponent. The latter construction follows from a general immunization procedure that turns a standard CDS into a robust one.

San Antonio, United States, 9 August - 11 August 2020
Event Calendar
Event date: 9 August to 11 August 2020
Submission deadline: 1 May 2020
Notification: 22 June 2020
Lyngby, Denmark, 28 July 2020
Event Calendar
Event date: 28 July 2020
Submission deadline: 17 February 2020
Notification: 23 March 2020

27 January 2020

University of Lyon, CNRS, Saint-Etienne, France - Laboratoire Hubert Curien
Job Posting
The Hubert Curien laboratory is a joint research unit of the University of Lyon, Saint-Etienne, and the National Research Centre CNRS. Its Secure Embedded Systems & Hardware Architectures (SESAM) group is one of the leading European research groups in the area of hardware security. The SESAM group explores three main aspects of hardware security:
- random number generation and physical unclonable function implementation in logic devices, including design, characterization, test and security evaluation;
- the design of hardware architectures resistant to passive and active physical attacks;
- the security of systems on chip (SoC).
The group offers a post-doc research position to work on one of these three aspects of hardware security. We are looking for an excellent candidate with a PhD and a track record in hardware security. The post-doc position will start as soon as possible and run until the end of March 2021 (non-flexible end date); it could be extended for one more year depending on the scientific work produced during the first year.

Closing date for applications:

Contact: To apply, please send Prof. L. Bossuet your detailed CV (with publication list), a statement of motivation for applying (1 page), and the names of at least two people who can provide reference letters (by e-mail).

More information: https://laboratoirehubertcurien.univ-st-etienne.fr/en/teams/secure-embedded-systems-hardware-architectures.html.


26 January 2020

Eman Salem Alashwali, Pawel Szalachowski, Andrew Martin
ePrint Report
If two or more identical HTTPS clients, located in different geographic regions, make an HTTPS request to the same domain (e.g. example.com) on the same day, will they receive the same HTTPS security guarantees in response? Our results give evidence that this is not always the case. We conduct scans of the top 250,000 most visited domains on the Internet from clients located in five different regions: Australia, Brazil, India, the UK, and the US. Our scans gather data from both the application layer (URLs and HTTP headers) and the transport layer (the server's selected TLS version, ciphersuite, and certificate).

Overall, we find that HTTPS inconsistencies at the application layer are higher than those at the transport layer. We also find that HTTPS security inconsistencies are strongly related to URL and IP diversity among regions, and to a lesser extent to the presence of redirections. Further manual inspection shows that there are several reasons behind URL diversity among regions, such as downgrading to the plain-HTTP protocol, or using different subdomains, different TLDs, or different home page documents. Furthermore, we find that downgrading to plain HTTP is related to websites' regional blocking.

We also provide attack scenarios that show how an attacker can benefit from HTTPS security inconsistencies, and we introduce a new attack scenario which we call the "region confusion" attack. Finally, based on our observations, we draw some recommendations, including the need for testing tools that help domain administrators and users detect and mitigate regional domain inconsistencies, standardising the format of regional domains with the (domain) same-origin policy in mind, standardising secure URL redirections, and avoiding redirections whenever possible.
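
As a rough sketch of the kind of per-domain, per-region probe such a scan performs (this is not the authors' tooling, and the domain is a placeholder), Python's standard ssl and urllib modules suffice to record both layers:

    import socket
    import ssl
    import urllib.request

    def probe(domain, timeout=10):
        """Record transport-layer (TLS version, ciphersuite, certificate subject)
        and application-layer (final URL after redirects, headers) data."""
        result = {"domain": domain}
        ctx = ssl.create_default_context()
        with socket.create_connection((domain, 443), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=domain) as tls:
                result["tls_version"] = tls.version()        # e.g. "TLSv1.3"
                result["ciphersuite"] = tls.cipher()[0]
                result["cert_subject"] = tls.getpeercert().get("subject")
        with urllib.request.urlopen("https://" + domain, timeout=timeout) as resp:
            result["final_url"] = resp.geturl()              # reveals redirections
            result["headers"] = dict(resp.headers)
        return result

    print(probe("example.com"))

Comparing such records across vantage points in different regions is what surfaces the inconsistencies discussed above.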

Kentaro Tamura, Yutaka Shikano
ePrint Report
Quantum random number generators (QRNGs) produce theoretically unpredictable random numbers. A typical QRNG is implemented in quantum optics [Herrero-Collantes, M., Garcia-Escartin, J. C.: Quantum Random Number Generators. Rev. Mod. Phys. \textbf{89}, 015004 (2017)]. Quantum computers become QRNGs when given certain programs. The simplest example of such a program applies the Hadamard gate to all qubits and then measures them. By repeatedly running this program on a 20-qubit superconducting quantum computer (IBM 20Q Tokyo), we obtained a sample of length 43,560. However, statistical analysis showed that this sample was biased and correlated. After post-processing, one of the resulting samples passed statistical tests. To show the effectiveness of post-processing, a larger sample size is required. The present study of quantum random number generation and statistical testing may provide a potential candidate for benchmarking tests of actual quantum computing devices.
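
The abstract does not spell out which post-processing was applied; a classic example of such debiasing is the von Neumann extractor, sketched below on a synthetic biased sample (note that it removes bias from independent bits but does not remove correlation).

    import random

    def von_neumann_extract(bits):
        """Map bit pairs 01 -> 0 and 10 -> 1, discard 00 and 11.
        Unbiased output for independent, identically biased input bits."""
        return [b0 for b0, b1 in zip(bits[0::2], bits[1::2]) if b0 != b1]

    # Synthetic biased sample standing in for raw device output (bias is illustrative).
    rng = random.Random(1)
    raw = [1 if rng.random() < 0.55 else 0 for _ in range(43_560)]
    post = von_neumann_extract(raw)
    print(len(raw), sum(raw) / len(raw))     # biased towards 1
    print(len(post), sum(post) / len(post))  # close to 0.5, but much shorter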

Thomas Häner, Samuel Jaques, Michael Naehrig, Martin Roetteler, Mathias Soeken
ePrint Report
We present improved quantum circuits for elliptic curve scalar multiplication, the most costly component in Shor's algorithm to compute discrete logarithms in elliptic curve groups. We optimize low-level components such as reversible integer and modular arithmetic through windowing techniques and more adaptive placement of uncomputing steps, and improve over previous quantum circuits for modular inversion by reformulating the binary Euclidean algorithm. Overall, we obtain an affine Weierstrass point addition circuit that has lower depth and uses fewer T gates than previous circuits. While previous work mostly focuses on minimizing the total number of qubits, we present various trade-offs between different cost metrics including the number of qubits, circuit depth and T-gate count. Finally, we provide a full implementation of point addition in the Q# quantum programming language that allows unit tests and automatic quantum resource estimation for all components.
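
For reference (a classical sketch, not the reversible circuit itself), the binary Euclidean approach to modular inversion that the paper reformulates uses only shifts, additions and subtractions, which is what makes it attractive for a quantum implementation:

    def binary_mod_inverse(a, p):
        """Compute a^{-1} mod p for odd p with the binary extended Euclidean
        algorithm; assumes gcd(a, p) = 1."""
        u, v = a % p, p
        x1, x2 = 1, 0
        while u != 1 and v != 1:
            while u % 2 == 0:
                u //= 2
                x1 = x1 // 2 if x1 % 2 == 0 else (x1 + p) // 2
            while v % 2 == 0:
                v //= 2
                x2 = x2 // 2 if x2 % 2 == 0 else (x2 + p) // 2
            if u >= v:
                u, x1 = u - v, x1 - x2
            else:
                v, x2 = v - u, x2 - x1
        return x1 % p if u == 1 else x2 % p

    p = 2**255 - 19
    assert (binary_mod_inverse(5, p) * 5) % p == 1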

Charbel Saliba, Laura Luzzi, Cong Ling
ePrint Report
We consider a key encapsulation mechanism (KEM) based on ring-LWE where reconciliation is performed on an $N$-dimensional lattice using Wyner-Ziv coding. More precisely, we consider Barnes-Wall lattices and use Micciancio and Nicolosi's bounded distance decoder with polynomial complexity $\mathcal{O}(N \log^2(N))$. We show that in the asymptotic regime for large $N$, the achievable key rate is $\Theta(\log N)$ bits per dimension, while the error probability $P_e$ vanishes exponentially in $N$. Unlike previous works, our scheme does not require a dither.

Rishiraj Bhattacharyya
ePrint Report
The efficiency of a black-box reduction is an important concern in modern cryptography. Traditionally, time complexity and success probability were considered the main measures of a reduction's efficiency. In CRYPTO 2017, Auerbach et al. introduced the notion of memory-tightness in cryptographic reductions and showed a memory-tight reduction of the existential unforgeability of the RSA-FDH signature scheme. Unfortunately, their techniques do not extend directly to reductions involving intricate RO-programming. The problem seems to be inherent, as all other existing results on memory-tightness are lower bounds and impossibility results. In fact, Auerbach et al. conjectured that a memory-tight reduction for the IND-CCA security of Hashed-ElGamal KEM is impossible.

- We refute the above conjecture. Using a simple RO simulation technique, we provide memory-tight reductions of the IND-CCA security of the Cramer-Shoup and the ECIES versions of Hashed-ElGamal KEM.

- We prove memory-tight reductions for different variants of the Fujisaki-Okamoto transformation. We analyze the modular transformations introduced by Hofheinz, H\"{o}velmanns and Kiltz (TCC 2017). In addition to the constructions involving implicit rejection, we present a memory-tight reduction for the IND-CCA security of the transformation ${\mbox{QFO}_m^\perp}$. Our techniques can withstand correctness errors, and are applicable to several lattice-based KEM candidates.

Daniel R. L. Brown
ePrint Report
A nothing-up-my-sleeve number is a cryptographic constant, such as a field size in elliptic curve cryptography, with qualities that assure users against subversion of the number by the system designer. A number with low Kolmogorov descriptional complexity resists being subverted, to the extent that the speculated subversion would leave a trace that cannot be hidden within the short description. The roll programming language, a version of Gödel's 1930s definition of computability, can somewhat objectively quantify low descriptional complexity, a nothing-up-my-sleeve quality, of a number. For example, $(2^{127}-1)^2$ and $2^{255}-19$ can be described with roll programs of 58 and 84 words, respectively.

Fabio Banfi, Ueli Maurer
ePrint Report
We study anonymity of probabilistic encryption (pE) and probabilistic authenticated encryption (pAE). We start by providing concise game-based security definitions capturing anonymity for both pE and pAE, and then show that the commonly used notion of indistinguishability from random ciphertexts (IND\$) indeed implies the anonymity notions for both pE and pAE. This is in contrast to a recent work of Chan and Rogaway (Asiacrypt 2019), where it is shown that IND\$-secure nonce-based authenticated encryption can only achieve anonymity if a sophisticated transformation is applied. Moreover, we also show that the Encrypt-then-MAC paradigm is anonymity-preserving, in the sense that if both the underlying probabilistic MAC (pMAC) and pE schemes are anonymous, then also the resulting pAE scheme is. Finally, we provide a composable treatment of anonymity using the constructive cryptography framework of Maurer and Renner (ICS 2011). We introduce adequate abstractions modeling various kinds of anonymous communication channels for many senders and one receiver in the presence of an active man-in-the-middle adversary. Then we show that the game-based notions indeed are anonymity-preserving, in the sense that they imply constructions between such anonymous channels, thus generating authenticity and/or confidentiality as expected, but crucially retaining anonymity if present.
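
As a reminder of the paradigm (a generic sketch, not the paper's probabilistic formalization), Encrypt-then-MAC first encrypts and then authenticates the ciphertext; the primitive choices below (AES-CTR with HMAC-SHA-256 via the pyca/cryptography package) are illustrative only.

    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def etm_encrypt(k_enc, k_mac, msg):
        """Encrypt-then-MAC: ciphertext = nonce || AES-CTR(msg), tag = HMAC(ciphertext)."""
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(k_enc), modes.CTR(nonce)).encryptor()
        ct = nonce + enc.update(msg) + enc.finalize()
        tag = hmac.new(k_mac, ct, hashlib.sha256).digest()
        return ct + tag

    def etm_decrypt(k_enc, k_mac, blob):
        ct, tag = blob[:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(k_mac, ct, hashlib.sha256).digest()):
            raise ValueError("authentication failed")
        nonce, body = ct[:16], ct[16:]
        dec = Cipher(algorithms.AES(k_enc), modes.CTR(nonce)).decryptor()
        return dec.update(body) + dec.finalize()

    k1, k2 = os.urandom(32), os.urandom(32)
    assert etm_decrypt(k1, k2, etm_encrypt(k1, k2, b"hello")) == b"hello"

The paper's result is that if both underlying schemes are anonymous, the composed pAE scheme is as well.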

23 January 2020

Ben Kreuter, Tancrede Lepoint, Michele Orru, Mariana Raykova
ePrint Report
We present a cryptographic construction for anonymous tokens with a private metadata bit, a primitive that enables an issuer to provide a user with anonymous trust tokens that embed a single private metadata bit, which is accessible only to the party holding the secret authority key and is private with respect to anyone else. Our construction provides unforgeability, unlinkability, and privacy of the metadata bit.

Dimitrios Sikeridis, Panos Kampanakis, Michael Devetsikiotis
ePrint Report
The potential development of large-scale quantum computers is raising concerns among IT and security research professionals due to their ability to solve (elliptic curve) discrete logarithm and integer factorization problems in polynomial time. All currently used public-key algorithms would be deemed insecure in a post-quantum (PQ) setting. In response, the National Institute of Standards and Technology (NIST) has initiated a process to standardize quantum-resistant crypto algorithms, focusing primarily on their security guarantees. Since PQ algorithms differ significantly from classical ones, their overall evaluation should not be performed out of context. This work presents a detailed performance evaluation of the NIST signature algorithm candidates and investigates the latency they impose on TLS 1.3 connection establishment under realistic network conditions. In addition, we investigate their impact on TLS session throughput and analyze the trade-off between lengthy PQ signatures and computationally heavy PQ cryptographic operations.

Our results demonstrate that the adoption of at least two PQ signature algorithms would be viable with little additional overhead over current signature algorithms. Also, we argue that many NIST PQ candidates can effectively be used for less time-sensitive applications, and provide an in-depth discussion on the integration of PQ authentication in encrypted tunneling protocols, along with the related challenges, improvements, and alternatives. Finally, we propose and evaluate the combination of different PQ signature algorithms across the same certificate chain in TLS. Results show a reduction of the TLS handshake time and a significant increase of a server's TLS tunnel connection rate over using a single PQ signature scheme.

Thomas Agrikola, Dennis Hofheinz, Julia Kastner
ePrint Report
We provide a standard-model implementation (of a relaxation) of the algebraic group model (AGM, [Fuchsbauer, Kiltz, Loss, CRYPTO 2018]). Specifically, we show that every algorithm that uses our group is algebraic, and hence ``must know'' a representation of its output group elements in terms of its input group elements. Here, ``must know'' means that a suitable extractor can extract such a representation efficiently. We stress that our implementation relies only on falsifiable assumptions in the standard model, and in particular does not use any knowledge assumptions.

As a consequence, our group allows us to transport a number of results obtained in the AGM into the standard model, under falsifiable assumptions. For instance, we show that in our group, several Diffie-Hellman-like assumptions (including computational Diffie-Hellman) are equivalent to the discrete logarithm assumption. Furthermore, we show that our group allows us to prove the Schnorr signature scheme tightly secure in the random oracle model.

Our construction relies on indistinguishability obfuscation, and hence should not be considered as a practical group itself. However, our results show that the AGM is a realistic computational model (since it can be instantiated in the standard model), and that results obtained in the AGM are also possible with standard-model groups.

Dima Grigoriev, Vladimir Shpilrain
ePrint Report
A blockchain is redactable if a private key holder (e.g. a central authority) can change any single block without violating the integrity of the whole blockchain, but no other party can do so. In this paper, we offer a simple method of constructing redactable blockchains inspired by the ideas underlying the well-known RSA encryption scheme. Notably, our method can be used in conjunction with any reasonable hash function used to build a blockchain. Public immutability of a blockchain in our construction is based on the computational hardness of the RSA problem and not on properties of the underlying hash function. Corruption resistance is based on the computational hardness of the discrete logarithm problem.

Pranab Chakraborty, Subhamoy Maitra
ePrint Report
In this note we provide a theoretical argument towards an unsolved question related to Mantin's Digraph Repetition Bias (2005) that is observed in the keystream of RC4. The open question, which depends on the observation that the arrival of four consecutive identical bytes in the RC4 keystream is slightly negatively biased, was posed by Bricout et al. [Des. Codes Cryptogr. (2018) 86:743-770] in 2016.

Taylor R Campbell
ePrint Report
We present Daence, a deterministic authenticated cipher based on a pseudorandom function family and a universal hash family, similar to SIV. We recommend instances with Salsa20 or ChaCha, and Poly1305, for high performance, high security, and easy deployment.
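
Daence's exact composition is not given in the abstract; the generic SIV pattern it is compared to (derive a synthetic IV as a pseudorandom function of the message, then encrypt under that IV) can be sketched as follows, with HMAC-SHA-256 and ChaCha20 (via the pyca/cryptography package) as placeholder primitives rather than Daence's actual ones.

    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    def siv_seal(k_prf, k_enc, msg):
        # Synthetic IV: a PRF of the whole message, so encryption is deterministic.
        siv = hmac.new(k_prf, msg, hashlib.sha256).digest()[:16]
        enc = Cipher(algorithms.ChaCha20(k_enc, siv), mode=None).encryptor()
        return siv + enc.update(msg)

    def siv_open(k_prf, k_enc, blob):
        siv, ct = blob[:16], blob[16:]
        dec = Cipher(algorithms.ChaCha20(k_enc, siv), mode=None).decryptor()
        msg = dec.update(ct)
        # Authenticity: the transmitted IV must equal the PRF of the recovered message.
        if not hmac.compare_digest(siv, hmac.new(k_prf, msg, hashlib.sha256).digest()[:16]):
            raise ValueError("authentication failed")
        return msg

    k1, k2 = os.urandom(32), os.urandom(32)
    assert siv_open(k1, k2, siv_seal(k1, k2, b"attack at dawn")) == b"attack at dawn"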