International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

17 November 2021

Xavier Bultel
ePrint Report
A Posteriori Openable Public Key Encryptions (APOPKE) allow any user to generate a constant-size key that decrypts the messages they have sent over a chosen period of time. As an important feature, the period can be chosen dynamically after the messages have been sent. This primitive was introduced in 2016 by Bultel and Lafourcade. They also defined Chosen-Plaintext Attack (CPA) security for APOPKE and designed a scheme called GAPO, which is CPA secure in the random oracle model. In this paper, we formalize Chosen-Ciphertext Attack (CCA) security for APOPKE, design a scheme called CHAPO (for CHosen-ciphertext Attack resistant a Posteriori Openable encryption), and prove its CCA security in the standard model. CHAPO is approximately twice as efficient as GAPO and is more generic. We also give new applications and discuss the practical impact of its CCA security.
Nico Döttling, Vipul Goyal, Giulio Malavolta, Justin Raizes
ePrint Report
In this work we consider the following question: What is the cost of security for multi-party protocols? Specifically, given an insecure protocol where parties exchange (in the worst case) $\Gamma$ bits in $N$ rounds, is it possible to design a secure protocol with communication complexity close to $\Gamma$ and $N$ rounds? We systematically study this problem in a variety of settings and we propose solutions based on the intractability of different cryptographic problems. For the case of two parties we design an interaction-preserving compiler where the number of bits exchanged in the secure protocol approaches $\Gamma$ and the number of rounds is exactly $N$, assuming the hardness of standard problems over lattices. For the more general multi-party case, we obtain the same result assuming either (i) an additional round of interaction or (ii) the existence of extractable witness encryption and succinct non-interactive arguments of knowledge. As a contribution of independent interest, we construct the first multi-key fully homomorphic encryption scheme with message-to-ciphertext ratio (i.e., rate) of $1 - o(1)$, assuming the hardness of the learning with errors (LWE) problem. We view our work as a support for the claim that, as far as interaction and communication are concerned, one does not need to pay a significant price for security in multi-party protocols.
Phil Hebborn, Baptiste Lambin, Gregor Leander, Yosuke Todo
ePrint Report
Integral attacks are among the classical attack vectors against block ciphers. However, providing arguments that a given cipher is resistant against these attacks is notoriously difficult. In this paper, based solely on the assumption of independent round keys, we develop significantly stronger arguments than were possible before: our main result shows how to argue that the sum of ciphertexts over any possible subset of plaintexts is key-dependent, i.e., that no integral distinguisher exists.
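To make the notion concrete, the toy Python sketch below (an illustration of what an integral distinguisher is, not the paper's technique) shows that for a one-round cipher C(p) = S(p XOR k), the XOR-sum of ciphertexts over all plaintext bytes is independent of the key; the paper's goal is precisely to argue that no plaintext subset of a full cipher admits such a key-independent sum.

# Toy illustration of an integral property: for C(p) = S(p ^ k), the
# XOR-sum of ciphertexts over all 256 plaintext bytes equals the XOR of
# all S-box outputs, so it does not depend on the key k.  Real integral
# attacks look for exactly this kind of key-independent sum.

import random

random.seed(0)
SBOX = list(range(256))          # a fixed random 8-bit bijection
random.shuffle(SBOX)

def toy_encrypt(p, k):
    """One-round toy cipher: key addition followed by an S-box."""
    return SBOX[p ^ k]

def integral_sum(k):
    """XOR-sum of ciphertexts over the full set of plaintext bytes."""
    acc = 0
    for p in range(256):
        acc ^= toy_encrypt(p, k)
    return acc

# The sum is the same for every key, i.e. it is key-independent.
sums = {integral_sum(k) for k in range(256)}
print(sums)                      # a single value -> an integral distinguisher exists
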
Alisa Pankova, Jan Willemson
ePrint Report
This paper studies quantitative relationships between privacy, verifiability, accountability, and coercion-resistance of voting protocols. We adapt existing definitions to make them better comparable with each other and determine which bounds a requirement on one property imposes on another. It turns out that, in terms of the proposed definitions, verifiability and accountability do not necessarily put constraints on privacy and coercion-resistance. However, the relations between these notions become more interesting in the context of particular attacks. Depending on the assumptions and the attacker's goal, voter coercion may benefit from verifiability that is either too weak or too strong.
Nicolas Alhaddad, Sisi Duan, Mayank Varia, Haibin Zhang
ePrint Report
Erasure coding is a key tool to reduce the space and communication overhead in fault-tolerant distributed computing. State-of-the-art distributed primitives, such as asynchronous verifiable information dispersal (AVID), reliable broadcast (RBC), multi-valued Byzantine agreement (MVBA), and atomic broadcast, all use erasure coding.

This paper introduces an erasure coding proof (ECP) system, which allows the encoder to prove succinctly and non-interactively that an erasure-coded fragment is consistent with a constant-sized commitment to the original data block. Each fragment can be verified independently of the other fragments.

Our proof system is based on polynomial commitments, with new batching techniques that may be of independent interest. To illustrate the benefits of our ECP system, we show how to build the first AVID protocol with optimal message complexity, word complexity, and communication complexity.
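For background on the erasure-coding layer only (the polynomial-commitment-based proof system itself is not sketched here), the following Python example Reed-Solomon-encodes a block of k field elements as n polynomial evaluations and recovers the block from any k fragments by Lagrange interpolation; the prime modulus and parameters are illustrative choices, not those of the paper.

# Illustrative Reed-Solomon-style erasure coding over a prime field: a
# block of k field elements defines a degree-(k-1) polynomial, the n
# fragments are its evaluations, and any k fragments recover the block.
# (The ECP commitment/proof layer is not shown.)

P = 2**61 - 1  # a prime modulus, chosen here only for illustration

def encode(block, n):
    """block: list of k field elements; returns n fragments (i, f(i))."""
    def f(x):
        acc = 0
        for c in reversed(block):          # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def decode(fragments, k):
    """Recover the k block elements from any k fragments via Lagrange interpolation."""
    pts = fragments[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        basis = [1]                        # coefficients of prod_{j != i} (x - xj), low-to-high
        denom = 1
        for j, (xj, _) in enumerate(pts):
            if i == j:
                continue
            new = [0] * (len(basis) + 1)
            for d, b in enumerate(basis):
                new[d + 1] = (new[d + 1] + b) % P       # b * x
                new[d] = (new[d] - xj * b) % P          # b * (-xj)
            basis = new
            denom = (denom * (xi - xj)) % P
        scale = yi * pow(denom, P - 2, P) % P           # yi / denom mod P
        for d, b in enumerate(basis):
            coeffs[d] = (coeffs[d] + scale * b) % P
    return coeffs

block = [11, 22, 33, 44]                   # k = 4 data elements
frags = encode(block, n=7)
assert decode(frags[2:6], k=4) == block    # any 4 of the 7 fragments recover the block
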
Valeh Farzaliyev, Jan Willemson, Jaan Kristjan Kaasik
ePrint Report
Mix-networks were first proposed by Chaum in the late 1970s and early 1980s as a general tool for building anonymous communication systems. Classical mix-net implementations rely on standard public key primitives (e.g. ElGamal encryption) that will become vulnerable once a sufficiently powerful quantum computer is built. Thus, there is a need to develop quantum-resistant mix-nets. This paper focuses on the application case of electronic voting, where the number of votes to be mixed may reach hundreds of thousands or even millions. We propose an improved architecture for lattice-based post-quantum mix-nets featuring more efficient zero-knowledge proofs while maintaining established security assumptions. Our current implementation scales up to 100,000 votes, still leaving a lot of room for future optimisation.
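For contrast with the lattice-based design, the sketch below shows the classical ElGamal re-encryption mix step the abstract alludes to: a mix node re-randomizes each ciphertext by multiplying in a fresh encryption of 1 and then shuffles. The toy group (p = 23, g = 4) is for illustration only and is of course insecure; the paper's post-quantum construction and its zero-knowledge shuffle proofs are not shown.

# Classical (pre-quantum) re-encryption mix step with ElGamal, toy
# parameters only: outputs cannot be linked to inputs because each
# ciphertext is re-randomized and the list is shuffled.

import random

p, q, g = 23, 11, 4          # safe prime p = 2q + 1; g generates the order-q subgroup

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def reencrypt(pk, ct):
    a, b = ct
    r = random.randrange(1, q)                       # multiply in a fresh encryption of 1
    return (a * pow(g, r, p)) % p, (b * pow(pk, r, p)) % p

def mix(pk, cts):
    out = [reencrypt(pk, ct) for ct in cts]
    random.shuffle(out)
    return out

def decrypt(sk, ct):
    a, b = ct
    return (b * pow(a, p - 1 - sk, p)) % p           # b * a^{-sk} mod p

sk, pk = keygen()
votes = [pow(g, v, p) for v in (1, 2, 3)]            # votes encoded as group elements
mixed = mix(pk, [encrypt(pk, v) for v in votes])
assert sorted(decrypt(sk, c) for c in mixed) == sorted(votes)
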
Navid Nasr Esfahani, Douglas Stinson
ePrint Report
All-or-nothing transforms (AONTs) were originally defined by Rivest as bijections from $s$ input blocks to $s$ output blocks such that no information can be obtained about any input block in the absence of any output block. Numerous generalizations and extensions of all-or-nothing transforms have been discussed in recent years, many of which are motivated by diverse applications in cryptography, information security, secure distributed storage, etc. In particular, $t$-AONTs, in which no information can be obtained about any $t$ input blocks in the absence of any $t$ output blocks, have received considerable study. In this paper, we study three generalizations of AONTs that are motivated by applications due to Pham et al. and Oliveira et al. We term these generalizations rectangular, range, and restricted AONTs. Briefly, in a rectangular AONT, the number of outputs is greater than the number of inputs. A range AONT satisfies the $t$-AONT property for a range of consecutive values of $t$. Finally, in a restricted AONT, the unknown outputs are assumed to occur within a specified set of "secure" output blocks. We study existence and non-existence and provide examples and constructions for these generalizations. We also demonstrate interesting connections with combinatorial structures such as orthogonal arrays, split orthogonal arrays, MDS codes and difference matrices.
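For readers new to the primitive, the sketch below implements a package-transform-style AONT with SHA-256 standing in for the block cipher; it maps s input blocks to s+1 output blocks, which matches the "rectangular" setting (more outputs than inputs) rather than Rivest's original bijective definition. The construction and parameters are illustrative, not the paper's.

# Sketch of an all-or-nothing transform in the style of Rivest's
# package transform, with SHA-256 standing in for the block cipher.
# s 32-byte input blocks map to s+1 output blocks; the transform is
# keyless and invertible given all output blocks.

import hashlib, os

BLOCK = 32
K0 = b"\x00" * BLOCK                                 # fixed, public key for the inner hash

def prf(key, i):
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def aont_forward(blocks):
    k = os.urandom(BLOCK)                            # random package key
    out = [xor(m, prf(k, i)) for i, m in enumerate(blocks)]
    last = k
    for i, c in enumerate(out):                      # fold the package key into the last block
        last = xor(last, hashlib.sha256(K0 + c + i.to_bytes(8, "big")).digest())
    return out + [last]

def aont_inverse(out):
    *body, last = out
    k = last
    for i, c in enumerate(body):                     # recover the package key
        k = xor(k, hashlib.sha256(K0 + c + i.to_bytes(8, "big")).digest())
    return [xor(c, prf(k, i)) for i, c in enumerate(body)]

msg = [os.urandom(BLOCK) for _ in range(4)]          # s = 4 input blocks
assert aont_inverse(aont_forward(msg)) == msg
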
Mahmoud Yehia, Riham AlTawy, T. Aaron Gulliver
ePrint Report
G-Merkle (GM) (PQCrypto 2018) is the first hash-based group signature scheme; it was stated that multi-tree approaches are not applicable to it, thus limiting the maximum number of supported signatures to $2^{20}$. DGM (ESORICS 2019) is a dynamic and revocable GM-based group signature scheme that utilizes a computationally expensive puncturable encryption for revocation and requires interaction between verifiers and the group manager for signature verification. In this paper, we propose GMMT, a hash-based group signature scheme that provides solutions to the aforementioned challenges of the two schemes. GMMT builds on GM and adopts a multi-tree construction that builds new GM trees to assign new signing leaves while keeping the group public key unchanged. Compared to a single GM instance, which enables $2^{20}$ signatures, GMMT allows growing the multi-tree structure adaptively to support $2^{64}$ signatures under the same public key. Moreover, GMMT has a revocation mechanism that attains linkable anonymity of revoked signatures and has logarithmic verification computational complexity, compared to the linear complexity of DGM. The group manager in GMMT requires storage that is linear in the number of members, while the corresponding storage in DGM is linear in the number of signatures supported by the system. Concretely, for a system that supports $2^{64}$ signatures with $2^{15}$ members and provides 256-bit security, the required storage of the group manager is 1 MB (resp. $10^{8.7}$ TB) in GMMT (resp. DGM).
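As generic background on the Merkle-tree machinery underlying GM, DGM and GMMT, the sketch below builds a single Merkle tree over placeholder one-time public keys and verifies an authentication path against the root; SHA-256 and the 8-leaf tree are illustrative choices, and GMMT's multi-tree and group-management layers are not shown.

# Shared building block of hash-based (group) signatures: a Merkle tree
# over leaf public keys, with an authentication path per leaf.

import hashlib

H = lambda data: hashlib.sha256(data).digest()

def build_tree(leaves):
    """leaves: byte strings, length a power of two; returns all tree levels."""
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                                    # levels[-1][0] is the root / public key

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])                # sibling at this level
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [b"OTS public key %d" % i for i in range(8)]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(root, leaves[5], 5, auth_path(levels, 5))
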
Mahmoud Yehia, Riham AlTawy, T. Aaron Gulliver
ePrint Report
Group Merkle (GM) (PQCrypto 2018) and Dynamic Group Merkle (DGM) (ESORICS 2019) are recent proposals for post-quantum hash-based group signature schemes. They are designed as generic constructions that employ any stateful Merkle hash-based signature scheme. XMSS-T (PKC 2016, RFC 8391) is the latest stateful Merkle hash-based signature scheme, providing (almost) optimal parameters. In this paper, we show that the setup phase of both GM and DGM does not enable drop-in instantiation with XMSS-T, which limits both designs to employing earlier XMSS versions with sub-optimal parameters and negatively affects their performance. Thus, we provide a tweak to the setup phase of GM and DGM to overcome this limitation and enable the adoption of XMSS-T. Moreover, we analyze the bit security of DGM when instantiated with XMSS-T and show that it is susceptible to multi-target attacks because of the parallel Signing Merkle Trees (SMT) approach. More precisely, when DGM is used to sign $2^{64}$ messages, its bit security is 44 bits less than that of XMSS-T. Finally, we provide a DGM variant that mitigates multi-target attacks and show that it attains the same bit security as XMSS-T.
Mahmoud Yehia, Riham AlTawy, T. Aaron Gulliver
ePrint Report
SPHINCS+ is a stateless hash-based digital signature scheme and an alternate candidate in round 3 of the NIST Post-Quantum Cryptography standardization competition. Although not selected as a finalist because of its performance, SPHINCS+ may be considered for standardization by NIST after another round of evaluations. In this paper, we propose a Verifiable Obtained Random Subsets (v-ORS) generation mechanism which, with one extra hash computation, binds the message to the signing FORS instance (the underlying few-time signature algorithm). This enables SPHINCS+ to offer more security against generic attacks because the proposed modification restricts the ORS generation to use a hash key from the utilized signing FORS instance. Consequently, such a modification enables the exploration of different parameter sets for FORS to achieve better performance at the same security level. For instance, when using v-ORS, one parameter set for SPHINCS+-256s provides an 82.9% reduction in the computation cost of FORS, which leads to around a 27% reduction in the number of hash calls of the signing procedure. Given that NIST has identified the performance of SPHINCS+ as its main drawback, these results are a step forward on the path to standardization.
Christopher Battarbee, Delaram Kahrobaei, Dylan Tailor, Siamak F. Shahandashti
ePrint Report
All instances of the semidirect key exchange protocol, a generalisation of the famous Diffie-Hellman key exchange protocol, satisfy the so-called ``telescoping equality''; in some cases, this equality has been used to construct an attack. In this report we present computational evidence suggesting that an instance of the scheme called `MOBS (Matrices Over Bitstrings)' is an example of a scheme where the telescoping equality has too many solutions to be a practically viable means to conduct an attack.
Expand

15 November 2021

Election
The 2021 Election for Directors of the IACR Board is now open.

You may vote as often as you wish now through November 16th using the Helios cryptographically verifiable election system (https://heliosvoting.org), but only your last vote will be counted.

Please see the Helios website (https://heliosvoting.org) for a brief overview of how the Helios system works, and https://www.iacr.org/elections/eVoting/ for information on the IACR decision to adopt Helios.

2021 members of the IACR (generally people who attended an IACR event in 2020) should shortly receive, or may have already received, voting credentials from system@heliosvoting.org sent to their email address of record with the IACR. If you believe you have not received the email, please check your spam folder first. Questions about this election may be sent to elections@iacr.org.

Information about the candidates can be found below and also at https://iacr.org/elections/2021/candidates.php.
Jean-Pierre Münch, Thomas Schneider, Hossein Yalame
ePrint Report
The symmetric cryptographic primitive of choice today is AES. Its security is well-studied and hardware acceleration is available on a variety of platforms. Following the success of AES and the 128-bit AES-NI instructions for it, Intel has extended the x86 instruction set with Vector AES (VAES) instructions. For the first time, we evaluate the performance impact that these instructions have on complex AES processing beyond bulk encryption. In particular, we focus on the area of secure multi-party computation, where AES calls are either independent, allowing easy use of VAES for a full speed-up, or dependent on the results of previous AES evaluations. For independent calls, we evaluate the performance impact using Microsoft CrypTFlow2 and the EMP-OT library, both of which primarily use AES in counter mode. For dependent calls, we evaluate the performance impact using the ABY framework and the EMP-AGMPC framework. To get optimal efficiency from the hardware, enough independent calls need to be combined in each batch of AES executions. We identify such batches using a deferred execution technique paired with early execution to reduce non-locality issues, as well as more static techniques based on circuit depth and explicit gate independence. We present a performance-focused and a modularity-focused technique to compute the AES operations efficiently while also immediately using the results and preparing the inputs. Using these manually implemented techniques, we achieve a performance improvement via VAES of up to 244% for ABY and of up to 28% for EMP-AGMPC. With our additional, alternative garbling schemes, we achieve up to 171% better performance for ABY through the use of VAES. Additionally, our evaluations show overall performance benefits of up to 24% for EMP-OT.
Feng Hao, Paul C. van Oorschot
ePrint Report
Password-authenticated key exchange (PAKE) is a major area of cryptographic protocol research and practice. Many PAKE proposals have emerged in the 30 years following the original 1992 Encrypted Key Exchange (EKE), some accompanied by new theoretical models to support rigorous analysis. To reduce confusion and encourage practical development, major standards bodies including IEEE, ISO/IEC and the IETF have worked towards standardizing PAKE schemes, with mixed results. Challenges have included contrasts between heuristic protocols and schemes with security proofs, and subtleties in the assumptions of such proofs rendering some schemes unsuitable for practice. Despite initial difficulty identifying suitable use cases, the past decade has seen PAKE adoption in numerous large-scale applications such as Wi-Fi, Apple's iCloud, browser synchronization, e-passports, and the Thread network protocol for Internet of Things devices. Given this backdrop, we consolidate three decades of knowledge on PAKE protocols, integrating theory, practice, standardization and real-world experience. We provide a thorough and systematic review of the field, a summary of the state-of-the-art, a taxonomy to categorize existing protocols, and a comparative analysis of protocol performance using representative schemes from each taxonomy category. We also review real-world applications, summarize lessons learned, and highlight open research problems related to PAKE protocols.
Luca Notarnicola, Gabor Wiese
ePrint Report
We consider the problem of revealing a small hidden lattice from the knowledge of a low-rank sublattice modulo a given sufficiently large integer – the Hidden Lattice Problem. A central motivation for studying this problem is the Hidden Subset Sum Problem, whose hardness is essentially determined by that of the hidden lattice problem. We describe and compare two algorithms for the hidden lattice problem: we first adapt the algorithm by Nguyen and Stern for the hidden subset sum problem, based on orthogonal lattices, and then propose a new variant, which we explain to be related to the first by duality in lattice theory. Following heuristic, rigorous and practical analyses, we find that our new algorithm offers advantages and provides a competitive alternative for problems of cryptographic interest, such as Approximate Common Divisor Problems and the Hidden Subset Sum Problem. Finally, we study variations of the problem and highlight its relevance to cryptanalysis.
Erik Anderson, Melissa Chase, F. Betul Durak, Esha Ghosh, Kim Laine, Chenkai Weng
ePrint Report
We introduce a secure histogram aggregation method suitable for many applications such as ad conversion measurement. Our solution relies on three-party computation with linear complexity and guarantees differentially private histogram outputs. We formally analyse the security and privacy of our method and compare it with existing proposals. Finally, we conclude our report with a performance analysis.
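As a single-party illustration of the differential-privacy step only (the three-party computation is not shown, and the concrete mechanism below is an assumption rather than the paper's protocol), the following sketch releases a histogram with Laplace noise of scale 1/epsilon per bin.

# Minimal sketch of a differentially private histogram release (Laplace
# mechanism, noise scale 1/epsilon, i.e. sensitivity 1 for adding or
# removing one contribution).  Only the DP output step is shown.

import random
from collections import Counter

def laplace(scale):
    # The difference of two i.i.d. exponentials with mean `scale` is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_histogram(values, bins, epsilon):
    """Noisy per-bin counts; each user contributes at most one value."""
    counts = Counter(values)
    return {b: counts.get(b, 0) + laplace(1.0 / epsilon) for b in bins}

conversions = ["ad_A", "ad_B", "ad_A", "ad_C", "ad_A"]
print(dp_histogram(conversions, bins=["ad_A", "ad_B", "ad_C"], epsilon=1.0))
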
Kotaro Abe, Makoto Ikeda
ePrint Report
Lattice attacks are a threat to (EC)DSA and have been used in cryptanalysis. In lattice attacks, a few leaked nonce bits in multiple signatures are sufficient to recover the secret key. Currently, the BKZ algorithm is frequently used as the lattice reduction algorithm for lattice attacks, and there are many reports on the conditions for successful attacks. However, experimental attacks using the BKZ algorithm have only shown results for specific key lengths, and it is not clear how the conditions change as the key length changes. In this study, we conducted experiments to simulate lattice attacks on P256, P384, and P521 and confirmed that attacks on P256 with a 3-bit nonce leak, P384 with a 4-bit nonce leak, and P521 with a 5-bit nonce leak are feasible. The result for P521 is a new record. We also investigated in detail the reasons for the failure of the attacks and propose a model to estimate the feasibility of lattice attacks using the BKZ algorithm. We believe that this model can be used to estimate the effectiveness of lattice attacks when the key length is changed.
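For readers who want to see the setup, the sketch below builds the textbook hidden-number-problem lattice basis from ECDSA signatures whose nonces have their top l bits equal to zero; the exact basis and scaling used in the paper may differ, and the reduction step itself (BKZ/LLL, e.g. via fpylll) is not shown.

# Textbook hidden-number-problem lattice for ECDSA nonce leakage: each
# signature (r_i, s_i) on hash h_i with nonce k_i gives
#   k_i = t_i*d + u_i (mod n),  t_i = r_i/s_i,  u_i = h_i/s_i,
# and if the top `leak_bits` bits of k_i are zero, k_i < n / 2^leak_bits.
# The basis is scaled so the vector (w*k_1, ..., w*k_m, d, n) is
# unusually short; finding it requires lattice reduction (not done here).

def hnp_basis(sigs, hashes, n, leak_bits):
    """sigs: list of (r, s); hashes: matching list of h; returns an integer basis as rows."""
    m = len(sigs)
    t = [r * pow(s, -1, n) % n for (r, s) in sigs]
    u = [h * pow(s, -1, n) % n for (r, s), h in zip(sigs, hashes)]
    w = 2 ** leak_bits                                   # rescales the k_i coordinates to size ~n
    rows = [[0] * i + [w * n] + [0] * (m - 1 - i) + [0, 0] for i in range(m)]
    rows.append([w * ti for ti in t] + [1, 0])           # row multiplied by the secret key d
    rows.append([w * ui for ui in u] + [0, n])           # row of constant terms
    return rows
    # After reduction, a short vector of the form (w*k_1, ..., w*k_m, d, n),
    # up to sign, reveals the key d in its second-to-last coordinate.
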
Maria Corte-Real Santos, Craig Costello, Jia Shi
ePrint Report
We give a new algorithm for finding an isogeny from a given supersingular elliptic curve $E/\mathbb{F}_{p^2}$ to a subfield elliptic curve $E'/\mathbb{F}_p$, which is the bottleneck step of the Delfs-Galbraith algorithm for the general supersingular isogeny problem. Our core ingredient is a novel method of rapidly determining whether a polynomial $f \in L[X]$ has any roots in a subfield $K \subset L$, while crucially avoiding expensive root-finding algorithms. In the special case when $f=\Phi_{\ell,p}(X,j) \in \mathbb{F}_{p^2}[X]$, i.e. when $f$ is the $\ell$-th modular polynomial evaluated at a supersingular $j$-invariant, this provides a means of efficiently determining whether there is an $\ell$-isogeny connecting the corresponding elliptic curve to a subfield curve. Together with the traditional Delfs-Galbraith walk, inspecting many $\ell$-isogenous neighbours in this way allows us to search through a larger proportion of the supersingular set per unit of time. Though the asymptotic $\tilde{O}(p^{1/2})$ complexity of our improved algorithm remains unchanged from that of the original Delfs-Galbraith algorithm, our theoretical analysis and practical implementation both show a significant reduction in the runtime of the subfield search. This sheds new light on the concrete hardness of the general supersingular isogeny problem, the foundational problem underlying isogeny-based cryptography.
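For orientation, the classical way to decide whether a polynomial over $\mathbb{F}_p$ has a root in $\mathbb{F}_p$ without running a root-finding algorithm is to test whether $\gcd(f, X^p - X)$ is nontrivial, computing $X^p \bmod f$ by square-and-multiply; the Python sketch below implements this textbook test. The paper's method is different and works with modular polynomials over $\mathbb{F}_{p^2}$, so this is background only.

# Classical root-existence test over F_p: f has a root in F_p iff
# gcd(f, X^p - X) is nontrivial.  Polynomials are coefficient lists,
# low-to-high, over F_p.

def trim(a, p):
    a = [c % p for c in a]
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_mod(a, f, p):
    """Remainder of a modulo a nonzero polynomial f over F_p."""
    a, f = trim(a, p), trim(f, p)
    finv = pow(f[-1], -1, p)
    while len(a) >= len(f) and a != [0]:
        c = a[-1] * finv % p
        shift = len(a) - len(f)
        for i, fi in enumerate(f):
            a[shift + i] = (a[shift + i] - c * fi) % p
        a = trim(a, p)
    return a

def poly_mulmod(a, b, f, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return poly_mod(res, f, p)

def x_pow_mod(e, f, p):
    """X^e mod f over F_p by square-and-multiply."""
    result, base = [1], [0, 1]
    while e:
        if e & 1:
            result = poly_mulmod(result, base, f, p)
        base = poly_mulmod(base, base, f, p)
        e >>= 1
    return result

def poly_gcd(a, b, p):
    a, b = trim(a, p), trim(b, p)
    while b != [0]:
        a, b = b, poly_mod(a, b, p)
    return a

def has_root_in_Fp(f, p):
    """True iff the nonzero polynomial f has at least one root in F_p."""
    xp = x_pow_mod(p, f, p)
    g = xp + [0] * max(0, 2 - len(xp))
    g[1] = (g[1] - 1) % p                      # g = X^p - X reduced mod f
    return len(poly_gcd(f, g, p)) > 1          # nontrivial gcd <=> a root exists

assert has_root_in_Fp([5, 0, 1], 7)            # X^2 - 2 has a root mod 7 (3^2 = 2)
assert not has_root_in_Fp([1, 0, 1], 7)        # X^2 + 1 has no root mod 7
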
Ghada Arfaoui, Pierre-Alain Fouque, Thibaut Jacques, Pascal Lafourcade, Adina Nedelcu, Cristina Onete, Léo Robert
ePrint Report
Deep attestation is a particular case of remote attestation, i.e., verifying the integrity of a platform with a remote verification server. We focus on the remote attestation of hypervisors and their hosted virtual machines (VMs), for which two solutions are currently supported by ETSI. The first is single-channel attestation, which requires, for each VM, an attestation of that VM and the underlying hypervisor through the physical TPM. The second, multi-channel attestation, allows VMs to be attested via virtual TPMs and separately from the hypervisor -- this is faster and requires fewer attestations overall, but the server cannot verify the link between VM and hypervisor attestations, which comes for free with single-channel attestation. We design a new approach to provide linked remote attestation which achieves the best of both worlds: we benefit from the efficiency of multi-channel attestation while simultaneously allowing attestations to be linked. Moreover, we formalize a security model for deep attestation and prove the security of our approach. Our contribution is agnostic of the precise underlying secure component (which could be instantiated as a TPM or something equivalent) and can be of independent interest. Finally, we implement our proposal using TPM 2.0 and vTPM (KVM/QEMU), and show that it is practical and efficient.
Thomas Espitau, Pierre-Alain Fouque, François Gérard, Mélissa Rossi, Akira Takahashi, Mehdi Tibouchi, Alexandre Wallet, Yang Yu
ePrint Report
This work describes the Mitaka signature scheme: a new hash-and-sign signature scheme over NTRU lattices which can be seen as a variant of NIST finalist Falcon. It achieves comparable efficiency but is considerably simpler, online/offline, and easier to parallelize and protect against side-channels, thus offering significant advantages from an implementation standpoint. It is also much more versatile in terms of parameter selection. We obtain this signature scheme by replacing the FFO lattice Gaussian sampler in Falcon by the ``hybrid'' sampler of Ducas and Prest, for which we carry out a detailed and corrected security analysis. In principle, such a change can result in a substantial security loss, but we show that this loss can be largely mitigated using new techniques in key generation that allow us to construct much higher quality lattice trapdoors for the hybrid sampler relatively cheaply. This new approach can also be instantiated on a wide variety of base fields, in contrast with Falcon's restriction to power-of-two cyclotomics. We also introduce a new lattice Gaussian sampler with the same quality and efficiency, but which is moreover compatible with the integral matrix Gram root technique of Ducas et al., allowing us to avoid floating point arithmetic. This makes it possible to realize the same signature scheme as Mitaka efficiently on platforms with poor support for floating point numbers. Finally, we describe a provably secure masking of Mitaka. More precisely, we introduce novel gadgets that allow provable masking at any order at much lower cost than previous masking techniques for Gaussian sampling-based signature schemes, for cheap and dependable side-channel protection.
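As a very small piece of background (not Mitaka's hybrid sampler), the sketch below gives a rejection sampler for the one-dimensional discrete Gaussian over the integers, the elementary building block that lattice Gaussian samplers compose; the tail-cut and parameters are illustrative.

# Rejection sampler for the discrete Gaussian over Z with density
# proportional to exp(-(z - c)^2 / (2*sigma^2)), restricted to a
# tail-cut window of tau standard deviations around the center.

import math, random

def sample_z(sigma, c=0.0, tau=10):
    """Sample from the discrete Gaussian over Z with parameter sigma and center c."""
    lo, hi = int(math.floor(c - tau * sigma)), int(math.ceil(c + tau * sigma))
    while True:
        z = random.randint(lo, hi)                   # uniform candidate in the window
        rho = math.exp(-((z - c) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:                    # accept with probability rho(z)
            return z

samples = [sample_z(sigma=4.0) for _ in range(10000)]
print(sum(samples) / len(samples))                   # empirical mean, close to the center 0
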