International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

20 May 2021

Alex May, Floyd Zweydinger
ePrint Report
Due to its amazing speed and multiplicative properties, the Legendre PRF has recently found widespread applications, e.g., in Ethereum 2.0, multiparty computation, and the quantum-secure signature proposal LegRoast. However, its security has not yet been extensively studied.

The Legendre PRF computes for a key $k$ on input $x$ the Legendre symbol $L_k(x) = \left( \frac {x+k} {p} \right)$ in some finite field $\mathbb{F}_p$. As is standard, PRF security is analysed by giving an attacker oracle access to $L_k(\cdot)$. Khovratovich's collision-based algorithm recovers $k$ using $L_k(\cdot)$ in time $\sqrt{p}$ with constant memory. It is a major open problem whether this birthday-bound complexity can be beaten.
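
For concreteness, here is a minimal Python sketch of the Legendre PRF computed via Euler's criterion; the prime and key below are toy values chosen only for illustration.

```python
# Minimal sketch of the Legendre PRF L_k(x) = ((x+k)/p) for an odd prime p.
# Euler's criterion: for nonzero a, the Legendre symbol (a/p) equals a^((p-1)/2) mod p.
def legendre_prf(k: int, x: int, p: int) -> int:
    a = (x + k) % p
    if a == 0:
        return 0                      # the symbol (0/p) = 0; mapped to output bit 0 here
    s = pow(a, (p - 1) // 2, p)       # Euler's criterion: result is 1 or p-1
    return 1 if s == 1 else 0         # map the symbol -1 to the bit 0

# Toy parameters (real deployments use cryptographically large primes):
p, k = 2**61 - 1, 123456789
stream = [legendre_prf(k, x, p) for x in range(16)]
```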

We show a somewhat surprising wide-ranging analogy between the discrete logarithm problem and Legendre symbol computations. This analogy allows us to adapt various algorithmic ideas from the discrete logarithm setting.

More precisely, we present a small memory multiple-key attack on $m$ Legendre keys $k_1, \ldots, k_m$ in time $\sqrt{mp}$, i.e. with amortized cost $\sqrt{p/m}$ per key. This multiple-key attack might be of interest in the Ethereum context, since recovering many keys simultaneously maximizes an attacker's profit.
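(As a quick check on the amortized figure: $\sqrt{mp}/m = \sqrt{mp/m^2} = \sqrt{p/m}$.)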

Moreover, we show that the Legendre PRF admits precomputation attacks, where the precomputation depends only on the public $p$ -- and not on a key $k$. Namely, an attacker may, for example, spend precomputation time $p^{\frac 2 3}$ to compute a hint of size $p^{\frac 1 3}$. On receiving access to $L_k(\cdot)$ in an online phase, the attacker then uses the hint to recover the desired key $k$ in time only $p^{\frac 1 3}$. Thus, the attacker's online complexity again beats the birthday bound.

In addition, our precomputation attack can also be combined with our multiple-key attack. We explicitly give various tradeoffs between precomputation and online phase. E.g. for attacking $m$ keys one may spend time $mp^{\frac 2 3}$ in the precomputation phase for constructing a hint of size $m^2 p^{\frac 1 3}$. In an online phase, one then finds {\em all $m$ keys in total time} only $p^{\frac 1 3}$.

Precomputation attacks might again be interesting in the Ethereum 2.0 context, where keys are frequently changed such that a heavy key-independent precomputation pays off.
Christopher Battarbee, Delaram Kahrobaei, Siamak F. Shahandashti
ePrint Report
It was recently demonstrated that the Matrix Action Key Exchange (MAKE) algorithm, a new type of key exchange protocol using the semidirect product of matrix groups, is vulnerable to a linear algebraic attack if the matrices are over a commutative ring. In this note, we establish conditions under which protocols using matrices over a non-commutative ring are also vulnerable to this attack. We then demonstrate that group rings $R[G]$ used in a similar key exchange protocol, where $R$ is a commutative ring and $G$ is a non-abelian group, are examples of non-commutative rings that satisfy these conditions.
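
To make the group-ring setting concrete, the following toy Python sketch illustrates arithmetic in $R[G]$ for a commutative ring $R = \mathbb{Z}_{101}$ and the non-abelian group $G = S_3$; it is a hypothetical illustration of the algebraic structure only, not of the protocol or the attack.

```python
from itertools import permutations

# Toy arithmetic in the group ring Z_101[S_3]: an element is a formal sum of
# permutations with coefficients mod 101; multiplication convolves coefficients.
m = 101
S3 = list(permutations(range(3)))        # the non-abelian group S_3

def compose(p, q):
    """Group operation: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(3))

def gr_mul(a, b):
    """Multiply group-ring elements given as {permutation: coefficient} dicts."""
    out = {}
    for g1, c1 in a.items():
        for g2, c2 in b.items():
            g = compose(g1, g2)
            out[g] = (out.get(g, 0) + c1 * c2) % m
    return out

# Example: (3*id + 5*(0 1)) * (2*(1 2)); note that gr_mul(b, a) generally differs.
idp, swap01, swap12 = (0, 1, 2), (1, 0, 2), (0, 2, 1)
a = {idp: 3, swap01: 5}
b = {swap12: 2}
print(gr_mul(a, b))
```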
Virtual event, Anywhere on Earth, 10 November - 11 November 2021
Event Calendar
Event date: 10 November to 11 November 2021
Submission deadline: 14 June 2021
Notification: 9 July 2021
Virtual event, Anywhere on Earth, 6 September - 9 September 2021
Event Calendar
Event date: 6 September to 9 September 2021
Submission deadline: 30 May 2021
Notification: 30 June 2021

17 May 2021

Virtual event, Anywhere on Earth, 17 September 2021
Event Calendar
Event date: 17 September 2021
Submission deadline: 19 July 2021
Telecom Paris - Institut Polytechnique de Paris
Job Posting
A permanent position is open at Telecom Paris for an Assistant or Associate Professor in the Theory of Quantum Information and Computation. The successful candidate will contribute to the activities of the IQA group at Telecom Paris, Institut Polytechnique de Paris, and to the Quantum@Paris-Saclay center. We are looking for candidates who have a strong research record and a commitment to undergraduate and graduate education and training, and who have demonstrated their ability to carry out and develop a research activity combining computer science and quantum technologies. Areas of expertise may include, but are not limited to, quantum algorithms and complexity, quantum control, quantum communications and cryptography, and quantum information processing.

Closing date for applications:

Contact: romain.alleaume@telecom-paris.fr

More information: https://institutminestelecom.recruitee.com/l/en/o/maitre-de-conferences-en-theorie-de-linformation-quantique-et-du-calcul-quantique-fh-a-telecom-paris-cdi

Muhammad ElSheikh, Amr M. Youssef
ePrint Report
With the introduction of the division trail, the bit-based division property (BDP) has become the most efficient method to search for integral distinguishers. The notion of the division trail allows us to automate the search process by modelling the propagation of the BDP as a set of constraints that can be solved using generic mixed-integer linear programming (MILP) and SMT/SAT solvers. The current models for the basic operations and Sboxes are efficient and accurate. In contrast, the two approaches to model the propagation of the BDP for non-bit-permutation linear layers are either inaccurate or inefficient. The first approach relies on decomposing the matrix multiplication of the linear layer into COPY and XOR operations. The model obtained by this approach is efficient in terms of the number of constraints, but it is not accurate and might add invalid division trails to the search space, which might lead to missing the balanced property of some bits. The second approach employs a one-to-one map between the valid division trails through the primitive matrix representing the linear layer and its invertible sub-matrices. Although the current model obtained by this approach is accurate, it is inefficient, i.e., it produces a large number of constraints for large linear layers such as the one of Kuznyechik. In this paper, we address this problem by utilizing the one-to-one map to propose a new MILP model and a search procedure for large non-bit-permutation linear layers. As a proof of the effectiveness of our approach, we improve the previous 3- and 4-round integral distinguishers of Kuznyechik and the 4-round one of PHOTON's internal permutation ($P_{288}$). We also report, for the first time, a 4-round integral distinguisher for the Kalyna block cipher and a 5-round integral distinguisher for PHOTON's internal permutation ($P_{288}$).
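
As background for the COPY/XOR-decomposition approach mentioned above, here is a small sketch of the standard MILP constraints for BDP propagation through COPY and XOR (following the usual bit-based division-property modelling; the variable names and the tiny example are illustrative only).

```python
# Sketch: emit standard MILP constraints for BDP propagation through the
# basic COPY and XOR operations (all variables are 0/1).
def copy_constraints(a, b1, b2):
    # COPY a -> (b1, b2): valid trails are (0)->(0,0), (1)->(1,0), (1)->(0,1)
    return [f"{a} - {b1} - {b2} = 0"]

def xor_constraints(a1, a2, b):
    # XOR (a1, a2) -> b: valid trails are (0,0)->0, (1,0)->1, (0,1)->1
    return [f"{a1} + {a2} - {b} = 0"]

# Modelling one output bit y = x0 XOR x1 of a linear layer by decomposition:
constraints = []
constraints += copy_constraints("x0", "x0_a", "x0_b")   # x0 also feeds other outputs
constraints += xor_constraints("x0_a", "x1", "y")
for c in constraints:
    print(c)
```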
Nihal Vatandas, Rosario Gennaro, Bertrand Ithurburn, Hugo Krawczyk
ePrint Report
Offline deniability is the ability to deny, a posteriori, having participated in a particular communication session. This property has been widely assumed for the Signal messaging application, yet no formal proof has appeared in the literature. In this paper, we present what we believe is the first formal study of the offline deniability of the Signal protocol. Our analysis shows that building a deniability proof for Signal is non-trivial and requires strong assumptions on the underlying mathematical groups in which the protocol is run.

To do so, we study various *implicitly authenticated* key exchange protocols including MQV, HMQV and 3DH/X3DH, the latter being the core key agreement protocol in Signal. We first present examples of mathematical groups where running MQV results in a provably non-deniable interaction. While the concrete attack applies only to MQV, it also exemplifies the problems in attempting to prove the deniability of other implicitly authenticated protocols, such as 3DH. In particular, it shows that the intuition that the minimal transcript produced by these protocols suffices for ensuring deniability does not hold. We then provide a characterization of the groups where deniability holds, defined in terms of a knowledge assumption that extends the Knowledge of Exponent Assumption (KEA).
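
To fix ideas, here is a minimal 3DH sketch over a toy multiplicative group: the session key is derived from three Diffie-Hellman values and no signature is produced, which is the source of the deniability intuition discussed above. The group, generator, secrets, and hash-based key derivation are illustrative stand-ins; real deployments such as Signal's X3DH use X25519, a proper KDF, and signed prekeys.

```python
import hashlib

# Toy Diffie-Hellman group: multiplicative group modulo the Mersenne prime 2^61 - 1.
p, g = 2**61 - 1, 3

def dh(priv, pub):
    return pow(pub, priv, p)

# Alice: identity (a, A) and ephemeral (x, X); Bob: identity (b, B) and ephemeral (y, Y).
a, x, b, y = 11111, 22222, 33333, 44444          # toy secrets
A, X, B, Y = (pow(g, s, p) for s in (a, x, b, y))

dh1 = dh(a, Y)   # identity_A  with ephemeral_B
dh2 = dh(x, B)   # ephemeral_A with identity_B
dh3 = dh(x, Y)   # ephemeral_A with ephemeral_B
session_key = hashlib.sha256(
    b"".join(v.to_bytes(8, "big") for v in (dh1, dh2, dh3))
).digest()
# Bob derives the same key from (dh(y, A), dh(b, X), dh(y, X)).
```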

We conclude the paper by showing two additional positive results. The first is a general theorem that links the deniability of a communication session to the deniability of the key agreement protocol starting the session. This allows us to extend our results on the deniability of 3DH/X3DH to the entire Signal communication session.
William Zhang, Yu Xia
ePrint Report
We present advancements for interactive arguments with Hydra, a novel verifiable computation system. Hydra introduces two new disjoint interactive argument protocols geared towards the efficient pipelining of circuit verification. The first is specific to subcircuits, where a deep circuit is broken up into smaller parts and proven concurrently. The second is a more general scheme where all layers of the circuit can be proven in parallel, removing the dependency on the layer-wise synchronous execution of the protocol. Compared to non-interactive SNARKs, which rely on knowledge-type assumptions (or the random oracle model), and theoretical non-interactive arguments based on standard assumptions that are not useful in practice, Hydra achieves a sweet spot with a practical approach. From standard assumptions, Hydra collapses the round complexity to polylogarithmic in the width of the circuit, while incurring only a polylogarithmic blowup in bandwidth and verifier time complexity. We implement the full verification flow, including both protocols and a logic parser used to convert traditional logic circuit compilation outputs into provable layered arithmetic representations. We perform experimental evaluations of our proposals and demonstrate protocol time efficiency improvements of up to 34.8 times and 4.3 times, respectively, compared to traditional approaches on parallel hardware.
Marc Schink, Alexander Wagner, Florian Unterstein, Johann Heyszl
ePrint Report
Using passwords for authentication has been proven vulnerable in countless security incidents. Hardware authentication tokens effectively prevent most password-related security issues and improve security indisputably. However, we would like to highlight that there are new threats from attackers with physical access which need to be discussed. Supply chain adversaries may manipulate devices on a large scale and install backdoors before they even reach end users. In evil maid scenarios, specific devices may even be attacked while already in use. Hence, we thoroughly investigate the security and trustworthiness of eight commercially available open source authentication tokens, including devices from the two market leaders: SoloKeys and Nitrokey. Unfortunately, we identify and practically verify significant vulnerabilities in all eight examined tokens. Some of them are based on severe, previously undiscovered vulnerabilities of three major microcontroller products which are used at large scale in various products. Our findings clearly emphasize the significant threat from supply chain and evil maid scenarios, since the attacks are practical and only require moderate attacker effort. Fortunately, we are able to describe software-based countermeasures as effective improvements to retrofit the examined devices. To improve the security and trustworthiness of future authentication tokens, we also derive important general design recommendations.
Charalampos Papamanthou, Cong Zhang, Hong-Sheng Zhou
ePrint Report
Digital signatures have been widely used as building blocks for constructing complex cryptosystems. To facilitate the security analysis of a complex system, we expect the underlying building blocks to achieve desirable composability. Notably, Canetti (FOCS 2001) and then Maurer et al. (TCC 2004) proposed analysis frameworks: the Universal Composability framework for cryptographic protocols, and the indifferentiability framework for cryptographic objects.

In this paper, we develop a “lifting strategy” which allows us to compile multiple existing practical signature schemes based on cyclic groups (e.g., Schnorr, Boneh-Boyen) to achieve a very stringent security guarantee in an idealized model of the generic (bilinear) group, without introducing much extra efficiency loss. What is more interesting is that, in our design, even if the involved idealized model does not exist, our compiled construction will still be able to achieve the classical notion of unforgeability.

To achieve both indifferentiability and good efficiency, we develop new techniques in the generic (bilinear) group model.
Ioanna Karantaidou, Foteini Baldimtsi
ePrint Report
Cryptographic accumulators are a crucial building block for a variety of applications where one needs to represent a set of elements in a compact format while still being able to provide proofs of (non)membership. In this work, we give a number of accumulator constructions for the bilinear pairing setting in the trapdoor-based scenario, where a trusted manager maintains the accumulator. Using modular accumulator techniques, we present the first dynamic, positive accumulators in the pairing setting that are optimally efficient in terms of communication cost. Additionally, we present a novel modular approach to construct universal accumulators that avoid costly non-membership proofs. We instantiate our generic construction and present the first universal accumulator in the bilinear pairing setting that achieves constant parameter size, constant cost for element additions/deletions and witness generation by the manager, constant witness updates by the users, and constant (non)membership verification. We finally show how our proposed universal accumulator construction can give rise to efficient ZK accumulators with constant non-membership witness updates.
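
As background (and not necessarily the construction proposed here), the classic trapdoor-based pairing accumulator in the style of Nguyen illustrates why constant-size witnesses and constant-cost manager updates are possible. With trapdoor $s$, generator $g$, and accumulated set $X$, one sets $\mathsf{acc}_X = g^{\prod_{x \in X}(x+s)}$, the membership witness for $y \in X$ is $\mathsf{wit}_y = g^{\prod_{x \in X \setminus \{y\}}(x+s)}$, and verification checks the single pairing equation $e(\mathsf{wit}_y, g^{y} \cdot g^{s}) = e(\mathsf{acc}_X, g)$. Since the manager knows $s$, adding or deleting an element $z$ costs a single exponentiation: $\mathsf{acc}' = \mathsf{acc}^{z+s}$ or $\mathsf{acc}' = \mathsf{acc}^{(z+s)^{-1}}$.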
Yevgeniy Dodis, Kevin Yeo
ePrint Report
In this work we challenge the common misconception that information-theoretic (IT) privacy is too impractical to be used in the real-world: we propose to build simple and $\textit{reusable}$ IT-encryption solutions whose only efficiency penalty (compared to computationally-secure schemes) comes from a large secret key size, which is often a rather minor inconvenience, as storage is cheap. In particular, our solutions are $\textit{stateless}$ and $\textit{locally computable at the optimal rate}$, meaning that honest parties do not maintain state and read only (optimally) small portions of their large keys with every use.

Moreover, we also propose a novel architecture for outsourcing the storage of these long keys to a network of semi-trusted servers, trading the need to store large secrets for the assumption that it is hard to simultaneously compromise too many publicly accessible ad-hoc servers. Our architecture supports $\textit{everlasting privacy}$ and $\textit{post-application security}$ of the derived one-time keys, resolving two major limitations of a related model for outsourcing key storage, called the bounded storage model.

Both of these results come from nearly optimal constructions of so-called $\textit{doubly-affine extractors}$: locally-computable, seeded extractors $\textbf{Ext}$(X,S) which are linear functions of X (for any fixed seed S) and protect against bounded affine leakage on X. This holds unconditionally, even if (a) the affine leakage may $\textit{adaptively depend}$ on the extracted key R = $\textbf{Ext}$(X, S); and (b) the seed S is only $\textit{computationally}$ secure. Neither of these properties is possible with general-leakage extractors.
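
As a toy illustration of the "locally computable and linear in X" requirement (not the paper's doubly-affine construction), each output bit below is the XOR of a few seed-selected positions of the long key; all names and parameters are illustrative.

```python
import random
import secrets

def local_linear_extract(x_bits, seed, out_len=128, locality=64):
    """Toy sketch: for a fixed seed, each output bit is the XOR (an F_2-linear
    function of X) of `locality` seed-selected key positions, so only
    out_len*locality bits of the long key X are ever read."""
    rng = random.Random(seed)          # index choices derived from the short seed
    out = []
    for _ in range(out_len):
        positions = rng.sample(range(len(x_bits)), locality)
        out.append(sum(x_bits[i] for i in positions) % 2)
    return out

# Toy usage: a long key of 10^6 bits and a short public seed.
X = [secrets.randbits(1) for _ in range(1_000_000)]
R = local_linear_extract(X, seed=2021)
```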
Akinori Kawachi, Harumichi Nishimura
ePrint Report
The private simultaneous messages (PSM) model is a non-interactive version of multiparty secure computation (MPC), which has been intensively studied to examine the communication cost of secure computation. We consider its quantum counterpart, the private simultaneous quantum messages (PSQM) model, and examine the advantages of quantum communication and prior entanglement in this model.

In the PSQM model, $k$ parties $P_1,\ldots,P_k$ initially share a common random string (or entangled states in a stronger setting), and they have private classical inputs $x_1,\ldots, x_k$. Every $P_i$ generates a quantum message from the private input $x_i$ and the shared random string (entangled states), and then sends it to the referee $R$. Receiving the messages from the $k$ parties, $R$ computes $F(x_1,\ldots,x_k)$ from the messages. Then, $R$ learns nothing except for $F(x_1,\ldots,x_k)$ as the privacy condition.

We obtain the following results for this PSQM model. ($i$) We demonstrate that the privacy condition inevitably increases the communication cost in the two-party PSQM model as well as in the classical case presented by Applebaum, Holenstein, Mishra, and Shayevitz [Journal of Cryptology 33(3), 916--953 (2020)]. In particular, we prove a lower bound $(3-o(1))n$ of the communication complexity in PSQM protocols with a shared random string for random Boolean functions of $2n$-bit input, which is larger than the trivial upper bound $2n$ of the communication complexity without the privacy condition. ($ii$) We demonstrate a factor two gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings by designing a multiparty PSQM protocol with shared entangled states for a total function that extends the two-party equality function. ($iii$) We demonstrate an exponential gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings for a two-party partial function.
Ripon Patgiri
ePrint Report
Symmetric-key cryptography is widely used due to its capability to provide a strong defense against diverse attacks; however, it is prone to cryptanalysis attacks. Therefore, we propose a novel and highly secure symmetric-key cryptosystem, symKrypt for short, to defend against diverse attacks and provide absolute security. Our proposed algorithm changes private keys in each block of communication, i.e., symKrypt uses multiple private keys to encrypt a single block of a message. Moreover, symKrypt keeps the bit mixing of the original message with the private keys secret. Also, the number of private keys is kept secret. In addition, the private keys are generated dynamically from the initial inputs using a pseudo-random number generator which is highly unpredictable and secure. In this article, we theoretically analyze the capabilities of symKrypt and provide an experimental demonstration using millions of private keys to prove its correctness. Furthermore, we evaluate the proposed pseudo-random number generator experimentally with the NIST SP 800-22 statistical test suite; it passes all 15 tests in the suite. symKrypt is the first model to use multiple private keys in encryption, yet it remains lightweight and powerful.
Jakub Klemsa
ePrint Report
Unlike traditional and/or standardized ciphers, TFHE offers much space for the setup of its parameters. Not only does the parameter choice affect the plaintext space size and security, it also greatly impacts the performance of TFHE, in particular its bootstrapping. In this paper, we provide an exhaustive description of TFHE, including its foundations, (functional) bootstrapping, and error propagation during all operations. In addition, we outline a bootstrapping scenario without the key switching step. Based on our thorough summary, we suggest an approach for the setup of TFHE parameters with particular respect to bootstrapping efficiency. Finally, we propose twelve setups of real-world TFHE parameters for six different scenarios, with and without key switching, respectively, and we compare their performance. N.b.: This is a technical paper, mainly intended for researchers interested in TFHE. However, due to its self-containment, it should also be accessible to readers with a basic knowledge of TFHE.
Gustavo Banegas, Daniel J. Bernstein, Fabio Campos, Tung Chou, Tanja Lange, Michael Meyer, Benjamin Smith, Jana Sotáková
ePrint Report
This paper introduces a new key space for CSIDH and a new algorithm for constant-time evaluation of the CSIDH group action. The key space is not useful with previous algorithms, and the algorithm is not useful with previous key spaces, but combining the new key space with the new algorithm produces speed records for constant-time CSIDH. For example, for CSIDH-512 with a 256-bit key space, the best previous constant-time results used 789000 multiplications and more than 200 million Skylake cycles; this paper uses 438006 multiplications and 125.53 million cycles.
Jan Camenisch, Manu Drijvers, Timo Hanke, Yvonne-Anne Pignolet, Victor Shoup, Dominic Williams
ePrint Report
We present the Internet Computer Consensus (ICC) family of protocols for atomic broadcast (a.k.a. consensus), which underpin the Byzantine fault-tolerant replicated state machines of the Internet Computer. The ICC protocols are leader-based protocols that assume partial synchrony and that are fully integrated with a blockchain. The leader changes probabilistically in every round. These protocols are extremely simple and robust: in any round where the leader is corrupt (which itself happens with probability less than $1/3$), each ICC protocol will effectively allow another party to take over as leader for that round, with very little fuss, to move the protocol forward to the next round in a timely fashion. Unlike in many other protocols, there are no complicated subprotocols (such as "view change" in PBFT) or unspecified subprotocols (such as "pacemaker" in HotStuff). Moreover, unlike in many other protocols (such as PBFT and HotStuff), the task of reliably disseminating the blocks to all parties is an integral part of the protocol, and not left to some other unspecified subprotocol. An additional property enjoyed by the ICC protocols (just like PBFT and HotStuff, and unlike others, such as Tendermint) is optimistic responsiveness, which means that when the leader is honest, the protocol will proceed at the pace of the actual network delay, rather than some upper bound on the network delay. We present three different protocols (along with various minor variations on each). One of these protocols (ICC1) is designed to be integrated with a peer-to-peer gossip sub-layer, which reduces the bottleneck created at the leader for disseminating large blocks, a problem that all leader-based protocols, like PBFT and HotStuff, must address, but typically do not. Our protocol ICC2 addresses the same problem by substituting a low-communication reliable broadcast subprotocol (which may be of independent interest) for the gossip sub-layer.
Felix Engelmann, Lukas Müller, Andreas Peter, Frank Kargl, Christoph Bösch
ePrint Report
Decentralized token exchanges allow for secure trading of tokens without a trusted third party. However, decentralization is mostly achieved at the expense of transaction privacy. For a fair exchange, transactions must remain private to hide the participants and volumes while maintaining the possibility for non-interactive execution of trades. In this paper we present a swap confidential transaction system (SwapCT) which is related to ring confidential transactions (e.g. used in Monero) but supports multiple token types to trade among and enables secure, partial transactions for non-interactive swaps. We prove that SwapCT is secure in a strict, formal model and present its efficient performance in a prototype implementation with logarithmic signature sizes for large anonymity sets. For our construction we design an aggregatable signature scheme which might be of independent interest. Our SwapCT system thereby enables a secure and private exchange for tokens without a trusted third party.
Julien Devevey, Benoît Libert, Khoa Nguyen, Thomas Peters, Moti Yung
ePrint Report
We consider threshold public-key encryption, where the decryption servers distributively hold the private key shares, and we need a threshold of these servers to decrypt the message (while the system remains secure when less than the threshold is corrupt). We investigate the notion of chosen-ciphertext secure threshold systems which has been historically hard to achieve. We further require the systems to be, both, adaptively secure (i.e., secure against a strong adversary making corruption decisions dynamically during the protocol), and non-interactive (i.e., where decryption servers do not interact amongst themselves but rather efficiently contribute, each, a single message). To date, only pairing-based implementations were known to achieve security in the standard security model without relaxation (i.e., without assuming the random oracle idealization) under the above stringent requirements. Here, we investigate how to achieve the above using other assumptions (in order to understand what other algebraic building blocks and mathematical assumptions are needed to extend the domain of encryption methods achieving the above). Specifically, we show realizations under the Decision Composite Residuosity (DCR) and Learning-With-Errors (LWE) assumptions.
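
For orientation, the toy sketch below shows the non-interactive flow in the classical discrete-log (ElGamal) setting: the key is Shamir-shared, each server contributes a single partial decryption, and a combiner finishes the job with Lagrange coefficients in the exponent. This only illustrates the communication pattern with small toy parameters; it is not the DCR- or LWE-based constructions of this paper.

```python
# Toy t-out-of-n non-interactive threshold ElGamal decryption.
import random

q = 1019                 # toy prime group order
p = 2 * q + 1            # 2039, also prime
g = 4                    # generates the order-q subgroup of Z_p^*

n, t = 5, 3              # 5 servers, threshold 3

# Dealer: Shamir-share the secret key s over Z_q.
s = random.randrange(1, q)
coeffs = [s] + [random.randrange(q) for _ in range(t - 1)]
shares = {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
          for i in range(1, n + 1)}
h = pow(g, s, p)         # public key

# Encrypt a message m lying in the subgroup.
m = pow(g, 123, p)
r = random.randrange(1, q)
c1, c2 = pow(g, r, p), (m * pow(h, r, p)) % p

# Each server i sends a single message: the partial decryption c1^{s_i}.
subset = [1, 3, 5]
partials = {i: pow(c1, shares[i], p) for i in subset}

def lagrange_at_zero(i, ids):
    lam = 1
    for j in ids:
        if j != i:
            lam = lam * j % q * pow((j - i) % q, -1, q) % q
    return lam

# Combiner: rebuild c1^s in the exponent and recover m.
c1_s = 1
for i in subset:
    c1_s = c1_s * pow(partials[i], lagrange_at_zero(i, subset), p) % p
assert (c2 * pow(c1_s, -1, p)) % p == m
```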