International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

20 May 2021

Aurélien Dupin, Pierrick Méaux, Mélissa Rossi
ePrint Report
Goldreich's pseudorandom generator is a well-known building block for many theoretical cryptographic constructions, from multi-party computation to indistinguishability obfuscation. Its unique efficiency comes from the use of random local functions: each bit of the output is computed by applying some fixed public n-variable Boolean function f to a random public size-n tuple of distinct input bits. The characteristics that a Boolean function f must have to ensure pseudorandomness are a puzzling issue. They have been studied in several works, particularly by Applebaum and Lovett (STOC 2016), who showed that resiliency and algebraic immunity are key parameters for this purpose. In this paper, we propose the first study of Boolean functions that simultaneously reach maximal algebraic immunity and high resiliency. 1) We assess the possible consequences of the asymptotic existence of such optimal functions. We show how they allow one to build functions reaching all possible algebraic immunity-resiliency trade-offs (respecting the algebraic immunity and Siegenthaler bounds). We provide a new bound on the minimal number of variables n, and thus on the minimal locality, necessary to ensure a secure Goldreich pseudorandom generator. Our results come with a granularity level depending on the strength of our assumptions, from none to the conjectured asymptotic existence of optimal functions. 2) We extensively analyze the possible existence and the properties of such optimal functions. In a first step, we naturally focus on existing families of Boolean functions that are known to be optimal with respect to their algebraic immunity, starting with the promising XOR-MAJ functions. Interestingly, we are able to show that these families do not reach optimality with respect to their resiliency, and they could be beaten by optimal functions if our conjecture is verified. Thus, one needs to look in another direction for constructing optimal functions. We introduce necessary and sufficient conditions for the construction of optimal functions. Finally, we prove the existence of optimal functions in a low number of variables by experimentally exhibiting some of them up to 12 variables. This directly provides better candidates for Goldreich's pseudorandom generator than the existing XOR-MAJ candidates for polynomial stretches from 2 to 6.
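As a quick illustration of the object under study (our own toy sketch, not code from the paper), Goldreich's pseudorandom generator instantiated with an XOR-MAJ predicate can be written in a few lines; the predicate split, seed length, and stretch below are arbitrary toy choices.

```python
import random

def xor_maj(bits, k):
    """XOR-MAJ predicate: XOR of the first k bits, XORed with the
    majority of the remaining bits."""
    xor_part = 0
    for b in bits[:k]:
        xor_part ^= b
    maj_part = 1 if 2 * sum(bits[k:]) > len(bits) - k else 0
    return xor_part ^ maj_part

def goldreich_prg(seed, m, n=9, k=4, subset_seed=0):
    """Goldreich's PRG: each output bit applies the fixed public n-variable
    predicate to a public, randomly chosen size-n subset of the seed."""
    rng = random.Random(subset_seed)        # public randomness fixing the subsets
    subsets = [rng.sample(range(len(seed)), n) for _ in range(m)]
    return [xor_maj([seed[i] for i in s], k) for s in subsets]

if __name__ == "__main__":
    seed = [random.getrandbits(1) for _ in range(64)]   # secret seed
    output = goldreich_prg(seed, m=256)                 # stretch 64 -> 256 bits
    print("".join(map(str, output[:64])))
```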
Mustafa Khairallah
ePrint Report
COFB is a lightweight authenticated encryption (AE) mode based on block ciphers, proposed at CHES 2017, and is the basis for GIFT-COFB, a finalist in the NIST lightweight standardization project. It comes with provable security results that guarantee its security up to the birthday bound in the nonce-respecting model. However, the designers offer multiple versions of this analysis with different details, and the implications of attacks against the scheme are not discussed in depth. In this article, we look at different possible forgery and confidentiality attacks against COFB-like designs. We show that the security for both forgery and confidentiality is bounded by the number of forgery attempts. In particular, we show the existence of forgery and confidentiality attacks with success probability $q_f/2^{n/2}$, given $q_f$ forgery attempts; both forgery and confidentiality can be broken with $2^{n/2}$ attempts using only a single known-plaintext encryption query. While these attacks do not contradict the claims made by the GIFT-COFB designers, they show its limitations in terms of the number of forgery attempts. They also show that while GIFT-COFB generates a 128-bit tag, it behaves in a very similar manner to an AE scheme with a 64-bit tag. As an independent result, our analysis provides a contradiction to the main theorem of Journal of Cryptology volume 33, pages 703–741 (2020), which includes an improved security proof of COFB compared to the CHES 2017 version. Finally, we discuss the term $nq_f/2^{n/2}$ that appears in the security proofs of GIFT-COFB and the CHES 2017 version, showing why this term is unlikely to be tight and why $q_f/2^{n/2}$ is likely sufficient. We emphasize that the results in this article do not threaten the security of GIFT-COFB in the scope of the NIST lightweight cryptography requirements or the claims made by the designers in the specification of the design.
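To make the bound concrete, here is a small self-contained calculation (ours, not the paper's) of the forgery success bound $q_f/2^{n/2}$ for a 128-bit block cipher, which illustrates the "behaves like a 64-bit tag" observation above.

```python
n = 128                       # block size of the underlying cipher (e.g., GIFT-128)
for log_qf in (20, 40, 63, 64):
    qf = 2 ** log_qf          # number of forgery attempts
    log_p = log_qf - n // 2   # log2 of the success bound q_f / 2^(n/2)
    print(f"q_f = 2^{log_qf:2d}  ->  forgery success bound 2^{log_p}")
```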
Ripon Patgiri
ePrint Report
RSA cryptography is an asymmetric communication protocol, and it is facing diverse issues. Recent research works suggest that RSA security has already been broken. On the other hand, AES is the most widely used symmetric-key cryptography protocol, and it also faces issues; the literature suggests it is subject to cryptanalysis attacks. AES cryptography requires a shared secret key. The most famous key exchange protocol is Diffie-Hellman; however, it is subject to number field sieve discrete logarithm attacks, and recent research has suggested that Diffie-Hellman is less secure than widely perceived. In addition, the Logjam attack allows a man-in-the-middle attack on Diffie-Hellman. Thus, we combine the RSA, AES, and Diffie-Hellman algorithms to secure the key exchange protocol, in a scheme called privateDH. Our key objective is to provide security to the Diffie-Hellman algorithm. Therefore, privateDH does not share the data publicly with the intended party. Instead, privateDH encrypts all shareable data at the time of key exchange using the AES algorithm. privateDH uses the RSA algorithm to retrieve the public key in order to avoid a man-in-the-middle attack. Thus, we demonstrate how to provide security to the Diffie-Hellman algorithm to defeat various kinds of attacks.
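For reference, the exchange that privateDH sets out to protect is textbook Diffie-Hellman; the following minimal sketch is plain DH (not privateDH itself), uses deliberately toy parameters, and omits the RSA/AES wrapping described above.

```python
import secrets

# Toy, insecure parameters purely for illustration; a real deployment would use
# a standardized large prime-order group.
p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, far too small to be secure
g = 5

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)          # Alice's public value
B = pow(g, b, p)          # Bob's public value

# Both sides derive the same shared secret; privateDH's claim is that A and B
# should additionally be encrypted (AES) and authenticated (RSA) in transit
# rather than sent in the clear.
assert pow(B, a, p) == pow(A, b, p)
print(hex(pow(B, a, p)))
```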
Cihangir Tezcan
ePrint Report
Graphics processing units (GPUs) are specially designed for parallel applications and perform parallel operations much faster than central processing units (CPUs). In this work, we focus on the performance of the Advanced Encryption Standard (AES) on GPUs. We present optimizations which remove bank conflicts in shared memory accesses and provide 878.6 Gbps throughput for AES-128 encryption on an RTX 2070 Super, which is equivalent to 4.1 Gbps per Watt. Our optimizations provide more than a 2.56x speed-up over the best GPU results in the literature. Our optimized AES implementations on GPUs even outperform any CPU using the hardware-level AES New Instructions (AES-NI) and legacy FPGA-based cluster architectures like COPACOBANA and RIVYERA. Even on a low-end GPU like the MX 250, we obtained 60.0 Gbps throughput for AES-256, which is generally faster than the read/write speeds of solid-state drives. Thus, a transition from AES-128 to AES-256 when using GPUs would provide military-grade security with no visible performance loss. With these breakthrough performances, GPUs can be used as a cryptographic co-processor for file or full disk encryption to remove the performance loss of CPU encryption. With a single GPU as a co-processor, busy SSL servers can be freed from the burden of encryption and use their whole CPU power for other operations. Moreover, these optimizations can help GPUs practically verify theoretically obtained cryptanalysis results, or their reduced versions, in reasonable time.
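As a quick sanity check (ours, not the paper's), the reported per-watt figure is consistent with dividing the throughput by the RTX 2070 Super's nominal 215 W board power; the actual power draw of the test setup is an assumption here.

```python
throughput_gbps = 878.6   # reported AES-128 throughput on the RTX 2070 Super
board_power_w = 215.0     # assumed nominal board power (TDP), not stated in the abstract
print(f"{throughput_gbps / board_power_w:.2f} Gbps per Watt")  # ~4.09, matching the reported 4.1
```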
Alex May, Floyd Zweydinger
ePrint Report
Due to its amazing speed and multiplicative properties, the Legendre PRF has recently found widespread applications, e.g., in Ethereum 2.0, multiparty computation, and the quantum-secure signature proposal LegRoast. However, its security has not yet been extensively studied.

The Legendre PRF computes, for a key $k$ and input $x$, the Legendre symbol $L_k(x) = \left( \frac {x+k} {p} \right)$ in some finite field $\mathbb{F}_p$. As is standard, PRF security is analysed by giving an attacker oracle access to $L_k(\cdot)$. Khovratovich's collision-based algorithm recovers $k$ using $L_k(\cdot)$ in time $\sqrt{p}$ with constant memory. It is a major open problem whether this birthday-bound complexity can be beaten.
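For concreteness (our own sketch, not code from the paper), one Legendre PRF output bit can be computed via Euler's criterion; the prime and key below are toy values.

```python
def legendre_symbol(a, p):
    """Legendre symbol (a/p) via Euler's criterion: a^((p-1)/2) mod p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def legendre_prf_bit(k, x, p):
    """One Legendre PRF output bit: map L_k(x) = ((x+k)/p) in {-1, 1} to {0, 1}."""
    return 0 if legendre_symbol(x + k, p) == 1 else 1

# Toy parameters, far too small to be secure.
p = 2**61 - 1                   # a Mersenne prime
k = 123456789                   # secret key
print([legendre_prf_bit(k, x, p) for x in range(16)])
```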

We show a somewhat surprising wide-ranging analogy between the discrete logarithm problem and Legendre symbol computations. This analogy allows us to adapt various algorithmic ideas from the discrete logarithm setting.

More precisely, we present a small memory multiple-key attack on $m$ Legendre keys $k_1, \ldots, k_m$ in time $\sqrt{mp}$, i.e. with amortized cost $\sqrt{p/m}$ per key. This multiple-key attack might be of interest in the Ethereum context, since recovering many keys simultaneously maximizes an attacker's profit.

Moreover, we show that the Legendre PRF admits precomputation attacks, where the precomputation depends only on the public $p$ -- and not on a key $k$. Namely, an attacker may compute, e.g., in precomputation time $p^{\frac 2 3}$, a hint of size $p^{\frac 1 3}$. On receiving access to $L_k(\cdot)$ in an online phase, the attacker then uses the hint to recover the desired key $k$ in time only $p^{\frac 1 3}$. Thus, the attacker's online complexity again beats the birthday bound.

In addition, our precomputation attack can also be combined with our multiple-key attack. We explicitly give various tradeoffs between precomputation and online phase. E.g. for attacking $m$ keys one may spend time $mp^{\frac 2 3}$ in the precomputation phase for constructing a hint of size $m^2 p^{\frac 1 3}$. In an online phase, one then finds {\em all $m$ keys in total time} only $p^{\frac 1 3}$.

Precomputation attacks might again be interesting in the Ethereum 2.0 context, where keys are frequently changed such that a heavy key-independent precomputation pays off.
Christopher Battarbee, Delaram Kahrobaei, Siamak F. Shahandashti
ePrint Report
It was recently demonstrated that the Matrix Action Key Exchange (MAKE) algorithm, a new type of key exchange protocol using the semidirect product of matrix groups, is vulnerable to a linear algebraic attack if the matrices are over a commutative ring. In this note, we establish conditions under which protocols using matrices over a non-commutative ring are also vulnerable to this attack. We then demonstrate that group rings $R[G]$ used in a similar key exchange protocol, where $R$ is a commutative ring and $G$ is a non-abelian group, are examples of non-commutative rings that satisfy these conditions.
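To make the algebraic setting concrete, here is a small sketch (ours, not from the note) of multiplication in a group ring $R[G]$ with $R = \mathbb{Z}_N$ and the non-abelian group $G = S_3$ represented by permutation tuples; group ring elements are dictionaries mapping group elements to coefficients.

```python
N = 12                                   # toy modulus for the coefficient ring Z_N

def compose(p, q):
    """Group operation in S_3 (permutations of {0,1,2}): (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(3))

def ring_mul(a, b):
    """Multiply two elements of the group ring Z_N[S_3], given as dicts
    mapping permutations to coefficients."""
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            gh = compose(g, h)
            out[gh] = (out.get(gh, 0) + cg * ch) % N
    return out

# Example: (2*e + 3*(0 2 1)) * (5*(1 0 2)) in Z_12[S_3]; the two products differ,
# showing that the ring is non-commutative because S_3 is non-abelian.
e = (0, 1, 2)
a = {e: 2, (0, 2, 1): 3}
b = {(1, 0, 2): 5}
print(ring_mul(a, b), ring_mul(b, a))
```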
Virtual event, Anywhere on Earth, 10 November - 11 November 2021
Event Calendar
Event date: 10 November to 11 November 2021
Submission deadline: 14 June 2021
Notification: 9 July 2021
Virtual event, Anywhere on Earth, 6 September - 9 September 2021
Event Calendar
Event date: 6 September to 9 September 2021
Submission deadline: 30 May 2021
Notification: 30 June 2021

17 May 2021

Virtual event, Anywhere on Earth, 17 September 2021
Event Calendar
Event date: 17 September 2021
Submission deadline: 19 July 2021
Telecom Paris - Institut Polytechnique de Paris
Job Posting
A permanent position is open at Telecom Paris for an Assistant or Associate Professor in the Theory of Quantum Information and Computation. The successful candidate will contribute to the activities of the IQA group at Telecom Paris, Institut Polytechnique de Paris, and to the Quantum@Paris-Saclay center. We are looking for candidates who have a strong research record and a commitment to undergraduate and graduate education and training, and who have demonstrated their ability to carry out and develop a research activity combining computer science and quantum technologies. Areas of expertise may include, but are not limited to, quantum algorithms and complexity, quantum control, quantum communications and cryptography, and quantum information processing.

Closing date for applications:

Contact: romain.alleaume@telecom-paris.fr

More information: https://institutminestelecom.recruitee.com/l/en/o/maitre-de-conferences-en-theorie-de-linformation-quantique-et-du-calcul-quantique-fh-a-telecom-paris-cdi

Muhammad ElSheikh, Amr M. Youssef
ePrint Report
With the introduction of the division trail, the bit-based division property (BDP) has become the most efficient method to search for integral distinguishers. The notion of the division trail allows us to automate the search process by modelling the propagation of the BDP as a set of constraints that can be solved using generic mixed-integer linear programming (MILP) and SMT/SAT solvers. The current models for the basic operations and Sboxes are efficient and accurate. In contrast, the two approaches to model the propagation of the BDP through a non-bit-permutation linear layer are either inaccurate or inefficient. The first approach relies on decomposing the matrix multiplication of the linear layer into COPY and XOR operations. The model obtained by this approach is efficient in terms of the number of constraints, but it is not accurate and might add invalid division trails to the search space, which might lead to missing the balanced property of some bits. The second approach employs a one-to-one map between the valid division trails through the primitive matrix representing the linear layer and its invertible sub-matrices. Although the current model obtained by this approach is accurate, it is inefficient, i.e., it produces a large number of constraints for large linear layers like the one of Kuznyechik. In this paper, we address this problem by utilizing the one-to-one map to propose a new MILP model and a search procedure for large non-bit-permutation linear layers. As a proof of the effectiveness of our approach, we improve the previous 3- and 4-round integral distinguishers of Kuznyechik and the 4-round one of PHOTON's internal permutation ($P_{288}$). We also report, for the first time, a 4-round integral distinguisher for the Kalyna block cipher and a 5-round integral distinguisher for PHOTON's internal permutation ($P_{288}$).
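For intuition about the first approach mentioned above (the COPY/XOR decomposition described as efficient but potentially inaccurate), here is a toy MILP model of BDP propagation through a binary linear layer $y = M \cdot x$, written with the PuLP library; the 4x4 matrix and the fixed input pattern are arbitrary examples, and this is not the authors' new model.

```python
import pulp

# Toy 4x4 binary matrix of a linear layer y = M * x over GF(2).
M = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]
rows, cols = len(M), len(M[0])

prob = pulp.LpProblem("bdp_copy_xor_linear_layer", pulp.LpMinimize)
a = [pulp.LpVariable(f"a{j}", cat="Binary") for j in range(cols)]   # input BDP bits
b = [pulp.LpVariable(f"b{i}", cat="Binary") for i in range(rows)]   # output BDP bits
t = {(i, j): pulp.LpVariable(f"t{i}_{j}", cat="Binary")
     for i in range(rows) for j in range(cols) if M[i][j] == 1}

# COPY: each input bit's division property splits over the rows that use it.
for j in range(cols):
    prob += pulp.lpSum(t[i, j] for i in range(rows) if M[i][j] == 1) == a[j]
# XOR: each output bit collects the branches feeding into it.
for i in range(rows):
    prob += pulp.lpSum(t[i, j] for j in range(cols) if M[i][j] == 1) == b[i]

# Fix a toy input division property and search for one valid trail.
for j, v in enumerate([1, 0, 0, 0]):
    prob += a[j] == v
prob += pulp.lpSum(b)                      # arbitrary objective
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(pulp.value(x)) for x in b])
```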
Nihal Vatandas, Rosario Gennaro, Bertrand Ithurburn, Hugo Krawczyk
ePrint Report
Offline deniability is the ability to a-posteriori deny having participated in a particular communication session. This property has been widely assumed for the Signal messaging application, yet no formal proof has appeared in the literature. In this paper, we present what we believe is the first formal study of the offline deniability of the Signal protocol. Our analysis shows that building a deniability proof for Signal is non-trivial and requires strong assumptions on the underlying mathematical groups where the protocol is run.

To do so, we study various *implicitly authenticated* key exchange protocols including MQV, HMQV and 3DH/X3DH, the latter being the core key agreement protocol in Signal. We first present examples of mathematical groups where running MQV results in a provably non-deniable interaction. While the concrete attack applies only to MQV, it also exemplifies the problems in attempting to prove the deniability of other implicitly authenticated protocols, such as 3DH. In particular, it shows that the intuition that the minimal transcript produced by these protocols suffices for ensuring deniability does not hold. We then provide a characterization of the groups where deniability holds, defined in terms of a knowledge assumption that extends the Knowledge of Exponent Assumption (KEA).

We conclude the paper by showing two additional positive results. The first is a general theorem that links the deniability of a communication session to the deniability of the key agreement protocol starting the session. This allows us to extend our results on the deniability of 3DH/X3DH to the entire Signal communication session.
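For context, here is a minimal sketch (ours, not from the paper) of the 3DH key derivation underlying Signal's key agreement: with long-term secrets $a, b$ and ephemeral secrets $x, y$, both parties hash the three Diffie-Hellman values $g^{ay}, g^{bx}, g^{xy}$. The toy group parameters below are insecure placeholders; real deployments use X25519.

```python
import hashlib
import secrets

# Toy group parameters, illustration only.
p = 2**127 - 1      # Mersenne prime
g = 3

def kdf(*shared):
    h = hashlib.sha256()
    for s in shared:
        h.update(s.to_bytes(16, "big"))
    return h.hexdigest()

# Long-term (identity) and ephemeral secrets for Alice and Bob.
a, x = (secrets.randbelow(p - 2) + 1 for _ in range(2))
b, y = (secrets.randbelow(p - 2) + 1 for _ in range(2))
A, X = pow(g, a, p), pow(g, x, p)   # Alice's public values
B, Y = pow(g, b, p), pow(g, y, p)   # Bob's public values

# 3DH: both sides combine g^{ay}, g^{bx}, g^{xy}; note there is no
# long-term/long-term term g^{ab}, which is what makes the transcript minimal.
k_alice = kdf(pow(Y, a, p), pow(B, x, p), pow(Y, x, p))
k_bob   = kdf(pow(A, y, p), pow(X, b, p), pow(X, y, p))
assert k_alice == k_bob
print(k_alice)
```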
William Zhang, Yu Xia
ePrint Report
We present advancements for interactive arguments with Hydra, a novel verifiable computation system. Hydra introduces two new disjoint interactive argument protocols geared towards the efficient pipelining of circuit verification. The first is specific to subcircuits, where a deep circuit is broken up into smaller parts and proven concurrently. The second is a more general scheme where all layers of the circuit can be proven in parallel, removing the dependency on the layer-wise synchronous execution of the protocol. Compared to non-interactive SNARKs, which rely on knowledge-type assumptions (or the random oracle model), and to theoretical non-interactive arguments based on standard assumptions that are not useful in practice, Hydra achieves a sweet spot with a practical approach. From standard assumptions, Hydra collapses the round complexity to polylogarithmic in the width of the circuit, while incurring only a polylogarithmic blowup in bandwidth and verifier time complexity. We implement the full verification flow, including both protocols and a logic parser used to convert traditional logic circuit compilation outputs into provable layered arithmetic representations. We perform experimental evaluations of our proposals and demonstrate protocol time efficiency improvements of up to 34.8x and 4.3x, respectively, compared to traditional approaches on parallel hardware.
Marc Schink, Alexander Wagner, Florian Unterstein, Johann Heyszl
ePrint Report
Using passwords for authentication has been proven vulnerable in countless security incidents. Hardware authentication tokens effectively prevent most password-related security issues and indisputably improve security. However, we would like to highlight that there are new threats from attackers with physical access which need to be discussed. Supply chain adversaries may manipulate devices on a large scale and install backdoors before they even reach end users. In evil maid scenarios, specific devices may even be attacked while already in use. Hence, we thoroughly investigate the security and trustworthiness of eight commercially available open source authentication tokens, including devices from the two market leaders: SoloKeys and Nitrokey. Unfortunately, we identify and practically verify significant vulnerabilities in all eight examined tokens. Some of them are based on severe, previously undiscovered vulnerabilities in three major microcontroller products which are used at a large scale in various products. Our findings clearly emphasize the significant threat from supply chain and evil maid scenarios, since the attacks are practical and only require moderate attacker effort. Fortunately, we are able to describe software-based countermeasures as effective improvements to retrofit the examined devices. To improve the security and trustworthiness of future authentication tokens, we also derive important general design recommendations.
Charalampos Papamanthou, Cong Zhang, Hong-Sheng Zhou
ePrint Report
Digital signatures have been widely used as building blocks for constructing complex cryptosystems. To facilitate the security analysis of a complex system, we expect the underlying building blocks to achieve desirable composability. Notably, Canetti (FOCS 2001) and then Maurer et al. (TCC 2004) proposed analysis frameworks: the Universal Composability framework for cryptographic protocols and the indifferentiability framework for cryptographic objects.

In this paper, we develop a “lifting strategy” which allows us to compile multiple existing practical signature schemes over cyclic groups (e.g., Schnorr, Boneh-Boyen) into schemes achieving a very stringent security guarantee, in an idealized model of the generic (bilinear) group, without introducing much extra efficiency loss. What is more interesting is that, in our design, even if the involved idealized model does not exist, our compiled construction is still able to achieve the classical notion of unforgeability.

To achieve both indifferentiability and good efficiency, we develop new techniques in the generic (bilinear) group model.
Ioanna Karantaidou, Foteini Baldimtsi
ePrint Report
Cryptographic accumulators are a crucial building block for a variety of applications where one needs to represent a set of elements in a compact format while still being able to provide proofs of (non)membership. In this work, we give a number of accumulator constructions for the bilinear pairing setting in the trapdoor-based scenario, where a trusted manager maintains the accumulator. Using modular accumulator techniques, we present the first optimally efficient (in terms of communication cost) dynamic, positive accumulators in the pairing setting. Additionally, we present a novel modular approach to construct universal accumulators that avoid costly non-membership proofs. We instantiate our generic construction and present the first universal accumulator in the bilinear pairing setting that achieves constant parameter size, constant cost for element additions/deletions and witness generation by the manager, constant witness updates by the users, and constant (non)membership verification. We finally show how our proposed universal accumulator construction can give rise to efficient ZK accumulators with constant non-membership witness updates.
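As background (our own sketch in the style of Nguyen's pairing-based accumulator, not the paper's construction), a trapdoor-holding manager accumulates a set $X$ as $\mathrm{acc} = g^{\prod_{x \in X}(x+s)}$ and issues membership witnesses $w_x = g^{\prod_{y \in X, y \neq x}(y+s)}$; publicly, verification is the pairing check $e(w_x, g^{x+s}) = e(\mathrm{acc}, g)$. The toy code below mimics this relation in the exponent, using a plain modular group as a stand-in for the pairing group.

```python
from math import prod
import secrets

# Toy stand-in group Z_p*; the real construction lives in a bilinear pairing
# group of prime order, and verification uses a pairing instead of the trapdoor.
p = 2**127 - 1                      # Mersenne prime, illustration only
g = 3

s = secrets.randbelow(2**61) + 1    # manager's trapdoor
X = [10, 20, 30]                    # accumulated set

acc = pow(g, prod(x + s for x in X), p)                  # acc = g^{prod (x+s)}
w20 = pow(g, prod(x + s for x in X if x != 20), p)       # membership witness for 20

# Verification relation e(w, g^{x+s}) = e(acc, g): in the exponent this is
# w^{x+s} == acc, which a pairing lets anyone check without knowing s.
assert pow(w20, 20 + s, p) == acc
print("membership witness for 20 verifies")
```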
Yevgeniy Dodis, Kevin Yeo
ePrint Report
In this work we challenge the common misconception that information-theoretic (IT) privacy is too impractical to be used in the real-world: we propose to build simple and $\textit{reusable}$ IT-encryption solutions whose only efficiency penalty (compared to computationally-secure schemes) comes from a large secret key size, which is often a rather minor inconvenience, as storage is cheap. In particular, our solutions are $\textit{stateless}$ and $\textit{locally computable at the optimal rate}$, meaning that honest parties do not maintain state and read only (optimally) small portions of their large keys with every use.

Moreover, we also propose a novel architecture for outsourcing the storage of these long keys to a network of semi-trusted servers, trading the need to store large secrets for the assumption that it is hard to simultaneously compromise too many publicly accessible ad-hoc servers. Our architecture supports $\textit{everlasting privacy}$ and $\textit{post-application security}$ of the derived one-time keys, resolving two major limitations of a related model for outsourcing key storage, called the bounded storage model.

Both of these results come from nearly optimal constructions of so-called $\textit{doubly-affine extractors}$: locally-computable, seeded extractors $\textbf{Ext}$(X,S) which are linear functions of X (for any fixed seed S), and protect against bounded affine leakage on X. This holds unconditionally, even if (a) affine leakage may $\textit{adaptively depend}$ on the extracted key R = $\textbf{Ext}$(X, S); and (b) the seed S is only $\textit{computationally}$ secure. Neither of these properties is possible with general-leakage extractors.
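To illustrate local computability and linearity in X (a toy sketch of the general idea, not the paper's doubly-affine extractor), the following seeded extractor reads only a few seed-selected positions of the long key X and XORs them, so for any fixed seed it is a linear function of X over GF(2).

```python
import random
import secrets

# A long stored key X (as bits) and a short public seed.
X = [secrets.randbelow(2) for _ in range(1_000_000)]

def local_linear_extract(X, seed, out_bits=128, taps_per_bit=64):
    """Toy locally-computable extractor: each output bit XORs a few
    seed-selected positions of X, so Ext(., S) is linear over GF(2)."""
    rng = random.Random(seed)              # the seed determines which positions are read
    out = []
    for _ in range(out_bits):
        bit = 0
        for _ in range(taps_per_bit):      # reads only taps_per_bit cells per output bit
            bit ^= X[rng.randrange(len(X))]
        out.append(bit)
    return out

R = local_linear_extract(X, seed=secrets.token_bytes(16))
print("".join(map(str, R)))
```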
Akinori Kawachi, Harumichi Nishimura
ePrint Report
The private simultaneous messages (PSM) model is a non-interactive version of secure multiparty computation (MPC), which has been intensively studied to examine the communication cost of secure computation. We consider its quantum counterpart, the private simultaneous quantum messages (PSQM) model, and examine the advantages of quantum communication and prior entanglement in this model.

In the PSQM model, $k$ parties $P_1,\ldots,P_k$ initially share a common random string (or entangled states in a stronger setting), and they have private classical inputs $x_1,\ldots, x_k$. Every $P_i$ generates a quantum message from the private input $x_i$ and the shared random string (entangled states), and then sends it to the referee $R$. Receiving the messages from the $k$ parties, $R$ computes $F(x_1,\ldots,x_k)$ from the messages. Then, $R$ learns nothing except for $F(x_1,\ldots,x_k)$ as the privacy condition.

We obtain the following results for this PSQM model. ($i$) We demonstrate that the privacy condition inevitably increases the communication cost in the two-party PSQM model as well as in the classical case presented by Applebaum, Holenstein, Mishra, and Shayevitz [Journal of Cryptology 33(3), 916--953 (2020)]. In particular, we prove a lower bound $(3-o(1))n$ of the communication complexity in PSQM protocols with a shared random string for random Boolean functions of $2n$-bit input, which is larger than the trivial upper bound $2n$ of the communication complexity without the privacy condition. ($ii$) We demonstrate a factor two gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings by designing a multiparty PSQM protocol with shared entangled states for a total function that extends the two-party equality function. ($iii$) We demonstrate an exponential gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings for a two-party partial function.
Ripon Patgiri
ePrint Report
Symmetric-key cryptography is widely used due to its capability to provide a strong defense against diverse attacks; however, it is prone to cryptanalysis attacks. Therefore, we propose a novel and highly secure symmetric-key cryptosystem, symKrypt for short, to defend against diverse attacks and provide absolute security. Our proposed algorithm changes private keys in each block of communication, i.e., symKrypt uses multiple private keys to encrypt a single block of a message. Moreover, symKrypt keeps secret both the bit mixing of the original message with the private keys and the number of private keys. In addition, the private keys are generated dynamically based on the initial inputs using a pseudo-random number generator which is highly unpredictable and secure. In this article, we theoretically analyze the capabilities of symKrypt and provide an experimental demonstration using millions of private keys to prove its correctness. Furthermore, we evaluate the proposed pseudo-random number generator experimentally using the NIST SP 800-22 statistical test suite; it passes all 15 tests in the suite. symKrypt is the first model to use multiple private keys in encryption, yet it remains lightweight and powerful.
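As a generic illustration only (the abstract does not specify symKrypt's key schedule or bit mixing, so this is not the proposed algorithm), the pattern of deriving several fresh keys per block from an initial secret and folding them all into the block encryption can be sketched as follows.

```python
import hashlib
import secrets

def derive_block_keys(master_secret, block_index, num_keys, key_len=16):
    """Derive several per-block keys from an initial secret using a
    hash-based generator (a stand-in, not symKrypt's PRNG)."""
    keys = []
    for k in range(num_keys):
        material = hashlib.sha256(
            master_secret + block_index.to_bytes(8, "big") + k.to_bytes(4, "big")
        ).digest()
        keys.append(material[:key_len])
    return keys

def encrypt_block(block, keys):
    """Toy combination: XOR the block with every derived key in turn."""
    out = bytearray(block)
    for key in keys:
        out = bytearray(x ^ y for x, y in zip(out, key))
    return bytes(out)

master = secrets.token_bytes(32)
keys = derive_block_keys(master, block_index=0, num_keys=4)
print(encrypt_block(b"exactly16bytes!!", keys).hex())
```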
Jakub Klemsa
ePrint Report
Unlike traditional and/or standardized ciphers, TFHE offers much room for the setup of its parameters. Not only does the parameter choice affect the plaintext space size and security, it also greatly impacts the performance of TFHE, in particular its bootstrapping. In this paper, we provide an exhaustive description of TFHE, including its foundations, (functional) bootstrapping, and error propagation during all operations. In addition, we outline a bootstrapping scenario without the key switching step. Based on our thorough summary, we suggest an approach for the setup of TFHE parameters with particular respect to bootstrapping efficiency. Finally, we propose twelve setups of real-world TFHE parameters for six different scenarios, with and without key switching, respectively, and compare their performance. N.b.: This is a technical paper, mainly intended for researchers interested in TFHE. However, due to its self-containment, it should also be accessible to readers with a basic knowledge of TFHE.