International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 May 2021

Carlo Brunetta, Georgia Tsaloli, Bei Liang, Gustavo Banegas, Aikaterini Mitrokotsa
ePrint Report
We propose a novel primitive called NIVA that allows the distributed aggregation of multiple users' secret inputs by multiple untrusted servers. The returned aggregation result can be publicly verified in a non-interactive way, i.e. the users are not required to participate in the aggregation except for providing their secret inputs. NIVA allows the secure computation of the sum of a large amount of users' data and can be employed, for example, in the federated learning setting in order to aggregate the model updates for a deep neural network.

We implement NIVA, evaluate its communication and execution performance, and compare it with the current state of the art, i.e. the protocol of Segal et al. (CCS 2017) and the VerifyNet protocol of Xu et al. (IEEE TIFS 2020), resulting in better per-user communication and …
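The additive-sharing core that makes such sums possible can be sketched in a few lines of Python. This is a simplified illustration only: NIVA's defining feature, the non-interactive public verifiability of the result, is omitted, and the modulus and share layout below are assumptions, not details from the paper.

```python
import secrets

P = 2**61 - 1  # public prime modulus (illustrative choice)

def share(value, n_servers):
    """Split a secret value into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares):
    """Each server sums the shares it received; combining the
    partial sums reveals only the total, not individual inputs."""
    n_servers = len(all_shares[0])
    partial = [sum(user[i] for user in all_shares) % P
               for i in range(n_servers)]
    return sum(partial) % P

inputs = [5, 17, 42]                    # users' secret inputs
shared = [share(x, 3) for x in inputs]  # each user shares to 3 servers
assert aggregate(shared) == sum(inputs) % P
```

Each server only ever sees uniformly random shares, so no individual input is revealed; only the combined partial sums disclose the total.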
Behzad Abdolmaleki, Hamidreza Khoshakhlagh, Helger Lipmaa
ePrint Report
We define smooth zero-knowledge hash functions (SZKHFs) as smooth projective hash functions (SPHFs) for which the completeness holds even when the language parameter lpar and the projection key HP were maliciously generated. We prove that blackbox SZKHF in the plain model is impossible even if lpar was honestly generated. We then define SZKHF in the registered public key (RPK) model, where both lpar and HP are possibly maliciously generated but accepted by an RPK server, and show that the CRS-model trapdoor SPHFs of Benhamouda et al. are also secure in the weaker RPK model. Then, we define and instantiate subversion-zero knowledge SZKHF in the plain model. In this case, both lpar and HP are completely untrusted, but one uses non-blackbox techniques in the security proof.
Arsalan Javeed, Cemal Yilmaz, Erkay Savas
ePrint Report
In this work, we present a novel approach, called Detector+, to detect, isolate, and prevent timing-based side channel attacks (i.e., timing attacks) at runtime. The proposed approach is based on a simple observation: the time measurements required by timing attacks differ from those made by benign applications, as these attacks need to measure the execution times of typically quite short-running operations. Detector+ therefore monitors the time readings made by processes and marks consecutive pairs of readings that are close to each other in time as suspicious. In the presence of suspicious time measurements, Detector+ introduces noise into the measurements to prevent the attacker from extracting information from them. The sequence of suspicious time measurements is then analyzed by a sliding-window-based approach to pinpoint the malicious processes at runtime. We have empirically evaluated the proposed approach using five well-known timing attacks, including Meltdown, together with their variations, representing some of the mechanisms an attacker can employ to become stealthier. In one evaluation setup, each type of attack was carried out concurrently by multiple processes; in the other, multiple types of attacks were carried out concurrently. In all the experiments, Detector+ detected all the malicious time measurements with almost perfect accuracy, prevented all the attacks, and correctly pinpointed all the malicious processes involved after they had made only a few time measurements, with an average runtime overhead of 1.56%.
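A toy version of the sliding-window detection idea can be sketched as follows; the threshold, window length, and alarm fraction are hypothetical parameters, not values from the paper:

```python
from collections import deque

THRESHOLD_NS = 1_000    # readings closer than this count as "suspicious" (assumed)
WINDOW = 100            # sliding window length (assumed)
ALARM_FRACTION = 0.5    # fraction of suspicious pairs that triggers an alarm

class TimingMonitor:
    """Per-process monitor in the spirit of Detector+: flag consecutive
    time readings that are very close together, and raise an alarm when
    a sliding window contains mostly suspicious pairs."""
    def __init__(self):
        self.last = None
        self.window = deque(maxlen=WINDOW)

    def observe(self, reading_ns):
        if self.last is not None:
            self.window.append((reading_ns - self.last) < THRESHOLD_NS)
        self.last = reading_ns
        return self.alarmed()

    def alarmed(self):
        if len(self.window) < self.window.maxlen:
            return False
        return sum(self.window) / len(self.window) >= ALARM_FRACTION

# A benign process reads the clock rarely; an attacker reads it in tight pairs.
mon = TimingMonitor()
attacker = [i * 500 for i in range(200)]  # closely spaced readings
assert any(mon.observe(t) for t in attacker)
```

A real implementation would hook the timing primitives themselves and add noise once the alarm fires; this sketch shows only the detection logic.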
Collin Chin, Howard Wu, Raymond Chu, Alessandro Coglio, Eric McCarthy, Eric Smith
ePrint Report
Decentralized ledgers that support rich applications suffer from three limitations. First, applications are provisioned tiny execution environments with limited running time, minimal stack size, and restrictive instruction sets. Second, applications must reveal their state transition, enabling miner frontrunning attacks and consensus instability. Third, applications offer weak guarantees of correctness and safety.

We design, implement, and evaluate Leo, a new programming language designed for formally verified, zero-knowledge applications. Leo provisions a powerful execution environment that is not restricted in running time, stack size, or instruction sets. Besides offering application privacy and mitigating miner-extractable value (MEV), Leo achieves two fundamental properties. First, applications are formally verified with respect to their high-level specification. Second, applications can be succinctly verified by anyone, regardless of the size of application.

Leo is the first known programming language to introduce a testing framework, package registry, import resolver, remote compiler, formally defined language, and theorem prover for general-purpose, zero-knowledge applications.
Gilles Barthe, Benjamin Gregoire, Vincent Laporte, Swarn Priya
ePrint Report
Many security properties of interest are captured by instrumented semantics that model the functional behavior and the leakage of programs. For several important properties, including cryptographic constant-time (CCT), leakage models are sufficiently abstract that one can define instrumented semantics for high-level and low-level programs. One important goal is then to relate the leakage of source programs to the leakage of their compilation---this can be used, e.g., to prove preservation of CCT. To simplify this task, we put forward the idea of structured leakage. In contrast to the usual modeling of leakage as a sequence of observations, structured leakage is tightly coupled with the operational semantics of programs. This coupling greatly simplifies the definition of leakage transformers that map the leakage of source programs to the leakage of their compilation, and it yields more precise statements about the preservation of security properties. We illustrate our methods on the Jasmin compiler and prove preservation results for two policies of interest: CCT and cost.
Aurélien Dupin, Pierrick Méaux, Mélissa Rossi
ePrint Report
Goldreich's pseudorandom generator is a well-known building block for many theoretical cryptographic constructions, from multi-party computation to indistinguishability obfuscation. Its unique efficiency comes from the use of random local functions: each bit of the output is computed by applying a fixed public n-variable Boolean function f to a random public size-n tuple of distinct input bits. Which characteristics the Boolean function f must have to ensure pseudorandomness is a puzzling issue. It has been studied in several works, in particular by Applebaum and Lovett (STOC 2016), who showed that resiliency and algebraic immunity are the key parameters for this purpose. In this paper, we propose the first study of Boolean functions that simultaneously reach maximal algebraic immunity and high resiliency.

1) We assess the possible consequences of the asymptotic existence of such optimal functions. We show how they allow us to build functions reaching all possible algebraic immunity-resiliency trade-offs (respecting the algebraic immunity and Siegenthaler bounds). We provide a new bound on the minimal number of variables n, and thus on the minimal locality, necessary to ensure a secure Goldreich pseudorandom generator. Our results come with a granularity level depending on the strength of our assumptions, from none to the conjectured asymptotic existence of optimal functions.

2) We extensively analyze the possible existence and the properties of such optimal functions. As a first step, we naturally focus on existing families of Boolean functions that are known to be optimal with respect to their algebraic immunity, starting with the promising XOR-MAJ functions. Interestingly, we are able to show that these families do not reach optimality with respect to their resiliency, and that they could be beaten by optimal functions if our conjecture is verified. Thus, one needs to look in another direction to construct optimal functions. We introduce necessary and sufficient conditions for the construction of optimal functions. Finally, we prove the existence of optimal functions in a low number of variables by experimentally exhibiting some of them in up to 12 variables. This directly provides better candidates for Goldreich's pseudorandom generator than the existing XOR-MAJ candidates for polynomial stretches from 2 to 6.
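For illustration, the Goldreich construction with an XOR-MAJ predicate can be sketched as follows; the locality and the split between the XOR and MAJ parts are arbitrary illustrative choices, not secure parameters:

```python
import random

def xor_maj(bits, k):
    """XOR-MAJ predicate: XOR of the first k bits, XORed with the
    majority of the remaining bits."""
    xor_part = 0
    for b in bits[:k]:
        xor_part ^= b
    rest = bits[k:]
    maj = 1 if 2 * sum(rest) > len(rest) else 0
    return xor_part ^ maj

def goldreich_prg(seed, m, n, k, rng):
    """Each output bit applies the fixed predicate to a random public
    size-n tuple of distinct seed bits (the tuples are the public 'wiring')."""
    out = []
    for _ in range(m):
        subset = rng.sample(range(len(seed)), n)  # distinct indices
        out.append(xor_maj([seed[i] for i in subset], k))
    return out

rng = random.Random(0)  # public wiring randomness
seed = [random.getrandbits(1) for _ in range(256)]
stream = goldreich_prg(seed, m=512, n=9, k=4, rng=rng)
assert len(stream) == 512 and set(stream) <= {0, 1}
```

The paper's question is which predicates f (here XOR-MAJ) make such a stretch pseudorandom; the code only shows the construction's shape.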
Mustafa Khairallah
ePrint Report
COFB is a lightweight authenticated encryption (AE) mode based on block ciphers, proposed at CHES 2017, and is the basis for GIFT-COFB, a finalist in the NIST lightweight standardization project. It comes with provable security results that guarantee its security up to the birthday bound in the nonce-respecting model. However, the designers offer multiple versions of this analysis with different details, and the implications of attacks against the scheme are not discussed in depth. In this article, we look at different possible attacks against COFB-like designs, targeting both forgery and confidentiality. We show that the security for both forgery and confidentiality is bounded by the number of forgery attempts. In particular, we show the existence of forgery and confidentiality attacks with success probability $q_f/2^{n/2}$, given $q_f$ forgery attempts, and that both forgery and confidentiality can be broken with $2^{n/2}$ attempts using only a single known-plaintext encryption query. While these attacks do not contradict the claims made by the GIFT-COFB designers, they show its limitations in terms of the number of forgery attempts. They also show that, while GIFT-COFB generates a 128-bit tag, it behaves very similarly to an AE scheme with a 64-bit tag. As an independent result, our analysis contradicts the main theorem of Journal of Cryptology volume 33, pages 703–741 (2020), which includes an improved security proof of COFB compared to the CHES 2017 version. Finally, we discuss the term $nq_f/2^{n/2}$ that appears in the security proofs of GIFT-COFB and the CHES 2017 version, showing why this term is unlikely to be tight and why $q_f/2^{n/2}$ is likely sufficient. We emphasize that the results in this article do not threaten the security of GIFT-COFB within the scope of the NIST lightweight cryptography requirements or the claims made by the designers in the specification of the design.
Ripon Patgiri
ePrint Report
RSA cryptography is an asymmetric communication protocol, and it is facing diverse issues. Recent research works suggest that RSA security has already been broken. On the other hand, AES is the most widely used symmetric-key cryptographic protocol, and it also faces issues; the literature suggests that it is subject to cryptanalysis attacks. AES cryptography requires a shared secret key. The most famous key exchange protocol is Diffie-Hellman; however, it is subject to attacks based on the number field sieve discrete logarithm algorithm, and recent research suggests that Diffie-Hellman is less secure than widely perceived. Moreover, the Logjam attack allows a man-in-the-middle attack on Diffie-Hellman. Thus, we combine the RSA, AES, and Diffie-Hellman algorithms to provide security for the key exchange protocol, called privateDH. Our key objective is to provide security to the Diffie-Hellman algorithm. Therefore, privateDH does not share its data publicly with the intended party; instead, it encrypts all shareable data at the time of key exchange using the AES algorithm. privateDH uses the RSA algorithm to retrieve the public key and avoid a man-in-the-middle attack. Thus, we demonstrate how to provide security to the Diffie-Hellman algorithm to defeat various kinds of attacks.
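For reference, the unauthenticated textbook Diffie-Hellman exchange that privateDH hardens looks like this; the group parameters are illustrative only, and the AES/RSA wrapping that constitutes privateDH's actual contribution is deliberately omitted:

```python
import secrets

# Small illustrative parameters; real deployments use 2048-bit+ groups.
P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, illustrative only
G = 5

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# In plain Diffie-Hellman, the public values a_pub and b_pub travel in
# the clear, which is what enables man-in-the-middle attacks; privateDH
# instead encrypts these exchanged values (under a key bootstrapped via
# RSA) so an interceptor never sees them.
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)  # shared secret
```

The assertion checks the defining property g^(ab) = g^(ba) mod p; everything privateDH adds on top concerns how the two public values are transported.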
Cihangir Tezcan
ePrint Report
Graphics processing units (GPUs) are specially designed for parallel applications and perform parallel operations much faster than central processing units (CPUs). In this work, we focus on the performance of the Advanced Encryption Standard (AES) on GPUs. We present optimizations which remove bank conflicts in shared memory accesses and provide 878.6 Gbps throughput for AES-128 encryption on an RTX 2070 Super, which is equivalent to 4.1 Gbps per Watt. Our optimizations provide more than a 2.56x speed-up over the best GPU results in the literature. Our optimized AES implementations on GPUs even outperform any CPU using the hardware-level AES New Instructions (AES-NI), as well as legacy FPGA-based cluster architectures like COPACOBANA and RIVYERA. Even on a low-end GPU like the MX 250, we obtained 60.0 Gbps throughput for AES-256, which is generally faster than the read/write speeds of solid-state disks. Thus, transitioning from AES-128 to AES-256 when using GPUs would provide military-grade security with no visible performance loss. With these breakthrough performances, GPUs can be used as cryptographic co-processors for file or full-disk encryption to remove the performance loss coming from CPU encryption. With a single GPU as a co-processor, busy SSL servers can be freed from the burden of encryption and use their whole CPU power for other operations. Moreover, these optimizations can help GPUs practically verify theoretically obtained cryptanalysis results, or reduced versions of them, in reasonable time.
Alex May, Floyd Zweydinger
ePrint Report
Due to its amazing speed and multiplicative properties, the Legendre PRF has recently found widespread application, e.g. in Ethereum 2.0, in multiparty computation, and in the quantum-secure signature proposal LegRoast. However, its security has not yet been extensively studied.

The Legendre PRF computes, for a key $k$ on input $x$, the Legendre symbol $L_k(x) = \left( \frac {x+k} {p} \right)$ in some finite field $\mathbb{F}_p$. As is standard, PRF security is analysed by giving an attacker oracle access to $L_k(\cdot)$. Khovratovich's collision-based algorithm recovers $k$ using $L_k(\cdot)$ in time $\sqrt{p}$ with constant memory. It is a major open problem whether this birthday-bound complexity can be beaten.
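Via Euler's criterion, one Legendre PRF output bit can be computed directly; the prime and key below are arbitrary illustrative values:

```python
def legendre_prf(k, x, p):
    """Legendre PRF bit: 0 if x+k is a quadratic residue mod p,
    1 if it is a non-residue. (Convention: Legendre symbol +1 -> 0
    and -1 -> 1; the rare case x + k = 0 mod p, symbol 0, also maps
    to 0 here.)"""
    ls = pow(x + k, (p - 1) // 2, p)  # Euler's criterion
    return 1 if ls == p - 1 else 0

p = 2**61 - 1  # a Mersenne prime, illustrative choice
k = 123456789
bits = [legendre_prf(k, x, p) for x in range(16)]
assert set(bits) <= {0, 1}
```

Each oracle query in the attacks discussed below corresponds to one such evaluation at an attacker-chosen x.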

We show a somewhat surprising wide-ranging analogy between the discrete logarithm problem and Legendre symbol computations. This analogy allows us to adapt various algorithmic ideas from the discrete logarithm setting.

More precisely, we present a small memory multiple-key attack on $m$ Legendre keys $k_1, \ldots, k_m$ in time $\sqrt{mp}$, i.e. with amortized cost $\sqrt{p/m}$ per key. This multiple-key attack might be of interest in the Ethereum context, since recovering many keys simultaneously maximizes an attacker's profit.

Moreover, we show that the Legendre PRF admits precomputation attacks, where the precomputation depends only on the public $p$ -- and not on a key $k$. Namely, an attacker may compute, e.g., in precomputation time $p^{\frac 2 3}$ a hint of size $p^{\frac 1 3}$. On receiving access to $L_k(\cdot)$ in an online phase, the attacker then uses the hint to recover the desired key $k$ in time only $p^{\frac 1 3}$. Thus, the attacker's online complexity beats the birthday bound.

In addition, our precomputation attack can also be combined with our multiple-key attack. We explicitly give various tradeoffs between precomputation and online phase. E.g. for attacking $m$ keys one may spend time $mp^{\frac 2 3}$ in the precomputation phase for constructing a hint of size $m^2 p^{\frac 1 3}$. In an online phase, one then finds {\em all $m$ keys in total time} only $p^{\frac 1 3}$.

Precomputation attacks might again be interesting in the Ethereum 2.0 context, where keys are frequently changed such that a heavy key-independent precomputation pays off.
Christopher Battarbee, Delaram Kahrobaei, Siamak F. Shahandashti
ePrint Report
It was recently demonstrated that the Matrix Action Key Exchange (MAKE) algorithm, a new type of key exchange protocol using the semidirect product of matrix groups, is vulnerable to a linear algebraic attack if the matrices are over a commutative ring. In this note, we establish conditions under which protocols using matrices over a non-commutative ring are also vulnerable to this attack. We then demonstrate that group rings $R[G]$ used in a similar key exchange protocol, where $R$ is a commutative ring and $G$ is a non-abelian group, are examples of non-commutative rings that satisfy these conditions.
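A group ring $R[G]$ with $R = \mathbb{Z}$ and $G = S_3$ can be sketched as follows, representing elements as dicts from permutations to coefficients; this is only meant to illustrate the non-commutativity the note relies on:

```python
def compose(g, h):
    """Permutation composition (g after h), permutations as tuples."""
    return tuple(g[h[i]] for i in range(len(h)))

def gr_mul(a, b):
    """Multiply two group-ring elements of Z[S3]: convolve the
    coefficient dicts over the group operation."""
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            gh = compose(g, h)
            out[gh] = out.get(gh, 0) + cg * ch
    return {g: c for g, c in out.items() if c != 0}

e, s = (0, 1, 2), (1, 0, 2)  # identity and a transposition in S3
c = (1, 2, 0)                # a 3-cycle
x = {e: 1, s: 2}
y = {c: 3}
# Non-commutativity of Z[S3]: x*y != y*x in general.
assert gr_mul(x, y) != gr_mul(y, x)
```

With a commutative R and non-abelian G, the ring R[G] is non-commutative, which is exactly the setting whose vulnerability conditions the note establishes.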
Virtual event, Anywhere on Earth, 10 November - 11 November 2021
Event Calendar
Event date: 10 November to 11 November 2021
Submission deadline: 14 June 2021
Notification: 9 July 2021
Virtual event, Anywhere on Earth, 6 September - 9 September 2021
Event Calendar
Event date: 6 September to 9 September 2021
Submission deadline: 30 May 2021
Notification: 30 June 2021

17 May 2021

Virtual event, Anywhere on Earth, 17 September 2021
Event Calendar
Event date: 17 September 2021
Submission deadline: 19 July 2021
Telecom Paris - Institut Polytechnique de Paris
Job Posting
A permanent position is open at Telecom Paris for an Assistant or Associate Professor in the Theory of Quantum Information and Computation. The successful candidate will contribute to the activities of the IQA group at Telecom Paris, Institut Polytechnique de Paris, and to the Quantum@Paris-Saclay center. We are looking for candidates who have a strong research record and a commitment to undergraduate and graduate education and training, and who have demonstrated their ability to carry out and develop a research activity combining computer science and quantum technologies. Areas of expertise may include, but are not limited to, quantum algorithms and complexity, quantum control, quantum communications and cryptography, and quantum information processing.

Closing date for applications:

Contact: romain.alleaume@telecom-paris.fr

More information: https://institutminestelecom.recruitee.com/l/en/o/maitre-de-conferences-en-theorie-de-linformation-quantique-et-du-calcul-quantique-fh-a-telecom-paris-cdi

Muhammad ElSheikh, Amr M. Youssef
ePrint Report
With the introduction of the division trail, the bit-based division property (BDP) has become the most efficient method to search for integral distinguishers. The notion of the division trail allows us to automate the search process by modelling the propagation of the BDP as a set of constraints that can be solved using generic mixed-integer linear programming (MILP) and SMT/SAT solvers. The current models for the basic operations and Sboxes are efficient and accurate. In contrast, the two approaches to model the propagation of the BDP through a non-bit-permutation linear layer are either inaccurate or inefficient. The first approach relies on decomposing the matrix multiplication of the linear layer into COPY and XOR operations. The model obtained by this approach is efficient in terms of the number of constraints, but it is not accurate and might add invalid division trails to the search space, which might lead to missing the balanced property of some bits. The second approach employs a one-to-one map between the valid division trails through the primitive matrix representing the linear layer and its invertible sub-matrices. Despite the fact that the model obtained by this approach is accurate, it is inefficient, i.e., it produces a large number of constraints for large linear layers like that of Kuznyechik. In this paper, we address this problem by utilizing the one-to-one map to propose a new MILP model and a search procedure for large non-bit-permutation linear layers. As a proof of the effectiveness of our approach, we improve the previous 3- and 4-round integral distinguishers of Kuznyechik and the 4-round one of PHOTON's internal permutation ($P_{288}$). We also report, for the first time, a 4-round integral distinguisher for the Kalyna block cipher and a 5-round integral distinguisher for PHOTON's internal permutation ($P_{288}$).
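The COPY/XOR/AND propagation rules that such models encode as constraints can be illustrated with a toy trail enumeration; the rules below follow the standard bit-based formulation as I understand it, and the example circuit is made up for illustration:

```python
def copy_trails(a):
    """COPY: x -> (y1, y2). A property-1 bit propagates to exactly
    one of the two copies; a property-0 bit to neither."""
    return [(0, 0)] if a == 0 else [(1, 0), (0, 1)]

def xor_trails(a1, a2):
    """XOR: (x1, x2) -> y with b = a1 + a2; no valid trail if both
    inputs carry the property."""
    return [a1 + a2] if a1 + a2 <= 1 else []

def and_trails(a1, a2):
    """AND: (x1, x2) -> y; the output carries the property if either
    input does."""
    return [0] if a1 == a2 == 0 else [1]

# Enumerate all division trails through z = (x AND y) XOR x,
# starting from input property (a_x, a_y) = (1, 0).
trails = set()
for x1, x2 in copy_trails(1):      # x is used twice -> COPY
    for t in and_trails(x1, 0):    # x AND y, with a_y = 0
        trails.update(xor_trails(t, x2))
print(sorted(trails))
```

MILP models express exactly these case distinctions as linear constraints per wire; the paper's contribution is doing this efficiently and accurately for large linear layers rather than for single gates.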
Nihal Vatandas, Rosario Gennaro, Bertrand Ithurburn, Hugo Krawczyk
ePrint Report
Offline deniability is the ability to deny, a posteriori, having participated in a particular communication session. This property has been widely assumed for the Signal messaging application, yet no formal proof has appeared in the literature. In this paper, we present what we believe is the first formal study of the offline deniability of the Signal protocol. Our analysis shows that building a deniability proof for Signal is non-trivial and requires strong assumptions about the underlying mathematical groups in which the protocol is run.

To do so, we study various *implicitly authenticated* key exchange protocols including MQV, HMQV and 3DH/X3DH, the latter being the core key agreement protocol in Signal. We first present examples of mathematical groups where running MQV results in a provably non-deniable interaction. While the concrete attack applies only to MQV, it also exemplifies the problems in attempting to prove the deniability of other implicitly authenticated protocols, such as 3DH. In particular, it shows that the intuition that the minimal transcript produced by these protocols suffices for ensuring deniability does not hold. We then provide a characterization of the groups where deniability holds, defined in terms of a knowledge assumption that extends the Knowledge of Exponent Assumption (KEA).

We conclude the paper by showing two additional positive results. The first is a general theorem that links the deniability of a communication session to the deniability of the key agreement protocol starting the session. This allows us to extend our results on the deniability of 3DH/X3DH to the entire Signal communication session.
William Zhang, Yu Xia
ePrint Report
We present advancements for interactive arguments with Hydra, a novel verifiable computation system. Hydra introduces two new disjoint interactive argument protocols geared towards the efficient pipelining of circuit verification. The first is specific to subcircuits, where a deep circuit is broken up into smaller parts that are proven concurrently. The second is a more general scheme in which all layers of the circuit can be proven in parallel, removing the dependency on the layer-wise synchronous execution of the protocol. Compared to non-interactive SNARKs, which rely on knowledge-type assumptions (or the random oracle model), and to theoretical non-interactive arguments based on standard assumptions that are not useful in practice, Hydra achieves a sweet spot with a practical approach. From standard assumptions, Hydra collapses the round complexity to polylogarithmic in the width of the circuit, while incurring only a polylogarithmic blowup in bandwidth and verifier time complexity. We implement the full verification flow, including both protocols and a logic parser that converts traditional logic circuit compilation outputs into provable layered arithmetic representations. We experimentally evaluate our proposals and demonstrate protocol time efficiency improvements of up to 34.8x and 4.3x, respectively, compared to traditional approaches on parallel hardware.
Marc Schink, Alexander Wagner, Florian Unterstein, Johann Heyszl
ePrint Report
Using passwords for authentication has been proven vulnerable in countless security incidents. Hardware authentication tokens effectively prevent most password-related security issues and indisputably improve security. However, we would like to highlight that there are new threats from attackers with physical access which need to be discussed. Supply chain adversaries may manipulate devices on a large scale and install backdoors before they even reach end users. In evil maid scenarios, specific devices may even be attacked while already in use. Hence, we thoroughly investigate the security and trustworthiness of eight commercially available open source authentication tokens, including devices from the two market leaders: SoloKeys and Nitrokey. Unfortunately, we identify and practically verify significant vulnerabilities in all eight examined tokens. Some of them are based on severe, previously undiscovered vulnerabilities of three major microcontroller products which are used at large scale in various products. Our findings clearly emphasize the significant threat from supply chain and evil maid scenarios, since the attacks are practical and require only moderate attacker effort. Fortunately, we are able to describe software-based countermeasures as effective improvements to retrofit the examined devices. To improve the security and trustworthiness of future authentication tokens, we also derive important general design recommendations.
Charalampos Papamanthou, Cong Zhang, Hong-Sheng Zhou
ePrint Report
Digital signatures have been widely used as building blocks for constructing complex cryptosystems. To facilitate the security analysis of a complex system, we expect the underlying building blocks to achieve desirable composability. Notably, Canetti (FOCS 2001) and later Maurer et al. (TCC 2004) proposed such analysis frameworks: the Universal Composability framework for cryptographic protocols, and the indifferentiability framework for cryptographic objects.

In this paper, we develop a “lifting strategy” that allows us to compile multiple existing practical signature schemes based on cyclic groups (e.g., Schnorr, Boneh-Boyen) so that they achieve a very stringent security guarantee in an idealized model of the generic (bilinear) group, without introducing much extra efficiency loss. What is more interesting is that, in our design, even if the involved idealized model does not exist, our compiled construction is still able to achieve the classical notion of unforgeability.

To achieve both indifferentiability and good efficiency, we develop new techniques in the generic (bilinear) group model.