International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

28 September 2022

Shengtong Zhang
ePrint Report
Let $P(x, y)$ be a bivariate polynomial with coefficients in $\mathbb{C}$. Form the $n \times n$ matrices $L_n$ whose $(i, j)$ entry is $P(i, j)$. Define the matrices $M_n = I_n - L_n$.

We show that $\mu_n = \det(M_n)$ is a polynomial in $n$, thus answering a conjecture of Naccache and Yifrach-Stav.
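
As a quick illustration (ours, not the paper's; the sample polynomial below is arbitrary), the claim can be sanity-checked numerically: computing $\mu_n$ exactly for small $n$ and taking successive finite differences, the differences vanish after finitely many steps, as they would for any polynomial in $n$.

```python
# Minimal sketch (ours, not from the paper): check numerically that
# mu_n = det(I_n - L_n) behaves like a polynomial in n for a sample P,
# by looking at successive finite differences of the computed values.
from fractions import Fraction

def mu(P, n):
    """det(I_n - L_n) with (L_n)_{ij} = P(i, j), indices 1..n, exact arithmetic."""
    M = [[Fraction(int(i == j)) - Fraction(P(i, j)) for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    det = Fraction(1)
    for c in range(n):                      # exact Gaussian elimination over Q
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= factor * M[c][k]
    return det

P = lambda x, y: x * y + x + y              # an arbitrary sample polynomial
values = [mu(P, n) for n in range(1, 10)]
print(values)

diffs = values                              # finite differences of a degree-d
while diffs and any(diffs):                 # polynomial vanish after d + 1 steps
    diffs = [b - a for a, b in zip(diffs, diffs[1:])]
print("remaining differences:", diffs)
```
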
Deevashwer Rathee, Guru Vamsi Policharla, Tiancheng Xie, Ryan Cottone, Dawn Song
ePrint Report
ZEBRA is an Anonymous Credential (AC) scheme, supporting auditability and revocation, that provides practical on-chain verification for the first time. It realizes efficient access control on permissionless blockchains while achieving both privacy and accountability. In all prior solutions, users either pay exorbitant fees or lose privacy since authorities granting access can map users to their wallets. Hence, ZEBRA is the first to enable DeFi platforms to remain compliant with imminent regulations without compromising user privacy.

We evaluate ZEBRA and show that it reduces the gas cost incurred on the Ethereum Virtual Machine (EVM) by 11.8x when compared to Coconut [NDSS 2019], the state-of-the-art AC scheme for blockchains. This translates to a reduction in transaction fees from 94 USD to 8 USD on Ethereum in August 2022. However, 8 USD is still high for most applications, and ZEBRA further drives down credential verification costs through batched verification. For a batch of 512 layer-1 and layer-2 wallets, the gas cost is reduced by 35x and 641x on EVM, and the transaction fee is reduced to just 0.23 USD and 0.0126 USD on Ethereum, respectively. For perspective, these costs are comparable to the minimum transaction costs on Ethereum.
Mohammad Mahmoody, Wei Qi, Ahmadreza Rahimi
ePrint Report
Registration-based encryption (RBE) (Garg, Hajiabadi, Mahmoody, Rahimi, TCC'18) aims to offer what identity-based encryption (IBE) offers, but without the key-escrow problem, i.e., the ability of the private-key generator to obtain parties' decryption keys at will. In RBE, parties generate their own secret and public keys and register their public keys with the key curator (KC), who updates a compact public parameter after each registration. The updated public parameter can then be used to securely encrypt messages to registered identities.

A major drawback of RBE, compared with IBE, is that in order to decrypt, parties might need to periodically request so-called decryption updates from the KC. Current RBE schemes require $\Omega(\log n)$ updates after $n$ registrations, while the public parameter is of length $\text{poly}(\log n)$. Clearly, it would be highly desirable to have RBEs with only, say, a constant number of updates. This leads to the following natural question: are so many (logarithmically many) updates necessary for RBE schemes, or can we decrease the frequency of updates significantly?

In this paper, we prove an almost tight lower bound on the number of updates in RBE schemes, as long as the times at which parties receive updates depend only on the parties' registration times, a natural property that holds for all known RBE constructions. More generally, we prove a trade-off between the number of updates in RBEs and the length of the public parameter for any scheme with fixed update times. Indeed, we prove that for any such RBE scheme, if there are $n \geq \binom{k+d}{d+1}$ identities that receive at most $d$ updates, the public parameter needs to be of length $\Omega(k)$. As a corollary, we find that RBE systems with fixed update times and public parameters of length $\text{poly}(\log n)$ require $\Omega(\log n/\log\log n)$ decryption updates, which is optimal up to an $O(\log\log n)$ factor.
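
To see roughly how the corollary follows from the trade-off (our own back-of-the-envelope reading, ignoring constants, not text from the paper): suppose every identity receives at most $d$ updates and the public parameter has length $\text{poly}(\log n)$. Choosing $k$ just above that length, so that a public parameter of length $\Omega(k)$ is ruled out, the theorem forces $n < \binom{k+d}{d+1} \leq (k+d)^{d+1}$, hence $\log n < (d+1)\log(k+d)$. Assuming $d \leq \log n$ (otherwise the bound holds trivially), we have $k+d \leq \text{poly}(\log n)$, so $\log(k+d) = O(\log\log n)$ and therefore $d = \Omega(\log n/\log\log n)$.
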
Valerio Cini, Sebastian Ramacher, Daniel Slamanig, Christoph Striecks, Erkan Tairi
ePrint Report
We propose a novel variant of functional encryption which supports ciphertext updates, dubbed ciphertext updatable functional encryption (CUFE). Such a feature further broadens the practical applicability of the functional-encryption paradigm and is carried out via so-called update tokens. However, allowing update tokens requires some care in the security definition, as we want updates to be doable by any semi-trusted third party and only on ciphertexts. Our contribution is three-fold:

a) We define our new primitive with a security notion in the indistinguishability setting. Within CUFE, functional decryption keys and ciphertexts are labeled with tags, and decryption succeeds only if the tag of the decryption key matches that of the ciphertext. Furthermore, we allow ciphertexts to switch their tags to any other tag via update tokens. Such tokens are generated by the holder of the main secret key and can only be used in the intended direction.

b) We present a generic construction of CUFE for any functionality, as well as for predicates other than equality testing on tags, which relies on the existence of (probabilistic) indistinguishability obfuscation (iO).

c) We present a practical construction of CUFE for the inner-product functionality from standard assumptions (i.e., LWE) in the random-oracle model. On the technical level, we build on the recent functional encryption schemes with fine-grained access control and linear operations on encrypted data (Abdalla et al., AC'20) and introduce an additional ciphertext-updatability feature. Proving security for such a construction turned out to be non-trivial, particularly when revealing keys for the updated challenge ciphertext is allowed. Overall, such a construction enriches the set of known inner-product functional-encryption schemes with the additional updatability feature of ciphertexts.
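
To fix ideas, the following is a minimal sketch of the CUFE syntax as we read it from this abstract; the class and method names are ours, not the paper's, and no concrete instantiation is implied.

```python
# Hedged sketch of the CUFE interface as described in the abstract; all
# names are ours.  Decryption succeeds only when the key tag matches the
# ciphertext tag; update tokens re-tag ciphertexts in one direction only.
from abc import ABC, abstractmethod
from typing import Any, Tuple

class CUFE(ABC):
    @abstractmethod
    def setup(self) -> Tuple[Any, Any]:
        """Return (msk, mpk)."""

    @abstractmethod
    def keygen(self, msk: Any, func: Any, tag: str) -> Any:
        """Functional decryption key for `func`, bound to `tag`."""

    @abstractmethod
    def encrypt(self, mpk: Any, x: Any, tag: str) -> Any:
        """Ciphertext of x under `tag`."""

    @abstractmethod
    def token(self, msk: Any, tag_from: str, tag_to: str) -> Any:
        """Update token, usable only in the direction tag_from -> tag_to."""

    @abstractmethod
    def update(self, tok: Any, ct: Any) -> Any:
        """Re-tag a ciphertext using a token; needs no secret key."""

    @abstractmethod
    def decrypt(self, sk: Any, ct: Any) -> Any:
        """Return func(x) iff the key tag and ciphertext tag match."""
```
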
Rémy Oudompheng, Giacomo Pope
ePrint Report
This note describes the implementation of the Castryck-Decru key recovery attack on SIDH using the computer algebra system SageMath. We describe in detail alternative computation methods for the isogeny steps of the original attack ($(2,2)$-isogenies from a product of elliptic curves and from a Jacobian), using explicit formulas to evaluate these isogenies at given points, motivated both by performance considerations and by working around SageMath limitations. A performance analysis is provided, with focus on the various algorithmic and SageMath-specific improvements made during development, which together amount to approximately an eight-fold performance improvement compared with a naïve reimplementation of the proof of concept.
Rebecca Young, Luke Mather, Elisabeth Oswald
ePrint Report
Recent works on key rank estimation claim that algorithmic key rank estimation is too slow and suggest two new ideas: replacing repeat attacks with simulated attacks (PS-TH-GE rank estimation), and a shortcut rank estimation method that works directly on distinguishing-vector distributions (GEEA). We take these ideas and provide a comprehensive comparison between them, a performant implementation of a classical algorithmic ranking approach, and some earlier work on estimating distinguisher distributions. Our results show, in contrast to the recent work, that the algorithmic ranking approach outperforms GEEA, and that simulation-based ranks are unreliable.
Zheng Yang, Tien Tuan Anh Dinh, Chao Yin, Yingying Yao, Dianshi Yang, Xiaolin Chang, Jianying Zhou
ePrint Report
Vehicle-to-everything (V2X) communication is the key enabler for emerging intelligent transportation systems. Applications built on top of V2X require both authentication and privacy protection for the vehicles. The common approach to meeting both requirements is to use pseudonyms, which are short-term identities. However, both industrial standards and state-of-the-art research are not designed for resource-constrained environments. In addition, they make a strong assumption about the security of the vehicle's on-board computation units. In this paper, we propose LARP, a lightweight auto-refreshing pseudonym protocol for V2X. LARP supports efficient operations for resource-constrained devices and provides security even when parts of the vehicle are compromised. We provide a formal security proof showing that the protocol is secure. We conduct experiments on a Raspberry Pi 4. The results demonstrate that LARP is feasible and practical.
Zheng Yang, Chenglu Jin, Jianting Ning, Zengpeng Li, Tien Tuan Anh Dinh, Jianying Zhou
ePrint Report
Time-based One-Time Password (TOTP) provides a strong second factor for user authentication. In TOTP, a prover authenticates to a verifier by using the current time and a secret key to generate an authentication token (or password) which is valid for a short time period. Our goal is to extend TOTP to the group setting and to provide both authentication and privacy. To this end, we introduce a new authentication scheme, called Group TOTP (GTOTP), that allows the prover to prove that it is a member of an authenticated group without revealing its identity. We propose a novel construction that transforms any asymmetric TOTP scheme into a GTOTP scheme. Our approach combines Merkle trees and Bloom filters to reduce the verifier's state to constant size.

As a promising application of GTOTP, we show that GTOTP can be used to construct an efficient privacy-preserving Proof of Location (PoL) scheme. We utilize a commitment protocol, a privacy-preserving location proximity scheme, and our GTOTP scheme to build the PoL scheme, in which GTOTP is used not only for user authentication but also as a tool to glue together the other building blocks. In the PoL scheme, with the help of some witnesses, a user can prove its location to a verifier, while ensuring the identity and location privacy of both the prover and the witnesses. Our PoL scheme outperforms alternatives based on group digital signatures. We evaluate our schemes on Raspberry Pi hardware and demonstrate that they achieve practical performance. In particular, password generation and verification take on the order of microseconds and milliseconds, respectively, while proof generation takes less than $1$ second.
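
To give a flavour of how a Merkle tree keeps the verifier's state constant (a simplified sketch of the general idea, ours; the actual GTOTP construction additionally uses Bloom filters and an underlying asymmetric TOTP scheme), the verifier can store a single root while each time period is opened with a short authentication path.

```python
# Simplified sketch (ours): commit to one verification value per time
# period in a Merkle tree; the verifier keeps only the root, and a token
# for period t is checked against a short authentication path.
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

PERIODS = 8                                          # a power of two here
period_secrets = [secrets.token_bytes(16) for _ in range(PERIODS)]
leaves = [H(s) for s in period_secrets]

def build_tree(leaves):
    tree = [leaves]
    while len(tree[-1]) > 1:
        lvl = tree[-1]
        tree.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return tree

tree = build_tree(leaves)
root = tree[-1][0]                                   # the verifier's entire state

def open_path(tree, idx):
    path = []
    for lvl in tree[:-1]:
        path.append(lvl[idx ^ 1])                    # sibling at this level
        idx //= 2
    return path

def verify(root, idx, secret, path):
    node = H(secret)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

t = 5                                                # current time period
assert verify(root, t, period_secrets[t], open_path(tree, t))
```
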
Jiashuo Liu, Jiongjiong Ren, Shaozhen Chen, ManMan Li
ePrint Report
In CRYPTO 2019, Gohr successfully applied deep learning to differential cryptanalysis of the NSA block cipher Speck32/64, achieving higher accuracy than traditional differential distinguishers. Since then, improving neural differential distinguishers has been a mainstream research direction in neural-aided cryptanalysis. However, the current training data formats for neural distinguishers present two barriers: (1) the source of data features is limited to linear combinations of ciphertexts, which does not provide the training samples with additional learnable features for improving the neural distinguishers; and (2) there has been no breakthrough in constructing data formats for network training from the deep-learning perspective. In this paper, drawing on both domain knowledge about deep learning and information from differential cryptanalysis, we use the output features of the penultimate round to propose a two-dimensional and non-realistic input data generation method for neural differential distinguishers. We then validate through experiments and theoretical analysis that the proposed input data format has excellent features. Moreover, combining the idea of multiple ciphertext pairs, we devise two specific models for constructing the data input, MRMSP (Multiple Rounds Multiple Splicing Pairs) and MRMSD (Multiple Rounds Multiple Splicing Differences), and then build new neural distinguishers against the Speck and Simon families, which effectively improve performance compared with previous works. To the best of our knowledge, our neural distinguishers achieve the longest rounds and the highest accuracy for the NSA block ciphers Speck and Simon.
Erik Pohle, Aysajan Abidin, Bart Preneel
ePrint Report
Garbling schemes, a formalization of Yao's garbled circuit protocol, are useful cryptographic primitives both in privacy-preserving protocols and for secure two-party computation. In projective garbling schemes, $n$ values are assigned to each wire in the circuit. Current state-of-the-art schemes project two values. More concretely, we present a projective garbling scheme that assigns $2^n$ values to wires in a circuit comprising XOR and unary projection gates. A generalization of FreeXOR allows the XOR of wires with $2^n$ values to be garbled very efficiently. We then analyze the performance of our scheme by evaluating substitution-permutation ciphers. Using our proposal, we measure high-speed evaluation of the ciphers at a moderately increased cost in garbling and bandwidth. Theoretical analysis suggests that for the nine examined ciphers, one can expect a 4- to 70-fold speed-up in evaluation, with at most a 4-fold increase in garbling cost and at most an 8-fold increase in communication cost compared to state-of-the-art garbling schemes. In an offline/online setting, such as secure function evaluation as a service, the circuit garbling and the communication to the evaluator can take place before the input phase; our scheme thus offers a fast online phase. Furthermore, we present efficient computation formulas for the S-boxes of TWINE and Midori64 in Boolean circuits. To the best of our knowledge, our formulas give the smallest number of AND gates for the S-boxes of these two ciphers.
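
The "free XOR for $2^n$-valued wires" can be pictured with a FreeXOR-style label assignment (a hedged sketch of the general principle, ours, not the paper's actual scheme): give every wire a random base label and fix one global offset per bit position of the wire value; then XOR gates need no garbled rows, since XORing the two input labels yields a valid label for the XOR of the values.

```python
# Hedged sketch (our illustration, not the paper's scheme): FreeXOR-style
# labels for wires carrying n-bit values.  Each wire value v is encoded as
# base_label XOR offset(v), with offset() GF(2)-linear in v, so XOR gates
# are evaluated by simply XORing the two labels held by the evaluator.
import secrets

N_BITS = 4                                   # wire values in {0, ..., 2^4 - 1}

def rand_label():
    return int.from_bytes(secrets.token_bytes(16), "big")

DELTA = [rand_label() for _ in range(N_BITS)]    # global per-bit offsets

def offset(v):
    out = 0
    for i in range(N_BITS):
        if (v >> i) & 1:
            out ^= DELTA[i]
    return out

def label(base, v):
    return base ^ offset(v)

x0, y0 = rand_label(), rand_label()          # base labels of the input wires
z0 = x0 ^ y0                                 # base label of the output wire
a, b = 6, 11                                 # plaintext wire values
assert label(x0, a) ^ label(y0, b) == label(z0, a ^ b)
```
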
Yihong Zhu, Wenping Zhu, Chen Chen, Min Zhu, Zhengdong Li, Shaojun Wei, Leibo Liu
ePrint Report
Classic McEliece is a code-based quantum-resistant public-key scheme characterized by relatively high encapsulation/decapsulation speed, small ciphertexts, and an in-depth security analysis. However, its slow key generation and large public keys hinder wider adoption. Based on this observation, we propose a high-throughput hardware key generator that accelerates key generation in Classic McEliece through algorithm-hardware co-design, while also minimizing the storage overhead caused by the large keys. First, a compact large-scale GF(2) Gaussian elimination is presented, adopting a naive processing array, a singular-matrix-detection-based early abort, and a memory-friendly scheduling strategy. Second, an optimized constant-time hardware sorter is proposed to support regular memory accesses with fewer comparators and less storage. Third, algorithm-level pipelining enables high-throughput processing, allowing for concurrent key generation based on the decoupling of data access and computation.
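
At the algorithm level, the singular-matrix-detection-based early abort can be sketched as follows (our own software illustration with bit-packed rows; it says nothing about the paper's hardware architecture): systematization proceeds column by column and aborts as soon as a column has no pivot, so a fresh random matrix can be drawn immediately instead of finishing a doomed elimination.

```python
# Software sketch (ours, not the hardware design): GF(2) Gaussian
# elimination on bit-packed rows with early abort when no pivot exists.
import random

def systematize_gf2(rows, n_rows):
    """rows: list of ints, bit j of rows[i] is entry (i, j).
    Returns reduced rows, or None if the leading n_rows x n_rows block is singular."""
    rows = rows[:]
    for col in range(n_rows):
        pivot = next((r for r in range(col, n_rows) if (rows[r] >> col) & 1), None)
        if pivot is None:
            return None                      # early abort: retry with fresh randomness
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(n_rows):
            if r != col and (rows[r] >> col) & 1:
                rows[r] ^= rows[col]
    return rows

random.seed(1)
m, n = 8, 16
matrix = [random.getrandbits(n) for _ in range(m)]
print(systematize_gf2(matrix, m))
```
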

27 September 2022

Bangkok, Thailand, 30 January - 3 February 2023
School
Event date: 30 January to 3 February 2023

26 September 2022

Siemen Dhooghe, Aein Rezaei Shahmirzadi, Amir Moradi
ePrint Report
In this paper, we introduce a second-order masking of the AES using the minimal number of shares and a total of 1268 bits of randomness, including the sharing of the plaintext and key. The masking of the S-box is based on the tower-field decomposition of the inversion over bytes, where the changing-of-the-guards technique is used to re-mask the middle branch of the decomposition. The sharing of the S-box is carefully crafted so that it achieves first-order probing security without the use of randomness and so that the sharing of its output is uniform. Multi-round security is achieved by re-masking the state, where we use a theoretical analysis based on the propagation of probed information to reduce the demand for fresh randomness per round. The result is a second-order masked AES which competes with the state of the art in terms of latency and area, but reduces the randomness complexity by more than a factor of eight compared with previously known works. In addition to the corresponding theoretical analysis and proofs of the security of our masked design, it has been implemented on FPGA and evaluated via lab analysis.
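
As background for readers less familiar with masking, a generic second-order Boolean sharing and a simple share refresh look as follows (a toy sketch only, ours; the paper's AES-specific sharing, its uniformity, and its probing-security argument are considerably more involved):

```python
# Toy sketch (ours): second-order Boolean masking of a byte into three
# shares, and a refresh ("re-masking") that injects fresh randomness
# while preserving the shared value.
import secrets

def share(x):
    """Split byte x into three Boolean shares with x = s0 ^ s1 ^ s2."""
    s0, s1 = secrets.randbelow(256), secrets.randbelow(256)
    return [s0, s1, x ^ s0 ^ s1]

def refresh(shares):
    """Re-mask with two fresh random bytes; the XOR of the shares is unchanged."""
    r0, r1 = secrets.randbelow(256), secrets.randbelow(256)
    return [shares[0] ^ r0, shares[1] ^ r1, shares[2] ^ r0 ^ r1]

def unshare(shares):
    result = 0
    for s in shares:
        result ^= s
    return result

x = 0xA7
assert unshare(refresh(share(x))) == x
```
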
Alexandre Duc, Robin Müller, Damian Vizár
ePrint Report
The notion of distributed authenticated encryption was formally introduced by Agrawal et al. in ACM CCS 2018. In their work, they propose the DiSE construction building upon a distributed PRF (DPRF), a commitment scheme and a PRG. We show that most of their constructions do not meet some of the claimed security guarantees. In fact, all the concrete instantiations of DiSE, as well as multiple follow-up papers (one accepted at ACM CCS 2021), fail to satisfy their strongly-secure definitions. We give simple fixes for these constructions and prove their security. We also propose a new construction DiAE using an encryptment instead of a commitment. This modification dispenses with the need to buffer the entire message throughout the encryption protocol, which in turn enables implementations with constant RAM footprint and online message encryption. This is particularly interesting for constrained IoT devices. Finally, we implement and benchmark DiAE and show that it performs similarly to the original DiSE construction.
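
For intuition, the simplest ($n$-out-of-$n$) flavour of a distributed PRF combines independent partial evaluations, for instance by XOR (a toy sketch of the general idea, ours; it is neither DiSE's threshold DPRF nor the DiAE construction):

```python
# Toy sketch (ours): an n-out-of-n distributed PRF where each party holds
# its own key and the combined value is the XOR of the partial outputs.
# No single party learns the combined PRF value from its share alone.
import hashlib, hmac, secrets
from functools import reduce

N_PARTIES = 3
keys = [secrets.token_bytes(32) for _ in range(N_PARTIES)]   # one key per party

def partial_eval(i, x):
    """Party i's contribution on input x (bytes)."""
    return hmac.new(keys[i], x, hashlib.sha256).digest()

def combine(parts):
    return reduce(lambda a, b: bytes(u ^ v for u, v in zip(a, b)), parts)

x = b"nonce-or-message-header"
output = combine(partial_eval(i, x) for i in range(N_PARTIES))
print(output.hex())
```
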
Paweł Cyprys, Shlomi Dolev, Shlomo Moran
ePrint Report
The question whether one-way functions (i.e., functions that are easy to compute but hard to invert) exist is arguably one of the central problems in complexity theory, both from a theoretical and a practical perspective. While proving that such functions exist may be hard, there have been quite a few attempts to provide functions which are one way "in practice", namely, functions that are easy to compute, but for which there are no known polynomial-time algorithms that compute their (generalized) inverse (or for which computing the inverse is as hard as notoriously difficult tasks, like factoring very large integers).

In this paper we study a different approach. We provide a simple heuristic, called self masking, which converts a given polynomial-time computable function $f$ into a self-masked version $\widehat{f}$, which satisfies the following: for a random input $x$, $\widehat{f}^{-1}(\widehat{f}(x))=f^{-1}(f(x))$ w.h.p., but a part of $f(x)$, which is essential for computing $f^{-1}(f(x))$, is masked in $\widehat{f}(x)$. Intuitively, this masking makes it hard to convert an efficient algorithm which computes $f^{-1}$ into an efficient algorithm which computes $\widehat{f}^{-1}$, since the masked parts are available to $f$ but not to $\widehat{f}$. We apply this technique to variants of the subset sum problem which were studied in the context of one-way functions, and obtain functions which, to the best of our knowledge, cannot be inverted in polynomial time by published techniques.
David Naccache, Ofer Yifrach-Stav
ePrint Report
This note describes an observation discovered during a failed cryptanalysis attempt.

Let $P(x,y)$ be a bivariate polynomial with coefficients in $\mathbb{C}$. Form the $n\times n$ matrices $L(n)$ whose $(i,j)$ entry is $P(i,j)$. Define the matrices $M(n)=L(n)-I_n$, where $I_n$ is the $n\times n$ identity matrix.

It appears that $\mu(n)=(-1)^n\det(M(n))$ is a polynomial in $n$ that we did not characterize.

We provide a numerical example.
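
For instance (our own quick check, not the example from the paper): for $P(x,y)=x+y$ one finds $\mu(1)=-1$, $\mu(2)=-6$, $\mu(3)=-17$ and $\mu(4)=-39$, which agree with the degree-four polynomial $\mu(n)=1-n(n+1)-\frac{n^2(n^2-1)}{12}$. This is consistent with $L(n)$ having rank two here, so that $(-1)^n\det(M(n))=\det(I_n-L(n))=1-\operatorname{tr}(L(n))+e_2(L(n))$, where $e_2$ denotes the second elementary symmetric polynomial of the eigenvalues.
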
Nir Bitansky, Arka Rai Choudhuri, Justin Holmgren, Chethan Kamath, Alex Lombardi, Omer Paneth, Ron D. Rothblum
ePrint Report
One of the most fundamental results in game theory is that every finite strategic game has a Nash equilibrium, an assignment of (randomized) strategies to players with the stability property that no individual player can benefit from deviating from the assigned strategy. It is not known how to efficiently compute such a Nash equilibrium --- the computational complexity of this task is characterized by the class PPAD, but the relation of PPAD to other problems and well-known complexity classes is not precisely understood. In recent years there has been mounting evidence, based on cryptographic tools and techniques, showing the hardness of PPAD.

We continue this line of research by showing that PPAD is as hard as learning with errors (LWE) and the iterated squaring (IS) problem, two standard problems in cryptography. Our work improves over prior hardness results that relied either on (1) sub-exponential assumptions or (2) "obfustopia," which can currently be based on a particular combination of three assumptions. Our work additionally establishes public-coin hardness for PPAD (computational hardness for a publicly sampleable distribution of instances), which seems out of reach of the obfustopia approach.

Following the work of Choudhuri et al. (STOC 2019) and subsequent works, our hardness result is obtained by constructing an unambiguous and incrementally-updatable succinct non-interactive argument for IS, whose soundness relies on polynomial hardness of LWE. The result also implies a verifiable delay function with unique proofs, which may be of independent interest.
Bowen LIU, Qiang TANG
ePrint Report
Modern SVD computation dates back to work in the 1960s that formed the basis for the eigensystem package and linear algebra package routines. As a result of this long history of research, SVD is now widely applied in various scenarios, such as recommendation systems and principal component analysis. Furthermore, federated SVD has emerged as a prevalent privacy-preserving technique: the raw data are not exchanged among the parties; instead, each party trains and processes locally and shares only intermediate results. In general, there are two main categories: SVD over horizontally and over vertically partitioned data. Imagine a dataset matrix $M$, where each row is a record from a data subject and the columns are the attributes/features of the records. In the horizontally partitioned setting, each party holds a disjoint subset of the rows of $M$; in the vertically partitioned setting, each party holds a disjoint subset of the columns of $M$ for all the rows. In real-world applications, the horizontally partitioned setting is much more common than the vertically partitioned one. In this paper, we propose a privacy-preserving federated SVD scheme with secure aggregation. The proposed scheme can aggregate SVD results (eigenspaces) from different devices and synchronise the aggregated result with all devices while maintaining privacy protection.
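
To make the horizontally partitioned setting concrete, the following sketch shows a standard aggregation identity that such schemes can build on (our illustration, not the paper's protocol; in the paper the aggregation step would additionally be protected by secure aggregation): the sum of the parties' local Gram matrices equals $M^\top M$ for the stacked matrix $M$, so its eigendecomposition recovers the singular values and right singular vectors.

```python
# Sketch (ours, not the paper's protocol): with horizontally partitioned
# rows, each party shares only its local Gram matrix A_i^T A_i; the sum of
# these equals M^T M for the stacked matrix M, whose eigendecomposition
# yields the singular values / right singular vectors of M.
import numpy as np

rng = np.random.default_rng(0)
parties = [rng.standard_normal((50, 6)) for _ in range(3)]   # 50 records, 6 features each

aggregate = sum(a.T @ a for a in parties)    # in the paper, protected by secure aggregation
eigvals = np.linalg.eigvalsh(aggregate)      # ascending

stacked = np.vstack(parties)                 # never materialized in the federated setting
singular_values = np.linalg.svd(stacked, compute_uv=False)   # descending

assert np.allclose(np.sort(eigvals)[::-1], singular_values ** 2)
print(np.sqrt(np.sort(eigvals)[::-1]))       # singular values recovered from the aggregate
```
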
Basavesh Ammanaghatta Shivakumar, Gilles Barthe, Benjamin Grégoire, Vincent Laporte, Tiago Oliveira, Swarn Priya, Peter Schwabe, Lucas Tabary-Maujean
ePrint Report
The current gold standard of cryptographic software is to write efficient libraries with systematic protections against timing attacks. In order to meet this goal, cryptographic engineers increasingly use high-assurance cryptography tools. These tools guide programmers and provide rigorous guarantees that can be verified independently by library users. However, high-assurance tools reason about overly simple execution models that elide micro-architectural leakage. Thus, implementations validated by high-assurance cryptography tools remain potentially vulnerable to micro-architectural attacks such as Spectre or Meltdown. Moreover, proposed countermeasures are not used in practice due to performance overhead.

We propose, analyze, implement and evaluate an approach for writing efficient cryptographic implementations that are protected against Spectre v1 attacks. Our approach ensures speculative constant-time, an information flow property which guarantees that programs are protected against Spectre v1. Speculative constant-time is enforced by means of a (value-dependent) information flow type system. The type system tracks security levels depending on whether execution is misspeculating. We implement our approach in the Jasmin framework for high assurance cryptography, and use it for protecting all implementations of an experimental cryptographic library that includes highly optimized implementations of symmetric primitives, of elliptic-curve cryptography, and of Kyber, a lattice-based KEM recently selected by NIST for standardization. The performance impact of our protections is very low; for example, less than 1% for Kyber and essentially zero for X25519.
Prabhanjan Ananth, Kai-Min Chung, Xiong Fan, Luowen Qian
ePrint Report
In recent years, functional encryption (FE) has established itself as one of the fundamental primitives in cryptography. The choice of model of computation to represent the functions associated with the functional keys plays a critical role in the complexity of the algorithms of an FE scheme. Historically, the functions are represented as circuits. However, this results in the decryption time of the FE scheme growing proportional to not only the worst case running time of the function but also the size of the input, which in many applications can be quite large.

In this work, we present the first construction of a public-key collusion-resistant FE scheme where the functions associated with the keys are represented as random access machines (RAMs). We base the security of our construction on the existence of: (i) public-key collusion-resistant FE for circuits and (ii) public-key doubly-efficient private-information retrieval [Boyle et al., Canetti et al., TCC 2017]. Our scheme enjoys many nice efficiency properties, including input-specific decryption time.

We also show how to achieve FE for RAMs in the bounded-key setting with weaker efficiency guarantees from laconic oblivious transfer, which can be based on standard cryptographic assumptions. En route to achieving our result, we present conceptually simpler constructions of succinct garbling for RAMs [Canetti et al., Chen et al., ITCS 2016] from weaker assumptions.