International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

23 November 2022

Marcel Nageler, Felix Pallua, Maria Eichlseder
ePrint Report
Romulus-H is a hash function that currently competes as a finalist in the NIST Lightweight Cryptography competition. It is based on the Hirose DBL construction, which is provably secure when used with an ideal block cipher. In practice, however, ideal block ciphers can only be approximated, so the security of concrete instantiations must be cryptanalyzed carefully; the security margin may be higher or lower than in the secret-key setting. So far, the Hirose DBL construction has only been studied with a few other block ciphers, such as IDEA and AES. Romulus-H, however, uses Hirose DBL with the SKINNY block cipher, for which no dedicated analysis has been published so far.

In this work, we present the first third-party analysis of Romulus-H. We propose a new framework for finding collisions in hash functions based on the Hirose DBL construction. This is in contrast to previous work that only focused on free-start collisions. Our framework is based on the idea of joint differential characteristics which capture the relationship between the two block cipher calls in the Hirose DBL construction. To identify good joint differential characteristics, we propose a combination of a MILP and CP model. Then, we use these characteristics in another CP model to find collisions. Finally, we apply this framework to Romulus-H and find practical collisions of the hash function for 10 out of 40 rounds and practical semi-free-start collisions up to 14 rounds.
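For readers unfamiliar with the underlying construction, the following is a minimal Python sketch of a Hirose-style DBL compression function, in which two calls to the same block cipher share the key (chaining half plus message block) and differ only by a fixed constant on the plaintext input; this shared-key structure is what the joint differential characteristics above relate. The toy block cipher and all sizes are illustrative stand-ins, not SKINNY or the paper's parameters.

```python
# Sketch of a Hirose double-block-length (DBL) compression function.
# E is a block cipher with block size n and key size 2n; here we use a
# toy keyed function purely for illustration (Romulus-H uses SKINNY).
import hashlib

N = 16  # toy block size in bytes (illustrative)
CONST = bytes([0x01] + [0x00] * (N - 1))  # nonzero constant c

def toy_blockcipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for an n-bit block cipher with a 2n-bit key.
    # NOT a real cipher, just a fixed-length keyed function for the sketch.
    return hashlib.sha256(key + block).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def hirose_compress(g: bytes, h: bytes, m: bytes) -> tuple[bytes, bytes]:
    """One Hirose DBL compression: two calls to E under the same key h||m,
    on inputs g and g xor c, each in Davies-Meyer style."""
    key = h + m                      # 2n-bit key: chaining half || message block
    g_new = xor(toy_blockcipher(key, g), g)
    h_new = xor(toy_blockcipher(key, xor(g, CONST)), xor(g, CONST))
    return g_new, h_new

# Example: absorb two message blocks starting from an all-zero state.
g, h = bytes(N), bytes(N)
for block in [b"A" * N, b"B" * N]:
    g, h = hirose_compress(g, h, block)
print((g + h).hex())
```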
Tong Cao, Xin Li
ePrint Report
As of 28 January 2022, Filecoin is the largest storage-oriented cryptocurrency by market capitalization. In this system, miners dedicate their storage space to the network and verify transactions to earn rewards. Filecoin's network capacity has now surpassed 15 exbibytes.

In this paper, we propose three temporary block withholding attacks to challenge Filecoin's expected consensus (EC). Specifically, we first deconstruct EC using well-established analysis methods (developed extensively since 2009) to examine the advantages and disadvantages of EC's design. We then present three temporary block withholding schemes that leverage the shortcomings of EC. We build Markov Decision Process (MDP) models for the three attacks to calculate the adversary's gains. We develop Monte Carlo simulators to mimic the mining strategies of the adversary and the other miners and to quantify the expected impact of the three attacks. As a result, we show that our three attacks significantly affect Filecoin's mining fairness and transaction throughput. For instance, when honest miners who control more than half the global storage power assemble their tipsets after the default transmission cutoff time, an adversary with 1% of the global storage power is able to launch temporary block withholding attacks without a loss in revenue, which is rare in existing blockchains. Finally, we discuss the implications of our attacks and propose several countermeasures to mitigate them.
Corentin Verhamme, Gaëtan Cassiers, François-Xavier Standaert
ePrint Report
We investigate the security of the NIST Lightweight Crypto Competition's finalists against side-channel attacks. We start with a mode-level analysis that allows us to put forward three candidates (Ascon, ISAP and Romulus-T) that stand out for their leakage properties and do not require uniform protection of all their computations with (expensive) implementation-level countermeasures. We then implement these finalists and evaluate their respective performances. Our results confirm the interest of so-called leveled implementations (where only the key derivation and tag generation require security against differential power analysis). They also suggest that these algorithms differ more in their qualitative features (e.g., two-pass designs to improve confidentiality under decryption leakage vs. one-pass designs, flexible overheads thanks to masking vs. fully mode-level, easier-to-implement schemes) than in their quantitative features, which all improve over the AES and are quite sensitive to security margins against cryptanalysis.
Siemen Dhooghe
ePrint Report
In this work, we introduce a more advanced fault adversary inspired by the random probing model, called the random fault model, where the adversary can fault all values in the algorithm but where the probability for each fault to occur is limited. The new adversary model is used to evaluate the security of side-channel and fault countermeasures such as Boolean masking, inner product masking, error detection techniques, error correction techniques, multiplicative tags, and shuffling methods. The security analysis reveals novel insights, including: error correction provides little security when faults target more bits; the order between masking and duplication provides a trade-off between side-channel and fault security; and inner product masking and multiplicative masking provide protection that grows exponentially in the field size. Moreover, the results explain the experimental results from CHES 2022 and uncover weaknesses in the shuffling method from SAMOS 2021.
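As a hedged illustration of the random fault model (not the paper's analysis), the toy Monte Carlo below applies independent bit-flip faults with probability p to a 2-share Boolean masking protected by simple duplication, and estimates how often a fault slips through undetected; the countermeasure and parameters are invented for illustration.

```python
# Toy Monte Carlo in the spirit of a "random fault" adversary: every stored
# bit is flipped independently with probability p. We check how often a
# duplication-based detection mechanism misses a fault on a masked secret bit.
import random

def run_trial(p: float) -> str:
    secret = random.getrandbits(1)
    m = random.getrandbits(1)                 # random mask
    shares = [m, secret ^ m]                  # 2-share Boolean masking
    copy = list(shares)                       # duplication for error detection

    # Random fault model: each of the four stored bits faults with prob. p.
    faulted_shares = [s ^ (random.random() < p) for s in shares]
    faulted_copy = [s ^ (random.random() < p) for s in copy]

    if faulted_shares != faulted_copy:
        return "detected"
    recombined = faulted_shares[0] ^ faulted_shares[1]
    return "undetected_fault" if recombined != secret else "no_effect"

def estimate(p: float, trials: int = 100_000) -> dict:
    counts = {"detected": 0, "undetected_fault": 0, "no_effect": 0}
    for _ in range(trials):
        counts[run_trial(p)] += 1
    return {k: v / trials for k, v in counts.items()}

print(estimate(0.01))   # for small p, undetected effective faults occur at rate ~2*p^2
```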
Dmitry Khovratovich, Mary Maller, Pratyush Ranjan Tiwari
ePrint Report
We present a candidate sequential function for a VDF protocol to be used within the Ethereum ecosystem. The new function, called MinRoot, is an optimized iterative algebraic transformation and is a strict improvement over competitors VeeDo and Sloth++. We analyze various attacks on sequentiality and suggest weakened versions for public scrutiny. We also announce bounties on certain research directions in cryptanalysis.
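As a hedged illustration of the general idea behind iterated algebraic sequential functions (in the style of Sloth, not the actual MinRoot round function), the sketch below iterates a modular square root: computing the chain appears inherently sequential, while each step can be verified by a cheap squaring. The prime and iteration count are toy values.

```python
# Toy Sloth-style sequential function: iterate a modular square root.
# Computing the chain is done step by step; verifying it only needs
# squarings, which are much cheaper. Parameters are illustrative only.
P = 2**127 - 1          # toy prime with P % 4 == 3 (a Mersenne prime)

def sqrt_mod(x: int) -> int:
    # For P % 4 == 3, a square root of a quadratic residue x is x^((P+1)/4).
    return pow(x, (P + 1) // 4, P)

def evaluate(seed: int, t: int) -> list[int]:
    """Sequentially apply t rounds; each round must wait for the previous one."""
    chain = [seed % P]
    for _ in range(t):
        x = chain[-1]
        # If x is a non-residue, -x is a residue (since -1 is a non-residue mod P).
        x = x if pow(x, (P - 1) // 2, P) == 1 else P - x
        chain.append(sqrt_mod(x))
    return chain

def verify(chain: list[int]) -> bool:
    """Check each step by squaring; y^2 must equal x or P - x."""
    for x, y in zip(chain, chain[1:]):
        if pow(y, 2, P) not in (x % P, (P - x) % P):
            return False
    return True

chain = evaluate(seed=12345, t=1000)
print(verify(chain))   # True
```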
Siddhartha Chowdhury, Sayani Sinha, Animesh Singh, Shubham Mishra, Chandan Chaudhary, Sikhar Patranabis, Pratyay Mukherjee, Ayantika Chatterjee, Debdeep Mukhopadhyay
ePrint Report
Threshold Fully Homomorphic Encryption (ThFHE) enables arbitrary computation over encrypted data while keeping the decryption key distributed across multiple parties at all times. ThFHE is a key enabler for threshold cryptography and, more generally, secure distributed computing. Existing ThFHE schemes inherently require highly inefficient parameters and are unsuitable for practical deployment. In this paper, we take a first step towards making ThFHE practically usable by (i) proposing a novel ThFHE scheme with a new analysis resulting in significantly improved parameters; and (ii) providing the first ThFHE implementation benchmark based on Torus FHE.

• We propose the first ThFHE scheme with a polynomial modulus-to-noise ratio that supports practically efficient parameters while retaining provable security based on standard quantum-safe assumptions. We achieve this via a novel Rényi divergence-based security analysis of our proposed threshold decryption mechanism.

• We present a highly optimized software implementation of our proposed ThFHE scheme that builds upon the existing Torus FHE library and supports (distributed) decryption on highly resource-constrained ARM-based handheld devices. To the best of our knowledge, this is the first practically efficient implementation of any ThFHE scheme. Along the way, we implement several extensions to the Torus FHE library, including a Torus-based linear integer secret sharing subroutine to support ThFHE key sharing and distributed decryption for any threshold access structure.

We illustrate the efficacy of our proposal via an end-to-end use case involving encrypted computations over a real medical database, and distributed decryptions of the computed result on resource-constrained handheld devices.
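As a hedged, much-simplified illustration of distributed decryption (n-out-of-n additive key sharing with smudging noise, not the paper's threshold scheme or its Rényi-divergence analysis), consider the following toy LWE-style example; all parameters are illustrative.

```python
# Minimal toy of LWE-style distributed decryption with additive key sharing
# (n-out-of-n only): each party adds small "smudging" noise to its partial
# decryption so that its key share is not directly exposed. All parameters
# are toy values and do not reflect the paper's scheme.
import random

q = 2**32
n_dim, n_parties = 64, 3
delta = q // 2                       # encode one bit as 0 or q/2

def small_noise(bound: int) -> int:
    return random.randint(-bound, bound)

# Key generation: secret key s, additively shared among the parties.
s = [random.randint(0, 1) for _ in range(n_dim)]
shares = [[random.randint(0, q - 1) for _ in range(n_dim)] for _ in range(n_parties - 1)]
last = [(s[j] - sum(sh[j] for sh in shares)) % q for j in range(n_dim)]
shares.append(last)

def encrypt(m: int) -> tuple[list[int], int]:
    a = [random.randint(0, q - 1) for _ in range(n_dim)]
    b = (sum(ai * si for ai, si in zip(a, s)) + small_noise(2**8) + delta * m) % q
    return a, b

def partial_decrypt(a: list[int], share: list[int]) -> int:
    # Each party releases <a, s_i> plus comparatively large smudging noise.
    return (sum(ai * si for ai, si in zip(a, share)) + small_noise(2**20)) % q

def combine(b: int, partials: list[int]) -> int:
    phase = (b - sum(partials)) % q              # ~ delta*m + total noise
    return round(phase / delta) % 2

a, b = encrypt(1)
partials = [partial_decrypt(a, sh) for sh in shares]
print(combine(b, partials))                      # expected: 1
```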
Evgeny Alekseev, Andrey Bozhko
ePrint Report
The task of ensuring the required level of security of information systems against adversaries with additional data obtained through side channels (a striking example of such a threat is differential power analysis) has become increasingly relevant in recent years. An effective protection method against side-channel attacks is masking all intermediate variables used in the algorithm with random values. At the same time, many algorithms use masking of different kinds, for example Boolean, byte-wise, and arithmetic; therefore, the problem of switching between maskings of different kinds arises. Switching between Boolean and arithmetic masking is well studied, while no solutions have been proposed for switching between maskings of other kinds. This article recalls the requirements for switching algorithms and presents algorithms for switching between block-wise and arithmetic masking, which include the case of switching between byte-wise and arithmetic masking.
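To make the masking kinds concrete, the following Python snippet shows Boolean, arithmetic (mod 2^k), and byte-wise masking of a value. The "conversion" shown simply recombines and remasks, so it is not a side-channel-secure switching algorithm, only a functional illustration of the input/output relation a real switching algorithm must realize without ever recombining the shares.

```python
# Functional illustration of masking kinds for a k-bit secret x.
# Boolean masking:     x = x1 XOR x2
# Arithmetic masking:  x = (a1 + a2) mod 2^k
# Byte-wise masking:   each byte of x is masked additively mod 2^8.
# The "conversion" below unmasks and remasks, so it is NOT secure against
# side channels; it only shows the relation a real switching algorithm
# has to realize without recombining the shares.
import os

K = 32
MOD = 1 << K

def boolean_mask(x: int) -> tuple[int, int]:
    r = int.from_bytes(os.urandom(K // 8), "big")
    return x ^ r, r

def arithmetic_mask(x: int) -> tuple[int, int]:
    r = int.from_bytes(os.urandom(K // 8), "big")
    return (x - r) % MOD, r

def bytewise_mask(x: int) -> tuple[list[int], list[int]]:
    xb = x.to_bytes(K // 8, "big")
    rb = os.urandom(K // 8)
    return [(a - b) % 256 for a, b in zip(xb, rb)], list(rb)

def insecure_bool_to_arith(x1: int, x2: int) -> tuple[int, int]:
    return arithmetic_mask(x1 ^ x2)        # recombines -> insecure, demo only

x = 0xDEADBEEF
b1, b2 = boolean_mask(x)
a1, a2 = insecure_bool_to_arith(b1, b2)
assert (a1 + a2) % MOD == b1 ^ b2 == x
m, r = bytewise_mask(x)
assert bytes((a + b) % 256 for a, b in zip(m, r)) == x.to_bytes(4, "big")
print("all masking relations hold")
```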
David Chaum, Mario Larangeira, Mario Yaksetig
ePrint Report
The $\mathcal{S}_{leeve}$ construction proposed by Chaum et al. (ACNS'21) introduces an extra security layer for digital wallets by allowing users to generate a "backup key" securely nested inside the secret key of a signature scheme, i.e., ECDSA. The "backup key", which is secret, can be used to issue a "proof of ownership", i.e., only the real owner of this secret key can generate a single such proof, which is based on the WOTS+ signature scheme. The authors of $\mathcal{S}_{leeve}$ formalized the technique for a single proof of ownership and only informally outlined a construction to generalize it to multiple proofs. This work identifies drawbacks in that proposed generalization, e.g., varying signature size and signing/verification complexity, the limitation to a linear construction, etc. We therefore introduce WOTSwana, a generalization of $\mathcal{S}_{leeve}$: an extra security layer that generates multiple proofs of ownership. We put forth a thorough formalization of two constructions: (1) a linear concatenation of numerous WOTS+ private/public keys, and (2) a tree-based construction, i.e., an underlying Merkle tree whose leaves are WOTS+ private/public key pairs. Furthermore, we present the security analysis for multiple proofs of ownership, showing that this work addresses the aforementioned drawbacks of the original construction. In particular, we extend the original security definition for $\mathcal{S}_{leeve}$. Finally, we illustrate an alternative application of our construction by discussing the creation of an encrypted group chat messaging application.
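As a hedged sketch of the tree-based construction (2), the snippet below builds a Merkle tree over hash commitments to a set of one-time key pairs and produces an authentication path for one leaf; the key generation is a placeholder, not WOTS+, and nothing here reflects the paper's exact design.

```python
# Sketch: Merkle tree whose leaves commit to one-time key pairs, in the
# spirit of construction (2). The "keygen" is a placeholder, not WOTS+.
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def placeholder_keygen() -> tuple[bytes, bytes]:
    sk = os.urandom(32)
    return sk, H(b"pk" + sk)          # stand-in for a WOTS+ key pair

def merkle_root(leaves: list[bytes]) -> bytes:
    level = leaves
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves: list[bytes], index: int) -> list[bytes]:
    path, level = [], leaves
    while len(level) > 1:
        path.append(level[index ^ 1])                 # sibling at this level
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_path(leaf: bytes, index: int, path: list[bytes], root: bytes) -> bool:
    node = leaf
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

keys = [placeholder_keygen() for _ in range(8)]         # 8 one-time key pairs
leaves = [H(pk) for _, pk in keys]
root = merkle_root(leaves)
path = auth_path(leaves, 5)
print(verify_path(leaves[5], 5, path, root))            # True
```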
F. Betül Durak, Serge Vaudenay, Melissa Chase
ePrint Report
On the one hand, the web needs to be secured from malicious activities such as bots or DoS attacks; on the other hand, such needs ideally should not justify services tracking people's activities on the web. Anonymous tokens provide a nice tradeoff between allowing an issuer to ensure that a user has been vetted and protecting the users' privacy. However, in some cases, whether or not a token is issued reveals a lot of information to an adversary about the strategies used to distinguish honest users from bots or attackers.

In this work, we focus on designing an anonymous token protocol between a client and an issuer (also a verifier) that enables the issuer to support its fraud detection mechanisms while preserving users' privacy. This is done by allowing the issuer to embed a hidden (from the client) metadata bit into the tokens. We first study an existing protocol from CRYPTO 2020 which is an extension of Privacy Pass from PoPETs 2018; that protocol aimed to provide support for a hidden metadata bit, but provided a somewhat restricted security notion. We demonstrate a new attack, showing that this is a weakness of the protocol, not just the definition. In particular, the metadata bit hiding is weak in the setting where the attacker can redeem some tokens and get feedback on what bit is extracted.

We then revisit the formalism of anonymous tokens with a private metadata bit, consider the more natural notion, and design a scheme which achieves it. In order to design this new secure protocol, we base our construction on algebraic MACs instead of PRFs. Our security definitions capture a realistic threat model where adversaries could, through direct feedback or side channels, learn the embedded bit when the token is redeemed. Finally, we compare our protocol with one of the CRYPTO 2020 protocols, obtaining a 20% more efficient implementation.

22 November 2022

Linz, Austria, 3 July - 6 July 2023
Event Calendar
Event date: 3 July to 6 July 2023

21 November 2022

Hao Yang, Shiyu Shen, Zhe Liu, Yunlei Zhao
ePrint Report
Private comparison schemes constructed on homomorphic encryption offer noninteractive, output-expressive, and parallelizable features, and have advantages in communication bandwidth and performance. In this paper, we propose cuXCMP, which allows negative and float inputs, offers a fully output-expressive feature, and is more extensible and practical compared to XCMP (AsiaCCS 2018). Meanwhile, we introduce several memory-centric optimizations of the constant-term extraction kernel tailored for CUDA-enabled GPUs. Firstly, we fully utilize the shared memory and present compact GPU implementations of NTT and INTT using a single block. Secondly, we fuse multiple kernels into one AKS kernel, which conducts the automorphism and key-switching operations, and reduce the grid dimension for better resource usage, data access rate, and synchronization. Thirdly, we precisely measure the IO latency and choose an appropriate number of CUDA streams to enable concurrent execution of independent operations, yielding a constant-term extraction kernel with perfect latency hiding, i.e., CTX. Combining these approaches, we reduce the overall execution time to an optimal level, and the speedup ratio increases with the comparison scale. For one comparison, we speed up AKS by 23.71×, CTX by 15.58×, and the full scheme by 1.83× (resp., 18.29×, 11.75×, and 1.42×) compared to the C (resp., AVX512) baselines. For 32 comparisons, our CTX and scheme implementations outperform the C (resp., AVX512) baselines by 112.00× and 1.99× (resp., 81.53× and 1.51×).
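For context, the following is a reference (quadratic-time, CPU-side) NTT/INTT pair in Python, included only to make explicit what such transform kernels compute; the modulus, size, and root of unity are toy parameters unrelated to the actual cuXCMP/Torus FHE GPU implementation.

```python
# Reference (quadratic-time) NTT/INTT pair over Z_q, to make explicit what
# the GPU kernels compute; q, n, and the root of unity are toy parameters.
Q = 257                      # prime with 256 = Q - 1, so 16-th roots of unity exist
N = 16

# Find an element of multiplicative order exactly N (a primitive N-th root of unity).
g = next(x for x in range(2, Q)
         if pow(x, N, Q) == 1 and pow(x, N // 2, Q) != 1)

def ntt(a: list[int]) -> list[int]:
    return [sum(a[j] * pow(g, i * j, Q) for j in range(N)) % Q for i in range(N)]

def intt(A: list[int]) -> list[int]:
    g_inv, n_inv = pow(g, -1, Q), pow(N, -1, Q)
    return [n_inv * sum(A[j] * pow(g_inv, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

a = list(range(N))
assert intt(ntt(a)) == a
print("NTT round trip ok")
```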
Hart Montgomery, Jiahui Liu, Mark Zhandry
ePrint Report
Public verification of quantum money has been one of the central objects in quantum cryptography ever since Wiesner's pioneering idea of using quantum mechanics to construct banknotes against counterfeiting. So far, we do not know any publicly-verifiable quantum money scheme that is provably secure from standard assumptions.

In this work, we provide both negative and positive results for publicly verifiable quantum money.

In the first part, we give a general theorem showing that a certain natural class of quantum money schemes from lattices cannot be secure. We use this theorem to break the recent quantum money scheme of Khesin, Lu, and Shor.

In the second part, we propose a framework for building quantum money and quantum lightning, which we call invariant money, that abstracts some of the ideas of quantum money from knots by Farhi et al. (ITCS'12). In addition to formalizing this framework, we provide concrete hard computational problems loosely inspired by classical knowledge-of-exponent assumptions, whose hardness would imply the security of quantum lightning, a strengthening of quantum money where not even the bank can duplicate banknotes.

We discuss potential instantiations of our framework, including an oracle construction using cryptographic group actions and instantiations from rerandomizable functional encryption, isogenies over elliptic curves, and knots.
Abel C. H. Chen
ePrint Report
To avoid attacks from quantum computing (QC), this study applies post-quantum cryptography (PQC) methods without hidden subgroups to the security of vehicular communications. Focusing on the mainstream PQC technologies (i.e., lattice-based cryptography methods), the standardized digital signature methods Dilithium and Falcon are discussed and compared. This study uses a queueing model to analyze the performance of these standard digital signature methods and to support selection decision-making.
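To illustrate the kind of queueing analysis mentioned (the paper's model and figures are not reproduced here), the sketch below evaluates standard M/M/1 formulas for two hypothetical signature-verification service rates; all rates are invented placeholders, not measurements of Dilithium or Falcon.

```python
# Hypothetical M/M/1 queueing sketch for comparing verification throughput of
# two signature schemes. The arrival and service rates below are invented
# placeholders, not measurements of Dilithium or Falcon.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Standard M/M/1 formulas: utilization rho, mean queue length L,
    mean time in system W (Little's law: L = arrival_rate * W)."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable queue: arrival rate >= service rate")
    L = rho / (1 - rho)
    W = 1 / (service_rate - arrival_rate)
    return {"rho": rho, "L": L, "W": W}

arrival = 800.0                                   # messages per second (assumed)
for name, service in [("scheme A", 1000.0), ("scheme B", 1500.0)]:
    m = mm1_metrics(arrival, service)
    print(f"{name}: utilization={m['rho']:.2f}, mean latency={m['W']*1000:.2f} ms")
```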
Chaya Ganesh, Yashvanth Kondi, Claudio Orlandi, Mahak Pancholi, Akira Takahashi, Daniel Tschudi
ePrint Report
Zero-knowledge Succinct Non-interactive ARguments of Knowledge (zkSNARKs) are becoming an increasingly fundamental tool in many real-world applications where proof compactness is of the utmost importance, including blockchains. A proof of security for SNARKs in the Universal Composability (UC) framework (Canetti, FOCS'01) would rule out devastating malleability attacks. To retain security of SNARKs in the UC model, one must show their simulation-extractability with a knowledge extractor that is both black-box and straight-line, which would imply that proofs generated by honest provers are non-malleable. However, existing simulation-extractability results on SNARKs either lack some of these properties or have to sacrifice witness succinctness to prove UC security.

In this paper, we provide a compiler lifting any simulation-extractable NIZKAoK into a UC-secure one in the global random oracle model while, importantly, preserving the same level of witness succinctness. Combining this with existing zkSNARKs, we achieve, to the best of our knowledge, the first zkSNARKs simultaneously achieving UC security and constant-sized proofs.
Naoki Shibayama, Yasutaka Igarashi
ePrint Report
RAGHAV is a 64-bit block cipher proposed by Bansod in 2021. It supports 80- and 128-bit secret keys. The designer evaluated its security against typical attacks, such as differential cryptanalysis, linear cryptanalysis, and so on. On the other hand, the security of RAGHAV against higher-order differential attacks, which belong to the class of algebraic attacks, has not been reported. In this paper, we apply higher-order differential cryptanalysis to RAGHAV. As a result, we find a new full-round higher-order characteristic of RAGHAV using a 1st-order differential. Exploiting this characteristic, we also show that full-round RAGHAV is vulnerable to a distinguishing attack with 2 chosen plaintexts.
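As a hedged illustration of the principle behind higher-order differential cryptanalysis (on a toy function, not RAGHAV): for a function of algebraic degree d, XOR-ing its outputs over any coset of a (d+1)-dimensional subspace gives zero, which yields a distinguisher whenever a cipher's output bits keep low degree.

```python
# Toy illustration of higher-order differentials: for a function of algebraic
# degree d, XOR-ing its outputs over all 2^(d+1) inputs of a coset x + V of a
# (d+1)-dimensional linear subspace V always gives 0. The function below is a
# toy of degree 2, not RAGHAV.
from itertools import combinations

def toy_f(x: int) -> int:           # degree-2 function on 8-bit inputs
    return (x & (x >> 1)) ^ (x >> 3) & 0xFF

def derivative_sum(f, x: int, basis: list[int]) -> int:
    """XOR of f over the coset x + span(basis)."""
    total = 0
    for r in range(len(basis) + 1):
        for subset in combinations(basis, r):
            offset = 0
            for b in subset:
                offset ^= b
            total ^= f(x ^ offset)
    return total

basis3 = [0x01, 0x02, 0x08]         # 3-dimensional subspace: order-3 derivative
print(all(derivative_sum(toy_f, x, basis3) == 0 for x in range(256)))   # True
```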
James Smith
ePrint Report
Sharing a secret efficiently amongst a group of participants is not easy, since there is always an adversary or eavesdropper trying to retrieve the secret. In secret sharing schemes, every participant is given a unique share. When the desired group of participants come together and provide their shares, the secret is obtained; for other combinations of shares, a garbage value is returned. A threshold secret sharing scheme was proposed independently by Shamir and Blakley. In such an $(n,t)$ threshold secret sharing scheme, the secret can be obtained when at least $t$ out of $n$ participants contribute their shares. This paper proposes a novel algorithm that reveals the secret only to the subsets of participants belonging to the access structure, implementing totally generalized ideal secret sharing. Unlike threshold secret sharing schemes, this scheme reveals the secret only to the authorized sets of participants, not to any arbitrary set of users with cardinality greater than or equal to $t$. Since any access structure can be realized with this scheme, it can be exploited to implement various access priorities and access control mechanisms. A major advantage of this scheme over existing ones is that the shares distributed to the participants are totally independent of the secret being shared. Hence, no restrictions are imposed on the scheme, and it finds wider use in real-world applications.
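Since the proposal is positioned against Shamir-style threshold sharing, a minimal Python sketch of Shamir's $(n,t)$ scheme over a prime field may help fix ideas; this is the baseline threshold scheme, not the generalized access-structure scheme proposed in the paper, and the parameters are toy values.

```python
# Minimal Shamir (n, t) threshold secret sharing over a prime field.
# This is the baseline threshold scheme the paper contrasts with, not the
# proposed generalized access-structure scheme. Toy parameters only.
import random

P = 2**61 - 1               # prime field modulus (toy choice)

def share(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 using any t shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, n=5, t=3)
print(reconstruct(shares[:3]) == 123456789)         # any 3 of the 5 shares suffice
print(reconstruct(random.sample(shares, 3)) == 123456789)
```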
James Smith
ePrint Report
The recent advent of cloud computing and IoT has made it imperative to store huge amounts of data in cloud servers. Enormous amounts of data are also stored in the servers of big organizations. In organizations, it is not desirable for every member to have equal privileges to access the stored data. Threshold secret sharing schemes are widely used for implementing such access control mechanisms. The access privileges of the members also vary from one data packet to another. When implementing such dynamic access structures using threshold secret sharing schemes, the key management becomes too complex to be implemented in real time. Furthermore, the storage of the users' access privileges requires exponential space (O($2^n$)) and its implementation requires exponential time. In this paper, the algorithms proposed to tackle the problems of priority-based access control and authentication require a space complexity of O($n^2$) and a time complexity of O($n$), where n is the number of users. In practical scenarios, such space can easily be provided on the servers and the algorithms can run smoothly in real time.
Shayan Hamidi Dehshali, Seyed Mahdi Hosseini, Soheil Zibakhsh Shabgahi, Behnam Bahrak
ePrint Report
Off-chain payment channels were introduced as one of the solutions to the blockchain scalability problem. The channels form a network, and parties have to lock funds to create them. A channel is expected to route only a limited number of transactions before it becomes unbalanced, i.e., all of its funds are devoted to one of the parties. Since an on-chain transaction is often necessary to establish, rebalance, or close a channel, the off-chain network is bounded by the throughput of the blockchain. In this paper, we propose a mathematical model that formulates this limitation on the throughput of an off-chain payment network. As a case study, we show the limitation of the Lightning Network in comparison with popular banking systems. Our results show that, theoretically, the throughput of the Lightning Network can reach the order of 10000 transactions per second, close to the average throughput of centralized banking systems.
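The flavor of such a bound can be conveyed with a hedged back-of-the-envelope calculation; every number below is an assumption chosen for illustration, not a figure from the paper.

```python
# Back-of-the-envelope bound on off-chain throughput. All numbers are
# illustrative assumptions, not the paper's measurements or results.
onchain_tps = 7.0                        # assumed base-layer throughput (tx/s)
onchain_tx_per_channel = 3.0             # assumed: open + occasional rebalance + close
channel_lifetime_s = 90 * 24 * 3600      # assumed average channel lifetime: 90 days
offchain_tx_per_channel_s = 0.001        # assumed payments routed per channel per second

# Channels the base layer can sustain in steady state, and the resulting
# aggregate off-chain throughput they can carry:
sustainable_channels = onchain_tps * channel_lifetime_s / onchain_tx_per_channel
offchain_tps = sustainable_channels * offchain_tx_per_channel_s
print(f"sustainable channels ~ {sustainable_channels:,.0f}")
print(f"off-chain throughput ~ {offchain_tps:,.0f} tx/s")
```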
Rainer Urian, Raphael Schermann
ePrint Report
Classic McEliece is a code-based encryption scheme and a candidate in the NIST post-quantum competition. Implementing Classic McEliece on smart card chips is a challenge, because those chips have only a very limited amount of RAM. Decryption is not an issue, because the cryptogram is short and the decryption algorithm can be implemented using very little RAM. Key generation, however, is a concern, because a large binary matrix must be inverted. In this paper, we show how key generation can be done on smart card chips with very little RAM. This is accomplished by modifying the key generation algorithm and splitting it into a security-critical part and a non-security-critical part. The security-critical part can be implemented on the smart card controller; the non-critical part contains the matrix inversion and is performed on a connected host.
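The memory-heavy step mentioned above is the inversion of a large binary matrix; as a functional reference of what the host has to compute, here is a straightforward Gaussian-elimination inverse over GF(2) in Python (dense, toy-sized matrices; a real host implementation would use packed words and Classic McEliece's actual dimensions).

```python
# Reference Gaussian-elimination inverse of a square matrix over GF(2).
# This is the operation offloaded to the host in the scheme above; a real
# implementation would pack rows into machine words and use the Classic
# McEliece dimensions. This is a toy-sized functional reference only.
import random

def gf2_inverse(m):
    n = len(m)
    # Augment with the identity matrix: [M | I].
    aug = [row[:] + [1 if i == j else 0 for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r][col]), None)
        if pivot is None:
            return None                      # singular matrix
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(n):
            if r != col and aug[r][col]:
                aug[r] = [a ^ b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def gf2_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] & b[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

n = 8
while True:
    m = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    inv = gf2_inverse(m)
    if inv is not None:
        break
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
print(gf2_mul(m, inv) == identity)           # True
```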
Laasya Bangalore, Rishabh Bhadauria, Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
ePrint Report
Zero-knowledge proofs allow a prover to convince a verifier of a statement without revealing anything besides its validity. A major bottleneck in scaling sub-linear zero-knowledge proofs is the high space requirement of the prover, even for NP relations that can be verified in a small space.

In this work, we ask whether there exist complexity-preserving (i.e., with minimal overhead in both time and space) succinct zero-knowledge arguments of knowledge under minimal assumptions that make only black-box use of the underlying primitives. We design the first such zero-knowledge system with sublinear communication complexity (when the underlying $\textsf{NP}$ relation uses non-trivial space) and provide evidence why existing techniques are unlikely to improve the communication complexity in this setting. Namely, for every NP relation that can be verified in time T and space S by a RAM program, we construct a public-coin zero-knowledge argument system that is black-box based on collision-resistant hash functions (CRH), where the prover runs in time $\widetilde{O}(T)$ and space $\widetilde{O}(S)$, the verifier runs in time $\widetilde{O}(T/S+S)$ and space $\widetilde{O}(1)$, and the communication is $\widetilde{O}(T/S)$, where $\widetilde{O}(\cdot)$ hides polynomial factors in $\log T$ and in the security parameter $\kappa$. As our construction is public-coin, we can apply the Fiat-Shamir heuristic to make it non-interactive with the same communication and computation complexities.

Furthermore, we give evidence that reducing the proof length below $\widetilde{O}(T/S)$ will be hard using existing symmetric-key based techniques, by arguing about the space complexity of constant-distance error-correcting codes.