IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

30 August 2024

Doreen Riepel, Marloes Venema, Tanya Verma
ePrint Report
Attribute-based encryption (ABE) is a powerful primitive that has found applications in important real-world settings requiring access control. Compared to traditional public-key encryption, ABE has established itself as a considerably more complex primitive that is additionally less efficient to implement. It is therefore paramount that we simplify the design of ABE schemes that are efficient, provide strong security guarantees, minimize the complexity of their descriptions, and support all practical features that are desirable for common real-world settings. One such practical feature that is currently still difficult to achieve is multi-authority support. Motivated by NIST's ongoing standardization efforts around multi-authority schemes, we put a specific focus on simplifying the support of multiple authorities in the design of schemes.

To this end, we present ISABELLA, a framework for constructing pairing-based ABE with advanced functionalities under strong security guarantees. At a high level, our approach builds on various works that systematically and generically construct ABE schemes by reducing the effort of proving security to a simpler yet powerful ''core'' called pair encodings. To support the amount of adaptivity required by multi-authority ABE, we devise a new approach to designing schemes from pair encodings, while still being able to benefit from the advantages that pair encodings provide. As a direct result of our framework, we obtain various improvements for existing (multi-authority) schemes as well as new schemes.
Benjamin E. Diamond, Angus Gruen
ePrint Report
A linear error-correcting code exhibits proximity gaps if each affine line of words either consists entirely of words which are close to the code or else contains almost no such words. In this short note, we prove that for each linear code which exhibits proximity gaps within the unique decoding radius, that code's interleaved code also does. Combining our result with an argument suggested to us by Angeris, Evans and Roh ('24), we extend those authors' sharpening of the tensor-based proximity gap of Diamond and Posen (Commun. Cryptol. '24) up to the unique decoding radius, at least in the Reed–Solomon setting.
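To make the dichotomy concrete, the following is a minimal, self-contained sketch (our illustration, not from the note; the field size, code dimension, and seed are toy choices) that measures how far each point of an affine line $u + t \cdot v$ lies from a small Reed–Solomon code.

```python
# Toy illustration of the proximity-gaps dichotomy for a small
# Reed-Solomon code; all parameters below are illustrative choices.
from itertools import product
import random

P = 11                                   # field GF(11)
XS = list(range(P))                      # evaluation points, n = 11
K = 2                                    # message polynomials of degree < K

def rs_codeword(coeffs):
    return tuple(sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
                 for x in XS)

CODE = [rs_codeword(c) for c in product(range(P), repeat=K)]

def rel_dist(word):
    """Relative Hamming distance from `word` to the nearest codeword."""
    return min(sum(a != b for a, b in zip(word, cw)) for cw in CODE) / len(XS)

random.seed(1)
u, v = random.choice(CODE), random.choice(CODE)
line = [tuple((a + t * b) % P for a, b in zip(u, v)) for t in range(P)]
print([rel_dist(w) for w in line])       # all 0.0: the whole line is close

u = tuple(random.randrange(P) for _ in XS)
v = tuple(random.randrange(P) for _ in XS)
line = [tuple((a + t * b) % P for a, b in zip(u, v)) for t in range(P)]
print([rel_dist(w) for w in line])       # typically far at every point t
```

A line through two codewords stays entirely inside the linear code, while a line through random words is typically far from the code at every point; a proximity gap asserts that no affine line can sit in between, containing both many close points and many far ones.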
Lukasz Chmielewski, Lubomír Hrbáček
ePrint Report
This short note describes an update to the sca25519 library, an ECC implementation computing the X25519 key-exchange protocol on the Arm Cortex-M4 microcontroller. The sca25519 software came with extensive mitigations against various side-channel and fault attacks and was, to the best of our knowledge, the first to claim affordable protection against multiple classes of attacks that are motivated by distinct real-world application scenarios.

This library is protected against various passive and active side-channel threats. However, both classes of attacks were considered separately, i.e., combining the attacks was considered out of scope, because successfully executing such a combined attack would require a very powerful adversary (e.g., a very well-equipped security laboratory). Protection against such powerful adversaries is considered infeasible without dedicated protected hardware, with which the Arm Cortex-M4 is not equipped.

However, there exists a particular class of easy and cheap active attacks, called tearing, that is well known in the smartcard context. In this note, we extend the scope of the library to also consider a combination of tearing and side-channel attacks, and we show how to mitigate such a combination with a small code update. The update does not affect the efficiency of the library.
Zhuang Shan, Leyou Zhang, Qing Wu, Qiqi Lai, Fuchun Guo
ePrint Report
Private set intersection (PSI) and private information retrieval (PIR) are important areas of research in privacy protection technology. One of the key tools for both is the oblivious pseudorandom function (OPRF). Currently, existing oblivious pseudorandom functions either focus solely on efficiency without considering quantum attacks, or are too complex, resulting in low efficiency. The aim of this paper is to achieve a balance: to ensure that the oblivious pseudorandom function can withstand quantum attacks while simplifying its structure as much as possible. This paper constructs an efficient oblivious pseudorandom function based on an ideal-lattice hardness assumption and the oblivious transfer (OT) technique of Chase and Miao (CRYPTO 2020), and uses it to construct PSI and PIR protocols.
Michael Brand, Benoît Poletti
ePrint Report
We describe designs for an electronic wallet, meant to house official government documents, that solve the problem of displaying document data to untrusted parties (e.g., in order to allow users to prove that they are above the drinking age). The wallet attains this goal by employing Zero-Knowledge Proof technologies, ascertaining that nothing beyond the intended information is ever shared. To be practically applicable, the wallet has to meet many additional constraints, such as being usable in offline scenarios, employing only widely-accessible communication methods which, themselves, must not impinge on the user's privacy, and being constructed solely over standard, widely-studied cryptographic algorithms offering appropriately high levels of cryptographic security. We explain how our design was able to successfully meet all such additional constraints.
Shuaishuai Li, Cong Zhang, Dongdai Lin
ePrint Report
Secure multiparty computation (MPC) protocols enable $n$ parties, each with private inputs, to compute a given function without leaking information beyond the outputs. One of the main approaches to designing efficient MPC protocols is to use secret sharing. In general, secret-sharing-based MPC contains three phases: input sharing, circuit evaluation, and output recovery. If the adversary corrupts at most $t$ parties, the protocol typically uses $(t,n)$ threshold secret sharing to share the inputs. In this work, we consider a weaker variant of threshold secret sharing called lazy threshold secret sharing (or simply lazy sharing) and show that:
- Lazy sharing can serve as a viable alternative to threshold secret sharing in MPC without compromising security.
- Lazy sharing can be generated more efficiently than threshold secret sharing, so replacing threshold secret sharing with lazy sharing leads to a more efficient input sharing phase.
Moreover, the efficiency of the circuit evaluation phase can also be further improved. To support these claims, we apply lazy sharing to several state-of-the-art MPC protocols, namely the GMW protocol (Goldreich et al., STOC 1987), the AFLNO protocol (Araki et al., CCS 2016), and the SPDZ protocol (Damgård et al., CRYPTO 2012), analyze the efficiency gains in various settings, and highlight the advantages of incorporating lazy sharing into MPC protocols.
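For context, here is a minimal sketch of the standard $(t,n)$ Shamir threshold sharing that the input sharing phase described above typically uses; lazy sharing itself is not specified in the abstract, so this shows only the baseline being replaced, and the prime and parameters are illustrative.

```python
# Minimal (t,n) Shamir threshold secret sharing over a prime field,
# the standard input-sharing baseline mentioned above (illustrative only).
import random

P = 2**61 - 1  # a Mersenne prime, chosen here for convenience

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=2, n=5)          # tolerates up to t = 2 corrupted parties
assert reconstruct(shares[:3]) == 42  # any t + 1 = 3 shares recover the secret
```

Generating such a share vector costs one degree-$t$ polynomial evaluation per party; the abstract's claim is that lazy sharing reduces this input-sharing cost without weakening security.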
Arghya Bhattacharjee, Ritam Bhaumik, Daniel Collins, Mridul Nandi
ePrint Report
In this work, we examine online authenticated encryption with variable expansion. We follow a notion where both encryption and decryption are online, and security is ensured in the RUP (Release of Unverified Plaintext) setting. We then propose a generic way of obtaining an online authenticated encryption mode from a tweakable online encryption mode based on the encode-then-encipher paradigm (Bellare and Rogaway, Asiacrypt 2000). To instantiate our generic scheme, we first propose a provably secure tweakable online encryption mode called t-OleF, a tweakable version of OleF (Bhaumik and Nandi, ToSC 2016(2)), and then plug it into our generic scheme to obtain OlÆF, a provably secure online authenticated encryption mode. As an application, we propose a primitive we call a bidirectional online channel, suited for communication between lightweight devices.
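For readers unfamiliar with the paradigm, below is a toy sketch of encode-then-encipher in the sense of Bellare and Rogaway: append redundancy, apply a length-preserving keyed (tweakable) permutation, and reject at decryption if the redundancy does not come back intact. The HMAC-based Feistel network is only a stand-in permutation for illustration; it is not t-OleF or OlÆF, and the parameter names are our own.

```python
# Sketch of the encode-then-encipher paradigm: append TAU redundancy
# bytes, encipher with a length-preserving keyed permutation, and check
# the redundancy on decryption. The 4-round HMAC Feistel is a placeholder
# permutation for illustration only, not the paper's t-OleF.
import hmac, hashlib

TAU = 16  # redundancy bytes; a forgery must reproduce them to be accepted

def _f(key, tweak, i, data, outlen):
    out, ctr = b"", 0
    while len(out) < outlen:
        out += hmac.new(key, bytes([i]) + tweak + ctr.to_bytes(4, "big") + data,
                        hashlib.sha256).digest()
        ctr += 1
    return out[:outlen]

def _encipher(key, tweak, x, inverse=False):
    L, R = x[:len(x) // 2], x[len(x) // 2:]
    for i in (reversed(range(4)) if inverse else range(4)):
        if inverse:
            L, R = bytes(a ^ b for a, b in zip(R, _f(key, tweak, i, L, len(R)))), L
        else:
            L, R = R, bytes(a ^ b for a, b in zip(L, _f(key, tweak, i, R, len(L))))
    return L + R

def encrypt(key, tweak, msg):
    return _encipher(key, tweak, msg + b"\x00" * TAU)   # encode, then encipher

def decrypt(key, tweak, ct):
    x = _encipher(key, tweak, ct, inverse=True)
    if x[-TAU:] != b"\x00" * TAU:
        return None                                     # redundancy check failed
    return x[:-TAU]

key, tweak = b"k" * 32, b"example-tweak"
assert decrypt(key, tweak, encrypt(key, tweak, b"hello")) == b"hello"
```

Authenticity comes solely from the redundancy check: a forged ciphertext deciphers to an essentially random string, which passes the check with probability about $2^{-128}$ for the 16 redundancy bytes used here.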
Maximilian Pursche, Nikolai Puch, Sebastian N. Peters, Michael P. Heinl
ePrint Report
Embedded systems are flexible and cost-effective and thus have found a use case in almost every part of our daily lives. Due to their widespread use, they have also become valuable targets for cyber attacks. However, translating cutting-edge cyber security from servers and desktops to the embedded realm can be challenging due to the limited computational power and memory of embedded devices. Although quantum computing is still in early research and development, it threatens to break conventional asymmetric cryptography, which is a key component of most secure applications currently in use. Given the long lifespan of embedded devices, which can last for decades, research must find solutions for post-quantum (PQ) security sooner rather than later. The field of post-quantum cryptography (PQC) received significant attention in 2019 when the National Institute of Standards and Technology (NIST) launched a competition to find suitable PQC algorithms. During the PQC competition, the applicability of novel PQC algorithms to embedded devices was an important topic that garnered significant research interest. We provide a survey of the latest research regarding PQC for embedded systems. However, rather than focusing on PQC algorithms, our study revolves around practical use cases intending to help embedded developers understand the current state of research from an integration perspective.
Shaoquan Jiang
ePrint Report
With the rapid advance in quantum computing, quantum security is now an indispensable property for any cryptographic system. In this paper, we study how to prove the security of a complex cryptographic system in the quantum random oracle model. We first give a variant of Zhandry's compressed quantum random oracle ($\mathbf{CStO}$), called compressed quantum random oracle with adaptive special points ($\mathbf{CStO}_s$). Then, we extend the on-line extraction technique of Don et al. (EUROCRYPT 2022) from $\mathbf{CStO}$ to $\mathbf{CStO}_s$. We also extend the random experiment technique of Liu and Zhandry (CRYPTO 2019) for extracting the $\mathbf{CStO}$ query that witnesses the future adversarial output. With these preparations, a systematic security proof in the quantum random oracle model can start with a random $\mathbf{CStO}$ experiment (that extracts the witness for the future adversarial output) and then convert this game to one involving $\mathbf{CStO}_s$. Next, the on-line extraction technique for $\mathbf{CStO}_s$ can be applied to extract the witness for any on-line commitment. With this strategy, we give a security proof of our recent compact multi-signature framework that is converted from any weakly secure linear ID scheme. We also prove the quantum security of our recent lattice realization of this linear ID scheme, by iteratively applying the weakly collapsing protocol technique of Liu and Zhandry (CRYPTO 2019). Combining these two results, we obtain the first quantum security proof for a compact multi-signature.
Hua-Lei Yin
ePrint Report
One-way functions are fundamental to classical cryptography, and their existence remains a longstanding problem in computational complexity theory. Recently, a provable quantum one-way function has been identified, which maintains its one-wayness even with unlimited computational resources. Here, we extend the mathematical definition of functions to construct a generalized one-way function by virtually measuring the qubit of the provable quantum one-way function and randomly assigning the corresponding measurement outcomes with identical probability. Remarkably, using this generalized one-way function, we have developed an unconditionally secure key distribution protocol based solely on classical data processing, which can then be utilized for secure encryption and signatures. Our work highlights the importance of information in characterizing quantum systems and the physical significance of the density matrix. We demonstrate that probability theory and randomness are effective tools for countering adversaries with unlimited computational capabilities.
Hua-Lei Yin
ePrint Report
Key distribution plays a fundamental role in cryptography. Currently, the quantum scheme stands as the only known method for achieving unconditionally secure key distribution. This method has been demonstrated over distances of 508 and 1002 kilometers in the measurement-device-independent and twin-field configurations, respectively. However, quantum key distribution faces transmission-distance issues and numerous side-channel attacks, since the basic physical picture requires the use of quantum channels between users. Even when quantum repeaters and quantum constellations are used, commercializing quantum cryptography on a large scale remains unattainable due to the considerable expense and significant technical hurdles associated with establishing a global quantum network and facilitating mobile quantum communication. Here, by discovering the provable quantum one-way function, we propose another key distribution scheme with unconditional security, named probability key distribution, that promises users at any distance a fixed and high secret key rate. There are no quantum channels for exchanging quantum signals between the two legitimate users. Non-local entangled states can be generated, identified, and measured in the equivalent virtual protocol and can be used to extract secret keys. We anticipate that this discovery presents a paradigm shift in achieving unconditionally secure cryptography, thereby facilitating its widespread application on a global scale.
Pascal Hammer, Veronika Krause, Tobias Probst, Jürgen Mottok
ePrint Report
In times of digitalization, the encryption and signing of sensitive data is becoming increasingly important. These cryptographic processes require large quantities of high-quality random numbers, which is why a high-performance random number generator (RNG) is to be developed. For this purpose, existing concepts of RNGs and application standards are first analyzed. The proposed approach is to design a physical true random number generator (PTRNG) with a high output of random numbers. Based on this, the development begins with the analog part of the RNG: the noise signal source and a suitable amplifier for the analog noise signal. To this end, a special noise diode from Noisecom and an amplifier from NXP were chosen and analyzed in different measurements. From the results of the measurements, it can be concluded that both components are suitable for use in the RNG.
Cong Zhang, Yu Chen, Weiran Liu, Liqiang Peng, Meng Hao, Anyu Wang, Xiaoyun Wang
ePrint Report
Private set union (PSU) is a cryptographic protocol that allows two parties to compute the union of their sets without revealing anything else. Although several efficient PSU protocols have been proposed, they mainly focus on the balanced setting, where the sets held by the parties are of similar size. Recently, Tu et al. (CCS 2023) proposed the first unbalanced PSU protocol, which achieves communication complexity sublinear in the size of the larger set.

In this paper, we are interested in improving the efficiency of the unbalanced PSU protocol. We find that the oblivious key-value store (OKVS) data structure plays an essential role in the most recently proposed PSU constructions, and we formalize unbalanced PSU as an OKVS decoding process with sublinear communication. Our key insight is that when the OKVS satisfies the sparsity property, obtaining the necessary decoding information aligns precisely with the batch private information retrieval (BatchPIR) problem. We give two concrete constructions of unbalanced PSU protocols based on different OKVS encoding strategies. The first is based on an oblivious PRF (OPRF) and a newly introduced cryptographic protocol called permuted private equality test, while the second is based on re-randomizable public-key encryption. Both of our constructions achieve communication complexity sublinear in the size of the larger set.
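To illustrate the sparsity property this reduction relies on, here is a toy XOR-based OKVS sketch (our own illustration with arbitrary parameters, not the construction used in the paper): every key hashes to three table positions, so decoding a key touches only those three entries, which is exactly the access pattern a BatchPIR query can fetch.

```python
# Toy XOR-based OKVS: decode(key) XORs only the 3 table positions that
# key hashes to, which is what makes private decoding via BatchPIR natural.
import hashlib

M = 64  # table size; should comfortably exceed the number of pairs

def positions(key, m=M):
    h = hashlib.sha256(key).digest()
    return {int.from_bytes(h[4 * i:4 * i + 4], "big") % m for i in range(3)}

def encode(pairs, m=M):
    """Fill a table so that decode(key) = value for every (key, value)."""
    table = [0] * m
    solved = []  # rows in echelon form: (pivot position, other positions, value)
    for key, val in pairs:
        pos = positions(key, m)
        for piv, rest, rval in solved:      # eliminate already-chosen pivots
            if piv in pos:
                pos ^= rest | {piv}
                val ^= rval
        if not pos:
            raise ValueError("encoding failed; retry with a larger table")
        piv = pos.pop()
        solved.append((piv, pos, val))
    for piv, rest, val in reversed(solved):  # back-substitution, free slots stay 0
        table[piv] = val
        for p in rest:
            table[piv] ^= table[p]
    return table

def decode(table, key):
    """Sparse decode: touches only the (at most 3) positions of this key."""
    out = 0
    for p in positions(key):
        out ^= table[p]
    return out

pairs = [(b"alice", 0xAAAA), (b"bob", 0xB0B), (b"carol", 0xCC)]
table = encode(pairs)
assert all(decode(table, k) == v for k, v in pairs)
```

Roughly, this is why obtaining the decoding information for the smaller party's keys can be cast as a BatchPIR query into the larger party's encoded table, as the abstract describes.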

We implement our two unbalanced PSU protocols and compare them with the state-of-the-art unbalanced PSU of Tu et al. Experiments show that our protocols achieve a $1.3$-$5.6\times$ speedup in running time and a $2.1$-$11.8\times$ reduction in communication cost, depending on set sizes and network environments.
Akiko Inoue, Tetsu Iwata, Kazuhiko Minematsu
ePrint Report
Clarifying the robustness of authenticated encryption (AE) schemes, such as security under nonce misuse or Release of Unverified Plaintext (RUP), is critically important due to the extensive use of AEs in real-world applications. We present a comprehensive analysis of the robustness of well-known standards, namely GCM, CCM, and OCB3. Despite many existing studies, we uncovered several robustness properties for them that were not known in the literature. In particular, we show that both GCM and CCM maintain authenticity under RUP. Moreover, CCM keeps this feature even if a nonce is misused. Together with existing analysis, our work gives a complete picture of the robustness of these standards for the first time. Our results also imply several new robust AE schemes based on GCM and CCM.
Anqi Tian, Peifang Ni, Yingzi Gao, Jing Xu
ePrint Report
Payment Channel Networks (PCNs) have been highlighted as viable solutions to address the scalability issues in current permissionless blockchains. They facilitate off-chain transactions, significantly reducing the load on the blockchain. However, the extensive reuse of multi-hop routes in the same direction poses a risk of channel depletion, resulting in involved channels becoming unidirectional or even closing, thereby compromising the sustainability and scalability of PCNs. Even more concerning, existing rebalancing protocol solutions heavily rely on trust assumptions and scripting languages, resulting in compromised universality and reliability.

In this paper, we present Horcrux, a universal and efficient multi-party virtual channel protocol without relying on extra trust assumptions, scripting languages, or the perpetual online requirement. Horcrux fundamentally addresses the channel depletion problem using a novel approach termed flow neutrality, which minimizes the impact on channel balance allocations during multi-hop payments (MHPs). Additionally, we formalize the security properties of Horcrux by modeling it within the Global Universal Composability framework and provide a formal security proof.

We implement Horcrux on a real Lightning Network dataset, comprising 10,529 nodes and 38,910 channels, and compare it to state-of-the-art rebalancing schemes such as Shaduf [NDSS'22], Thora [CCS'22], and Revive [CCS'17]. The experimental results demonstrate that (1) the entire process of Horcrux costs less than 1 USD, significantly lower than Shaduf; (2) Horcrux achieves a $12\%$-$30\%$ increase in payment success ratio and reduces user deposits required for channels by $70\%$-$91\%$; (3) the performance of Horcrux improves by $1.2\times$-$1.5\times$ under long-term operation; and (4) Horcrux maintains a nearly zero channel depletion rate, whereas both Revive and Shaduf result in thousands of depleted channels.
Juan Carlos Ku-Cauich, Javier Diaz-Vargas
ePrint Report
We use the extended Maiorana-McFarland construction to create new bent functions. Starting from a bent function of dimension $s-r$, we can produce a new function of dimension $s+r$ while ensuring that its balance is limited to the set of vectors of even Hamming weight in its domain. We also compare this approach with the case where $r=1$ is applied multiple times. Finally, we provide an algorithm as an example, focusing on the case $r=2$, and another algorithm applying $r=1$ two times.
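As background, the classical Maiorana-McFarland construction that the extended variant builds on takes $f(x,y) = \langle x, \pi(y)\rangle \oplus g(y)$ on $2m$ variables, which is bent for any permutation $\pi$ of $\mathbb{F}_2^m$ and any Boolean function $g$. Below is a brute-force Walsh-spectrum check of this classical fact for $m=2$ (our illustration; the particular $\pi$ and $g$ are arbitrary choices).

```python
# Brute-force bentness check of the classical Maiorana-McFarland
# construction f(x, y) = <x, pi(y)> XOR g(y) for m = 2 (illustrative only).
from itertools import product

m = 2
pi = {0: 2, 1: 0, 2: 3, 3: 1}        # a permutation of {0, ..., 2^m - 1}
g  = {0: 0, 1: 1, 2: 1, 3: 0}        # an arbitrary Boolean function of y

def inner(a, b):                     # inner product of bit vectors (as ints)
    return bin(a & b).count("1") % 2

def f(x, y):
    return inner(x, pi[y]) ^ g[y]

def walsh(a, b):                     # Walsh coefficient at (a, b)
    return sum((-1) ** (f(x, y) ^ inner(a, x) ^ inner(b, y))
               for x, y in product(range(2 ** m), repeat=2))

# f is bent iff every Walsh coefficient has absolute value 2^m = 4.
assert all(abs(walsh(a, b)) == 2 ** m
           for a, b in product(range(2 ** m), repeat=2))
print("f(x, y) = <x, pi(y)> + g(y) is bent on", 2 * m, "variables")
```

Bentness is certified by every Walsh coefficient having absolute value exactly $2^m$, the flattest spectrum possible for a Boolean function on $2m$ variables.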
Debrup Chakraborty, Sebati Ghosh, Cuauhtemoc Mancillas Lopez, Palash Sarkar
ePrint Report
A fixed-length tweakable enciphering scheme (TES) is the appropriate cryptographic functionality for low-level disk encryption. Research on TES over the last two decades has led to a number of proposals, many of which have already been implemented using FPGAs. This paper considers the FPGA implementations of two more recent and promising TESs, namely AEZ and FAST. The relevant architectures are described, and simulation results on the Xilinx Virtex 5 and Virtex 7 FPGAs are presented. For comparison, two IEEE standard schemes, XCB and EME2, are considered. The results indicate that FAST outperforms the other schemes, making it a serious candidate for future incorporation by disk manufacturers and standardisation bodies.
Kai Hu, Trevor Yap
ePrint Report
Modular addition is often the most complex component of typical Addition-Rotation-XOR (ARX) ciphers, and the division property is the most effective tool for detecting integral distinguishers. Thus, having a precise division property model for modular addition is crucial in the search for integral distinguishers in ARX ciphers.

Current division property models for modular addition either (a) express the operation as a Boolean circuit and apply standard propagation rules for basic operations (COPY, XOR, AND), or (b) treat it as a sequence of smaller functions with carry bits, modeling each function individually. Both approaches were originally proposed for the two-subset bit-based division property (2BDP), which is theoretically imprecise and may overlook some balanced bits.

Recently, more precise versions of the division property, such as parity sets, three-subset bit-based division property without unknown subsets (3BDPwoU) or monomial prediction (MP), and algebraic transition matrices have been proposed. However, little attention has been given to modular addition within these precise models.

The propagation rule for the precise division property of a vectorial Boolean function $\boldsymbol{f}$ requires that $\boldsymbol{u}$ can propagate to $\boldsymbol{v}$ if and only if the monomial $\pi_{\boldsymbol{u}}(\boldsymbol{x})$ appears in $\pi_{\boldsymbol{v}}(\boldsymbol{f})$. Braeken and Semaev (FSE 2005) studied the algebraic structure of modular addition and showed that for $\boldsymbol{x} \boxplus \boldsymbol{y} = \boldsymbol{z}$, the monomial $\pi_{\boldsymbol{u}}(\boldsymbol{x})\pi_{\boldsymbol{v}}(\boldsymbol{y})$ appears in $\pi_{\boldsymbol{w}}(\boldsymbol{z})$ if and only if $\boldsymbol{u} + \boldsymbol{v} = \boldsymbol{w}$. Their theorem directly leads to a precise division property model for modular addition. Surprisingly, this model has not been applied in division property searches, to the best of our knowledge.
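To make the quoted statement concrete, here is a short exhaustive check (our illustration, not code from the paper) for $n$-bit modular addition: the ANF of $\pi_{\boldsymbol{w}}(\boldsymbol{z})$ with $\boldsymbol{z} = \boldsymbol{x} \boxplus \boldsymbol{y}$ contains the monomial $\pi_{\boldsymbol{u}}(\boldsymbol{x})\pi_{\boldsymbol{v}}(\boldsymbol{y})$ exactly when $u + v = w$ over the integers.

```python
# Brute-force check of the Braeken-Semaev condition for n-bit modular
# addition z = (x + y) mod 2^n: pi_u(x)*pi_v(y) occurs in pi_w(z)
# if and only if u + v = w as integers. Illustrative sketch only.
from itertools import product

n = 3
MASK = (1 << n) - 1

def covered(a, b):
    # True iff every bit set in a is also set in b
    return (a & b) == a

def pi(w, z):
    # pi_w(z): product of the bits of z selected by w
    return int((z & w) == w)

def anf_coeff(u, v, w):
    # Moebius transform: ANF coefficient of x^u * y^v in pi_w(x + y mod 2^n)
    acc = 0
    for x, y in product(range(1 << n), repeat=2):
        if covered(x, u) and covered(y, v):
            acc ^= pi(w, (x + y) & MASK)
    return acc

for u, v, w in product(range(1 << n), repeat=3):
    assert anf_coeff(u, v, w) == (1 if u + v == w else 0)
print("Braeken-Semaev condition verified exhaustively for n =", n)
```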

In this paper, we apply Braeken and Semaev's theorem to search for integral distinguishers in ARX ciphers, leading to several new results. First, we improve the state-of-the-art integral distinguishers for all variants of the Speck family, significantly enhancing search efficiency for Speck-32/48/64/96 and detecting new integral distinguishers for Speck-48/64/96/128. Second, we determine the exact degrees of output bits for $7$-round Speck-$32$ and all/16/2 output bits for $2/3/4$-round Alzette for the first time. Third, we revisit the choice of rotation parameters in Speck instances, providing a criterion that enhances resistance against integral distinguishers. Additionally, we offer a simpler proof for Braeken and Semaev's theorem using monomial prediction, demonstrating the potential of division property methods in the study of Boolean functions.

We hope that the proposed methods will be valuable in the future design of ARX ciphers.
George Teseleanu
ePrint Report
In 2023, Mfungo et al. presented an image encryption scheme that relies on a series of diffusion techniques and uses a chaotic map to generate three secret keys. Note that two out of the three keys are dynamically generated based on the size of the original image, while the remaining key is static. The authors claim that their proposal offers $149$ bits of security. Unfortunately, we found a chosen-plaintext attack that requires $2$ oracle queries and has a worst-case complexity of $\mathcal O(2^{32})$. If the attacker has access to $1$ encryption oracle query and $1$ decryption oracle query, we can lower the complexity to $\mathcal O(2^{18.58})$. Encrypting an image with Mfungo et al.'s scheme has a worst-case complexity of $\mathcal O(2^{33})$. Therefore, both our attacks are faster than encrypting an image. Our attacks are feasible because the dynamic keys remain unchanged for different plaintext images of the same size, and the static key remains the same for all images.
Yan Jiang, Youwen Zhu, Jian Wang, Yudi Zhang
ePrint Report
Identity-based threshold signature (IDTS) enables the generation of valid signatures without revealing the cryptographic keys in the signing process. While current protocols have achieved much progress in their efficiency, many schemes are susceptible to denial-of-service attacks in which misbehaving parties can prevent signature generation without being caught. The identifiable abort property is designed to withstand such attacks in some recent IDTS protocols. However, all these schemes either require many rounds of interaction to produce the resulting signature or rely on cryptographic techniques of high complexity. In this study, we put forward a novel IDTS protocol that achieves identifiable abort and resists arbitrary collusion attacks. Precisely, it ensures that corrupted parties are held accountable in case of failure and cannot jointly obtain the inputs of honest parties. Moreover, we present the ideal IDTS functionality and prove the security of the proposed protocol in the global random oracle model. Our scheme has non-interactive signing, compatibility with offline/online settings, and practical efficiency for the online phase. Finally, we give theoretical analyses and experimental results of our solution, showing that the signing time is less than two milliseconds and that the scheme is suitable for resource-constrained settings.