International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

18 September 2024

Alexander Russell, Qiang Tang, Jiadong Zhu
ePrint Report
The Feistel construction is a fundamental technique for building pseudorandom permutations and block ciphers. This paper shows that a simple adaptation of the construction is resistant, even to algorithm substitution attacks---that is, adversarial subversion---of the component round functions. Specifically, we establish that a Feistel-based construction with more than $337n/\log(1/\epsilon)$ rounds can transform a subverted random function---which disagrees with the original one at a small fraction (denoted by $\epsilon$) of inputs---into an object that is \emph{crooked-indifferentiable} from a random permutation, even if the adversary is aware of all the randomness used in the transformation. Here, $n$ denotes the length of both the input and output of the round functions that underlie the Feistel cipher. We also provide a lower bound showing that the construction cannot use fewer than $2n/\log(1/\epsilon)$ rounds to achieve crooked-indifferentiable security.
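For readers unfamiliar with the underlying object, the sketch below shows a plain Feistel network in Python: each round maps $(L, R)$ to $(R, L \oplus F_i(R))$, so the result is invertible no matter what the round functions are. It is only an illustration of the standard construction the paper adapts, with a toy hash-based round function and arbitrary parameters; it does not implement the paper's subversion-resistant variant or its round count.

```python
# Illustrative sketch (not the paper's construction): a standard Feistel network
# turning n-bit-to-n-bit round functions into a permutation on 2n-bit inputs.
import hashlib

N_BITS = 64                     # n: round-function input/output length (assumption)
MASK = (1 << N_BITS) - 1

def round_function(round_key: bytes, x: int) -> int:
    """Toy public round function built from a hash; stands in for a random function."""
    digest = hashlib.sha256(round_key + x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big") & MASK

def feistel_encrypt(round_keys, left: int, right: int):
    for rk in round_keys:
        left, right = right, left ^ round_function(rk, right)
    return left, right

def feistel_decrypt(round_keys, left: int, right: int):
    for rk in reversed(round_keys):
        left, right = right ^ round_function(rk, left), left
    return left, right

if __name__ == "__main__":
    keys = [bytes([i]) * 4 for i in range(8)]       # 8 rounds, toy round keys
    l, r = 0x0123456789ABCDEF, 0xFEDCBA9876543210
    ct = feistel_encrypt(keys, l, r)
    assert feistel_decrypt(keys, *ct) == (l, r)     # a Feistel network is always invertible
```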
Yanqi Gu, Stanislaw Jarecki, Pawel Kedzior, Phillip Nazarian, Jiayu Xu
ePrint Report
We revisit the notion of threshold Password-Authenticated Key Exchange (tPAKE), and we extend it to augmented tPAKE (atPAKE), which protects password information even in the case that all servers are compromised, except for allowing an (inevitable) offline dictionary attack. Compared to prior notions of tPAKE this is analogous to replacing symmetric PAKE, where the server stores the user's password, with an augmented (or asymmetric) PAKE, like OPAQUE [JKX18], where the server stores a password hash, which can be used only as a target in an offline dictionary search for the password. An atPAKE scheme also strictly improves on the security of an aPAKE, by secret-sharing the password hash among a set of servers. Indeed, our atPAKE protocol is a natural realization of threshold OPAQUE.

We formalize atPAKE in the framework of Universal Composability (UC), and show practical ways to realize it. All our schemes are generic compositions which interface to any aPAKE used as a sub-protocol, making them easier to adopt. Our main scheme relies on threshold Oblivious Pseudorandom Function (tOPRF), and our independent contribution fixes a flaw in the UC tOPRF notion of [JKKX17] and upgrades the tOPRF scheme therein to achieve the fixed definition while preserving its minimal cost and round complexity. The technique we use enforces implicit agreement on arbitrary context information within threshold computation, and it is of general interest.
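As background, the following toy sketch shows the 2HashDH-style threshold OPRF flow such constructions build on: the client blinds a hash of the password, each server exponentiates with its Shamir share of the key, and the client combines the responses with Lagrange coefficients in the exponent before unblinding. The group size and all parameters are illustrative only, and this is not the (fixed) UC tOPRF protocol of the paper.

```python
# Minimal threshold OPRF sketch (2HashDH style). Demo-sized group, NOT secure.
import hashlib
import secrets

P = 1019                 # toy safe prime, P = 2*Q + 1 (demo only, far too small)
Q = 509                  # order of the quadratic-residue subgroup

def hash_to_group(data: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % P
    return pow(h, 2, P)  # squaring lands in the order-Q subgroup

def shamir_share(secret: int, t: int, n: int):
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, Q) for i, c in enumerate(coeffs)) % Q)
            for x in range(1, n + 1)]

def lagrange_at_zero(xs, i):
    num, den = 1, 1
    for j in xs:
        if j != i:
            num = num * j % Q
            den = den * (j - i) % Q
    return num * pow(den, -1, Q) % Q

def toprf_eval(password: bytes, shares, threshold: int):
    r = secrets.randbelow(Q - 1) + 1
    blinded = pow(hash_to_group(password), r, P)               # client blinds H1(pw)
    subset = shares[:threshold]
    responses = [(x, pow(blinded, k_i, P)) for x, k_i in subset]  # servers respond
    xs = [x for x, _ in subset]
    combined = 1
    for x, part in responses:                                  # Lagrange in the exponent
        combined = combined * pow(part, lagrange_at_zero(xs, x), P) % P
    unblinded = pow(combined, pow(r, -1, Q), P)                # remove the blinding
    return hashlib.sha256(password + unblinded.to_bytes(2, "big")).hexdigest()

if __name__ == "__main__":
    key = secrets.randbelow(Q)
    shares = shamir_share(key, t=2, n=3)
    # Any threshold-sized subset yields the same PRF output:
    assert toprf_eval(b"hunter2", shares, 2) == toprf_eval(b"hunter2", shares[1:], 2)
```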
Alexander Bienstock, Yevgeniy Dodis, Paul Rösler, Daniel Wichs
ePrint Report
Forward-Secure Key-Encapsulation Mechanism (FS-KEM; Canetti et al. Eurocrypt 2003) allows Alice to encapsulate a key $k$ to Bob for some time $t$ such that Bob can decapsulate it at any time $t'\leq t$. Crucially, a corruption of Bob's secret key after time $t$ does not reveal $k$.

In this work, we generalize and extend this idea by also taking Post-Compromise Security (PCS) into account and call it Interval Key-Encapsulation Mechanism (IKEM). Thus, we not only protect the confidentiality of previous keys against future corruptions but also the confidentiality of future keys against past corruptions. For this, Bob can regularly renew his secret key and inform others about the corresponding public key. IKEM enables Bob to decapsulate keys sent to him over an interval of time extending into the past, in case senders have not obtained his latest public key; forward security only needs to hold with respect to keys encapsulated before this interval. This basic IKEM variant can be instantiated based on standard KEM, which we prove to be optimal in terms of assumptions as well as ciphertext and key sizes.
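A minimal sketch of the basic idea, assuming only a generic KEM (here a toy hashed-Diffie-Hellman KEM over a tiny group): the receiver keeps one KEM key pair per epoch inside a sliding interval, evicts keys that leave the interval (forward security), and adds a fresh pair on every renewal (post-compromise security). This illustrates the interval mechanics only; it is not the paper's construction nor its IKEMR/Interval-RAM extensions.

```python
import hashlib
import secrets

# --- toy KEM (hashed Diffie-Hellman in a tiny group; illustration only) --------
P, G = 1019, 4            # toy prime and a generator of its quadratic residues

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encaps(pk):
    r = secrets.randbelow(P - 2) + 1
    shared = pow(pk, r, P)
    return pow(G, r, P), hashlib.sha256(shared.to_bytes(2, "big")).digest()

def kem_decaps(sk, ct):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(2, "big")).digest()

# --- sliding-interval receiver --------------------------------------------------
class IntervalReceiver:
    def __init__(self, interval: int):
        self.interval = interval      # how many past epochs remain decapsulatable
        self.keys = {}                # epoch -> secret key
        self.epoch = -1
        self.renew()

    def renew(self):
        """Start a new epoch: fresh key pair in, keys older than the interval out."""
        self.epoch += 1
        sk, pk = kem_keygen()
        self.keys[self.epoch] = sk
        for old in [t for t in self.keys if t <= self.epoch - self.interval]:
            del self.keys[old]        # forward security: old epochs become undecryptable
        return self.epoch, pk

    def decaps(self, epoch: int, ct):
        if epoch not in self.keys:
            raise ValueError("epoch outside the receiver's interval")
        return kem_decaps(self.keys[epoch], ct)

if __name__ == "__main__":
    bob = IntervalReceiver(interval=2)
    t, pk = bob.renew()
    ct, k = kem_encaps(pk)
    assert bob.decaps(t, ct) == k      # within the interval: decapsulation works
    bob.renew(); bob.renew()
    try:
        bob.decaps(t, ct)              # epoch t has been evicted: forward security
    except ValueError:
        pass
```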

We also extend this notion of IKEM for settings in which Bob decapsulates (much) later than Alice encapsulates (e.g., in high-latency or segmented networks): if a third user Charlie forwards Alice's ciphertext to Bob and, additionally, knows a recently renewed public key of Bob's, Charlie could re-encrypt the ciphertext for better PCS. We call this extended notion IKEMR. Our first IKEMR construction based on trapdoor permutations has (almost) constant sized ciphertexts in the number of re-encryptions; and our second IKEMR construction based on FS-PKE has constant sized public keys in the interval size.

Finally, to bypass our lower bound on the IKEM(R) secret key size, which must be linear in the interval size, we develop a new Interval RAM primitive with which Bob only stores a constant sized part of his secret key locally, while outsourcing the rest to a (possibly adversarial) server.

For all our constructions, we achieve security against active adversaries. For this, we obtain new insights on Replayable CCA security for KEM-type primitives, which might be of independent interest.
Wouter Castryck, Mingjie Chen, Riccardo Invernizzi, Gioella Lorenzon, Frederik Vercauteren
ePrint Report
We present a key recovery attack on SQIsign2D-East that reduces its security level from $\lambda$ to $\lambda/2$. We exploit the fact that each signature leaks a Legendre symbol modulo the secret degree of the private key isogeny. About $\lambda/2$ signatures are enough for these Legendre symbols to fully determine the secret degree, which can then be recovered by exhaustive search over a set of size $O(2^{\lambda/2})$. Once the degree is known, the private key isogeny itself can be found, again by exhaustive search, in time $\tilde{O}(2^{\lambda/2})$.
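The filtering idea behind such leakage can be illustrated with a toy simulation: each leaked Legendre symbol of a known value modulo the secret roughly halves the set of candidates, so on the order of $\log_2$ of the candidate-set size leaked symbols suffice before the final exhaustive search. The snippet below is a self-contained toy with small primes, not the actual attack on SQIsign2D-East.

```python
# Toy simulation of Legendre-symbol filtering; parameters are illustrative only.
import random

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion."""
    s = pow(a % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def sieve_primes(lo, hi):
    return [n for n in range(lo, hi)
            if n > 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))]

if __name__ == "__main__":
    candidates = sieve_primes(1000, 3000)   # stand-in for the exponential search space
    secret = random.choice(candidates)      # the secret (here prime) degree
    survivors = list(candidates)
    leaks = 0
    while len(survivors) > 1:
        a = random.randrange(2, 1000)       # value whose symbol one signature leaks
        sym = legendre(a, secret)
        survivors = [q for q in survivors if legendre(a, q) == sym]
        leaks += 1
    print(f"{len(candidates)} candidates narrowed to the secret after {leaks} leaked symbols")
    assert survivors == [secret]
```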

We also present a new version of the protocol which does not leak any such information about the private key and show that our modified protocol is more efficient than the original one. Finally, we give a security analysis as well as a new proof of security.
Cong Zhang, Keyu Ji, Taiyu Wang, Bingsheng Zhang, Hong-Sheng Zhou, Xin Wang, Kui Ren
ePrint Report
Ever since the seminal work of Diffie and Hellman, cryptographic (cyclic) groups have served as a fundamental building block for constructing cryptographic schemes and protocols. The security of these constructions can often be based on the hardness of (cyclic) group-based computational assumptions. The generic group model (GGM) has since been studied as an idealized model (Shoup, EuroCrypt 1997); it justifies the hardness of many (cyclic) group-based assumptions and shows the limits of some group-based cryptosystems. We stress that the importance of the length of the group encoding, whether in a concrete group-based construction or assumption or in the GGM, has not been studied.

In this work, we initiate a systematic study of the complexity of cryptographic groups and generic group models with varying lengths of group encodings, and demonstrate evidence that ``the length matters''. More concretely, we have the following results:

-- We show that there is no black-box/relativizing reduction from the CDH-secure groups (i.e., over such groups, the computational Diffie-Hellman assumption holds) with shorter encodings, to the CDH-secure groups with longer encodings, within the same security parameter. More specifically, given any arbitrary longer CDH-secure group, it is impossible to generically shorten the group encoding and obtain a shorter CDH-secure group within the same group order.

-- We show that there is a strict hierarchy of the GGMs with different lengths of encodings. That is, in the framework of indifferentiability, the shorter GGM is strictly stronger than the longer ones, even in the presence of computationally bounded adversaries.
Avi Mizrahi, Noam Koren, Ori Rottenstreich, Yuval Cassuto
ePrint Report
Merkle trees play a crucial role in blockchain networks in organizing network state. They allow proving a particular value of an entry in the state to a node that maintains only the root of the Merkle tree, a hash value computed over the data in a hierarchical manner. Verification of particular state entries is crucial for reaching consensus on the execution of a block whose transactions require state information during processing. For instance, a payment transaction should be based on the balance of the two involved accounts. The proof length affects the network communication and is typically logarithmic in the state size. In this paper, we take advantage of typical transaction characteristics to better organize Merkle trees and improve blockchain network performance. We focus on common transaction processing, where Merkle proofs are jointly provided for multiple accounts. We first provide lower bounds for the communication cost that are based on the distribution of accounts involved in the transactions. We then describe algorithms that exploit traffic patterns to reduce it significantly. The algorithms are inspired by various coding methods such as Huffman coding, partitioning, and weight balancing. We also generalize our approach towards the encoding of smart contract transactions that involve an arbitrary number of accounts. We also rely on real blockchain data to show the savings allowed by our approach. The experimental evaluation is based on transactions from the Ethereum network and demonstrates cost reduction for both payment transactions and smart contract transactions.
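The Huffman-inspired idea can be sketched as follows: if account access frequencies are skewed, building the tree so that hot accounts sit near the root shortens their proofs on average. The snippet below builds a Huffman tree over assumed Zipf-like access weights and compares the expected proof length with that of a balanced tree; it illustrates the principle only and is not the paper's algorithms.

```python
# Huffman-shaped placement of accounts: frequently accessed leaves get shallow
# depths, hence shorter Merkle proofs. Weights below are an assumed Zipf profile.
import heapq

def huffman_depths(weights):
    """Depth of each leaf in a Huffman tree built over the given access weights."""
    heap = [(w, i, ("leaf", i)) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    counter = len(weights)
    while len(heap) > 1:
        w1, _, n1 = heapq.heappop(heap)
        w2, _, n2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, counter, ("node", n1, n2)))
        counter += 1
    depths = [0] * len(weights)
    stack = [(heap[0][2], 0)]
    while stack:
        node, d = stack.pop()
        if node[0] == "leaf":
            depths[node[1]] = d
        else:
            stack.append((node[1], d + 1))
            stack.append((node[2], d + 1))
    return depths

if __name__ == "__main__":
    weights = [1.0 / r for r in range(1, 1025)]   # a few hot accounts dominate traffic
    total = sum(weights)
    depths = huffman_depths(weights)
    expected_weighted = sum(w * d for w, d in zip(weights, depths)) / total
    expected_balanced = 10                        # log2(1024) hashes per proof
    print(f"expected proof length: balanced {expected_balanced:.2f} hashes, "
          f"access-aware {expected_weighted:.2f} hashes")
```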
Maha Allouzi, Arefeh Rahaei
ePrint Report
Cryptography is a crucial method for ensuring the security of communication and data transfers across networks. While it excels on devices with abundant resources, such as PCs, servers, and smartphones, it may encounter challenges when applied to resource-constrained Internet of Things (IoT) devices like Radio Frequency Identification (RFID) tags and sensors. To address this issue, a demand arises for a lightweight variant of cryptography known as lightweight cryptography (LWC). In developing any cryptographic algorithm, the substitution box (S-box) is a fundamental component, providing nonlinear functionality between inputs and outputs. Researchers have proposed diverse S-box designs tailored for various applications, but only a few manage to balance the trade-offs among cost, performance, and security, particularly in the context of resource-constrained IoT devices. This paper delves into the realm of S-boxes employed in popular LWC algorithms, categorizing them by their input–output bit sizes and elucidating their strengths and limitations. The focus then shifts to a novel 5-bit S-box design, utilizing chaotic mapping theory to introduce randomized behavior. Subsequently, the paper proposes TentLogiX, a 5-bit S-box constructed from a compound chaotic (tent-logistic) system, which offers better chaotic performance and wider sequences, and explores its security robustness through various cryptanalysis techniques, including bijective analysis, nonlinearity assessment, linearity evaluation, and differential cryptanalysis. The paper concludes by presenting a thorough comparison that underscores the superiority of the TentLogiX 5-bit S-box over its 5-bit counterparts.
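The general recipe for chaos-based S-box generation can be sketched briefly: iterate a chaotic map, rank the resulting samples, and read the ranking off as a permutation. In the illustrative Python sketch below, a plain logistic map stands in for the paper's compound tent-logistic system, so it produces a generic bijective 5-bit S-box rather than TentLogiX itself.

```python
# Generic chaos-based 5-bit S-box generation; the logistic map and its parameters
# are assumptions for illustration, not the TentLogiX construction.

def chaotic_sequence(x0: float, r: float, skip: int, count: int):
    """Logistic map x -> r*x*(1-x); 'skip' burns in transient iterations."""
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(count):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

def sbox_from_chaos(x0=0.4123, r=3.99):
    seq = chaotic_sequence(x0, r, skip=1000, count=32)
    order = sorted(range(32), key=lambda i: seq[i])   # rank the 32 chaotic samples
    sbox = [0] * 32
    for rank, idx in enumerate(order):
        sbox[idx] = rank                              # the rank at position i becomes S[i]
    return sbox

if __name__ == "__main__":
    s = sbox_from_chaos()
    assert sorted(s) == list(range(32))               # bijective on 5-bit inputs
    print(s)
```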
Thomas Szymkowiak, Endrit Isufi, Markku-Juhani Saarinen
ePrint Report
The RISC-V Vector Cryptography Extensions (Zvk) were ratified in 2023 and integrated into the main ISA manuals in 2024. These extensions support high-speed symmetric cryptography (AES, SHA2, SM3, SM4) operating on the vector register file and offer significant performance improvements over scalar cryptography extensions (Zk) due to data parallelism. As a ratified extension, Zvk is supported by compiler toolchains and is already being integrated into popular cryptographic middleware such as OpenSSL. We report on Marian, the first open-source hardware implementation of a vector processor with the Zvk extensions. The design is based on the PULP ``Ara'' vector unit, which itself is an extension of the popular CVA6 processor. The implementation is in SystemVerilog and has been tested using Virtex Ultrascale+ FPGA prototyping, with a planned tapeout targeting a 22nm process node. We offer an analysis of the architectural requirements that vector cryptography imposes on a processor, as well as the initial estimates of performance and area for our implementation.
Hari Krishnan P. Anilkumar, Varun Narayanan, Manoj Prabhakaran, Vinod M. Prabhakaran
ePrint Report
A significant body of work in information-theoretic cryptography has been devoted to the fundamental problem of understanding the power of randomness in private computation. This has included both in-depth study of the randomness complexity of specific functions (e.g., Couteau and Rosén, ASIACRYPT 2022, gives an upper bound of 6 for n-party $\mathsf{AND}$), and results for broad classes of functions (e.g., Kushilevitz et al. STOC 1996, gives an $O(1)$ upper bound for all functions with linear-sized circuits). In this work, we make further progress on both fronts by studying randomness complexity in a new simple model of secure computation called the Private Sequential Stateless (PSS) model. We show that functions with $O(1)$ randomness complexity in the PSS model are exactly those with constant-width branching programs, restricting to “speak-constant-times” protocols and to “read-constant-times” branching programs. Towards this, our main construction is a novel PSS protocol for “strongly regular branching programs” (SRBP). As we show, any constant-width branching program can be converted to a constant-width SRBP, yielding one side of our characterization. The converse direction uses ideas from Kushilevitz et al. to translate randomness to communication. Our protocols are concretely efficient, have a simple structure, and cover the broad class of functions with small-width, read-once (or read-a-few-times) branching programs, and hence may be of practical interest when 1-privacy is considered adequate. Also, as a consequence of our general result for SRBPs, we obtain an improvement over the protocol of Couteau and Rosén for $\mathsf{AND}$ in certain cases — not in terms of the number of bits of randomness, but in terms of a simpler protocol structure (sequential, stateless).
Ling Song, Huimin Liu, Qianqian Yang, Yincen Chen, Lei Hu, Jian Weng
ePrint Report
At Asiacrypt 2022, a holistic key guessing strategy was proposed to yield the most efficient key recovery for the rectangle attack. Recently, at Crypto 2023, a new cryptanalysis technique--the differential meet-in-the-middle (MITM) attack--was introduced. Inspired by these two previous works, we present three generic key recovery attacks in this paper. First, we extend the holistic key guessing strategy from the rectangle to the differential attack, proposing the generic classical differential attack (GCDA). Next, we combine the holistic key guessing strategy with the differential MITM attack, resulting in the generalized differential MITM attack (GDMA). Finally, we apply the MITM technique to the rectangle attack, creating the generic rectangle MITM attack (GRMA). In terms of applications, we improve 12/13-round attacks on AES-256. For 12-round AES-256, by using the GDMA, we reduce the time complexity by a factor of $2^{62}$; by employing the GCDA, we reduce both the time and memory complexities by factors of $2^{61}$ and $2^{56}$, respectively. For 13-round AES-256, we present a new differential attack with data and time complexities of $2^{89}$ and $2^{240}$, where the data complexity is $2^{37}$ times lower than previously published results. These are currently the best attacks on AES-256 using only two related keys. For KATAN-32, we increase the number of rounds covered by the differential attack from 115 to 151 in the single-key setting using the basic differential MITM attack (BDMA) and GDMA. Furthermore, we achieve the first 38-round rectangle attack on SKINNYe-64-256 by using the GRMA.
Saikrishna Badrinarayanan, Peihan Miao, Xinyi Shi, Max Tromanhauser, Ruida Zeng
ePrint Report
Private set intersection (PSI) allows two mutually distrusting parties, each holding a private set of elements, to learn the intersection of their sets without revealing anything beyond the intersection. Recent work (Badrinarayanan et al., PoPETS'22) initiates the study of updatable PSI (UPSI), which allows the two parties to compute PSI on a regular basis with sets that constantly get updated, where both the computation and communication complexity only grow with the size of the small updates and not the large entire sets. However, there are several limitations of their presented protocols. First, they can only be used to compute the plain PSI functionality and do not support extended functionalities such as PSI-Cardinality and PSI-Sum. Second, they only allow parties to add new elements to their existing set and do not support arbitrary deletion of elements. Finally, their addition-only protocols either require both parties to learn the output or only achieve low complexity in an amortized sense and incur linear worst-case complexity.

In this work, we address all the above limitations. In particular, we study UPSI with semi-honest security in both the addition-only and addition-deletion settings. We present new protocols for both settings that support plain PSI as well as extended functionalities including PSI-Cardinality and PSI-Sum, achieving one-sided output (which implies two-sided output). In the addition-only setting, we also present a protocol for a more general functionality Circuit-PSI that outputs secret shares of the intersection. All of our protocols have worst-case computation and communication complexity that only grow with the set updates instead of the entire sets (except for a polylogarithmic factor). We implement our new UPSI protocols and compare with the state-of-the-art protocols for PSI and extended functionalities. Our protocols compare favorably when the total set sizes are sufficiently large, the new updates are sufficiently small, or in networks with low bandwidth.
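For background only, the classic DDH-style PSI sketch below illustrates the plain (non-updatable) functionality: both parties exponentiate hashed elements with secret keys, and doubly exponentiated values match exactly on the intersection. The group is toy-sized, both keys live in one function purely to simulate the two-party flow, and this baseline has none of the updatable or extended-functionality properties the paper targets.

```python
# Background sketch of classic DDH-style PSI; demo-sized group, illustration only.
import hashlib
import secrets

P, Q = 1019, 509                      # toy safe-prime group (far too small for real use)

def h2g(item: str) -> int:
    """Hash an item into the quadratic-residue subgroup."""
    return pow(int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P, 2, P)

def psi(set_a, set_b):
    a = secrets.randbelow(Q - 1) + 1  # Alice's secret exponent
    b = secrets.randbelow(Q - 1) + 1  # Bob's secret exponent
    # Alice sends H(x)^a; Bob raises each to b.
    a_twice = {pow(pow(h2g(x), a, P), b, P) for x in set_a}
    # Bob sends H(y)^b; Alice raises each to a and compares.
    return {y for y in set_b if pow(pow(h2g(y), b, P), a, P) in a_twice}

if __name__ == "__main__":
    print(psi({"alice@x.org", "bob@y.org"}, {"bob@y.org", "carol@z.org"}))
    # -> {'bob@y.org'}
```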
Riccardo Taiello, Emre Tosun, Alberto Ibarrondo, Hervé Chabanne, Melek Önen
ePrint Report
Monchi is a new protocol aimed at privacy-preserving biometric identification. It begins with score computation in the encrypted domain using homomorphic encryption and ends with comparisons of these scores to a given threshold using function secret sharing. Here we study the integration, in that context, of score computation techniques recently introduced by Bassit et al. that eliminate homomorphic multiplications by replacing them with lookup tables. First, we extend this lookup-table biometric recognition solution by adding the use of function secret sharing for the final comparison of scores. Then, we introduce a two-party computation of the scores with lookup tables which fits nicely together with the function-secret-sharing score comparison. Our solutions fit well with the flight boarding use case introduced by Monchi.
Eric Verheul
ePrint Report
We propose a wallet-provider-issued attestation called Wallet Trust Evidence (WTE) and three related specific instructions for the European Digital Identity (EUDI) Wallet cryptographic hardware, most notably the generation of a Proof of Association (PoA). These allow the EUDI Wallet to provide verifiable assurance to third parties (issuers, relying parties) that attestation private keys are not only bound to conformant cryptographic hardware but also bound to the same such hardware. This allows the EUDI Wallet to meet eIDAS Level of Assurance ``high'' as well as to operate in a privacy-friendly manner. The instructions specified in this document cater for convenient implementation in all envisioned EUDI Wallet architectures, including those based on a GlobalPlatform-based Secure Element such as an eID card or an embedded SIM (eSIM). Owing to their simplicity, the three instructions also allow for convenient Common Criteria certification. This document is a further refinement and cryptographic concretization of the WTE/PoA logic specified in the wallet Architecture and Reference Framework (ARF), which is based on the EPIC-09 result developed in a cooperation between the NI-Scy consortium and the eIDAS expert group. However, the present draft document is meant for discussion only and has not been approved by the NI-Scy consortium, the eIDAS expert group, or the Dutch government.
Oğuz Yayla, Yunus Emre Yılmaz
ePrint Report
Physical Unclonable Functions (PUFs) are basic and useful primitives in designing cryptographic systems. PUFs are designed to facilitate device authentication, secure boot and firmware integrity, and secure communications. To achieve these objectives, PUFs must exhibit both consistent repeatability and instance-specific randomness. The Arbiter PUF, recognized as the first silicon PUF, is capable of generating a substantial number of secret keys instantaneously based on the input, all while maintaining a lightweight design. This advantageous characteristic makes it particularly well-suited for device authentication in applications with constrained resources, especially for Internet-of-Things (IoT) devices. Despite these advantages, arbiter PUFs are vulnerable to machine learning attacks. Hence, arbiter PUF designs have been improved to achieve increased resistance against such attacks. In this work, machine-learning-resistant 32-bit and 64-bit component-differentially challenged XOR Arbiter PUFs (CDC-XPUFs) are implemented based on a design found in the literature. The system is implemented using the ZC702 Rev1.1 Evaluation Board, which features the Xilinx Zynq 7020 SoC, and utilizes a configuration involving three boards for experimental validation. The 32-bit and 64-bit 7-stream CDC-7-XPUFs are evaluated using PUF metrics in the literature, and the utilization ratio of both implementations is also presented. These improvements aim to increase resilience against machine learning attacks while maintaining usefulness and efficiency for IoT applications.
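The vulnerability that motivates XOR/CDC variants can be sketched with the standard additive delay model: a plain arbiter PUF's response is the sign of a linear function of the challenge's parity features, so a simple learner recovers it from challenge-response pairs. The Python sketch below simulates such a PUF and attacks it with a perceptron; the parameters are assumptions for illustration and the code is unrelated to the FPGA implementation in the paper.

```python
# Additive delay model of an arbiter PUF plus a toy perceptron "ML attack".
import random

N_STAGES = 64

def parity_features(challenge):
    """Phi(c): feature i is the +/-1 parity of challenge bits i..n-1, plus a bias term."""
    feats, prod = [], 1
    for bit in reversed(challenge):
        prod *= 1 - 2 * bit
        feats.append(prod)
    feats.reverse()
    return feats + [1]

def make_puf():
    return [random.gauss(0, 1) for _ in range(N_STAGES + 1)]   # stage delay differences

def response(puf, challenge):
    return 1 if sum(w * f for w, f in zip(puf, parity_features(challenge))) > 0 else 0

if __name__ == "__main__":
    random.seed(1)
    puf = make_puf()
    challenges = [[random.randint(0, 1) for _ in range(N_STAGES)] for _ in range(4000)]
    crps = [(c, response(puf, c)) for c in challenges]          # collected CRPs

    model = [0.0] * (N_STAGES + 1)                              # perceptron weights
    for _ in range(20):
        for c, r in crps:
            feats = parity_features(c)
            pred = 1 if sum(w * f for w, f in zip(model, feats)) > 0 else 0
            if pred != r:
                sign = 1 if r == 1 else -1
                model = [w + sign * f for w, f in zip(model, feats)]

    def predict(c):
        return 1 if sum(w * f for w, f in zip(model, parity_features(c))) > 0 else 0

    test = [[random.randint(0, 1) for _ in range(N_STAGES)] for _ in range(1000)]
    acc = sum(predict(c) == response(puf, c) for c in test) / len(test)
    print(f"perceptron predicts fresh responses with accuracy {acc:.2%}")
```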
Oğuz Yayla, Yunus Emre Yılmaz
ePrint Report
Phase-locked loops (PLLs) integrated within field-programmable gate arrays (FPGAs) or System-on-Chip FPGAs (SoCs) represent a promising approach for generating random numbers. Their widespread deployment, isolated functionality within these devices, and robust entropy, as demonstrated in prior studies, position PLL-based true random number generators (PLL-TRNGs) as highly viable solutions for this purpose. This study explicitly examines PLL-TRNG implementations using the ZC702 Rev1.1 evaluation board featuring the Zynq 7020 SoC from Xilinx, utilizing a configuration involving three such boards for experimental validation. Parameters governing the PLL-TRNG are optimized using a backtracking algorithm. Additionally, a novel methodology is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. Performance metrics are rigorously evaluated against the criteria set by the German Federal Office for Information Security (BSI) AIS-20/31 Tests, accompanied by detailed descriptions of the implementation process.
Nan Wang, Dongxi Liu
ePrint Report
Bit-decomposition-based zero-knowledge range proofs in the discrete logarithm (DLOG) setting with a transparent setup, e.g., Bulletproof (IEEE S&P '18), Flashproof (ASIACRYPT '22), and SwiftRange (IEEE S&P '24), have garnered widespread popularity across various privacy-enhancing applications. These proofs aim to prove that a committed value falls within the non-negative range $[0, 2^N-1]$ without revealing it, where $N$ represents the bit length of the range. Despite their prevalence, the current implementations still suffer from suboptimal performance. Some exhibit reduced communication costs at the expense of increased computational costs, while others experience the opposite. Presently, users are compelled to utilize these proofs in scenarios demanding stringent requirements for both communication and computation efficiency.

In this paper, we introduce FlashSwift, a stronger DLOG-based logarithmic-sized alternative. It stands out for its smaller proof sizes and significantly enhanced computational efficiency compared with the cutting-edge logarithmic-sized ones for the most common ranges where $N \leq 64$. It is developed by integrating the techniques from Flashproof and SwiftRange without using a trusted setup. The substantial efficiency gains stem from our dedicated efforts in overcoming the inherent incompatibility barrier between the two techniques. Specifically, when $N=64$, our proof achieves the same size as Bulletproof and exhibits 1.1$\times$ the communication efficiency of SwiftRange. More importantly, compared with the two, it achieves $2.3\times$ and $1.65\times$ proving efficiency, and $3.2\times$ and $1.7\times$ verification efficiency, respectively. At the time of writing, our proof also creates two new records of the smallest proof sizes, 289 bytes and 417 bytes, for 8-bit and 16-bit ranges among all the bit-decomposition-based ones without requiring trusted setups. Moreover, to the best of our knowledge, it is the first {\em configurable} range proof that is adaptable to various scenarios with different specifications, where the configurability allows trading off communication efficiency for computational efficiency. In addition, we offer a bonus feature: FlashSwift supports the aggregation of multiple single proofs for efficiency improvement. Finally, we provide comprehensive performance benchmarks against the state-of-the-art ones to demonstrate its practicality.
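The arithmetic relation that bit-decomposition range proofs establish can be checked in the clear with a few lines: commit to $v$, commit to each bit $b_i$, and verify that the bit commitments recombine homomorphically with weights $2^i$ to a commitment of $v$ under aggregated randomness. The toy sketch below (tiny group, arbitrarily chosen generators) shows only this relation; the zero-knowledge machinery, and FlashSwift itself, are not reproduced.

```python
# Pedersen commitments to a value and to its bits, with the homomorphic
# recombination that bit-decomposition range proofs prove in zero knowledge.
import secrets

P, Q = 1019, 509          # tiny safe-prime group, demo only
G, H = 4, 9               # two quadratic residues used as Pedersen generators

def commit(value: int, rand: int) -> int:
    return pow(G, value, P) * pow(H, rand, P) % P

if __name__ == "__main__":
    N = 8                                  # range [0, 2^N - 1]
    v = 173
    bits = [(v >> i) & 1 for i in range(N)]
    assert sum(b << i for i, b in enumerate(bits)) == v

    bit_rands = [secrets.randbelow(Q) for _ in range(N)]
    bit_commits = [commit(b, r) for b, r in zip(bits, bit_rands)]

    # prod_i C_i^(2^i) commits to v under randomness sum_i 2^i * r_i.
    combined = 1
    for i, c in enumerate(bit_commits):
        combined = combined * pow(c, 1 << i, P) % P
    agg_rand = sum(r << i for i, r in enumerate(bit_rands)) % Q
    assert combined == commit(v, agg_rand)
```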
Johann Knechtel, Mohammad Eslami, Peng Zou, Min Wei, Xingyu Tong, Binggang Qiu, Zhijie Cai, Guohao Chen, Benchao Zhu, Jiawei Li, Jun Yu, Jianli Chen, Chun-Wei Chiu, Min-Feng Hsieh, Chia-Hsiu Ou, ...
ePrint Report
Hardware Trojans (HTs) are a longstanding threat to secure computation. Among different threat models, it is the fabrication-time insertion of additional malicious logic directly into the layout of integrated circuits (ICs) that constitutes the most versatile, yet challenging, scenario for both attackers and defenders.

Here, we present a large-scale, first-of-its-kind community effort through red-versus-blue teaming that thoroughly explores this threat. Four independently competing blue teams of 23 IC designers in total had to analyze and fix vulnerabilities of representative IC layouts, whereas a red team of 3 experts in hardware security and IC design continuously pushed the boundaries of these defense efforts through different HTs and novel insertion techniques. Importantly, we find that, despite the blue teams’ commendable efforts, even highly-optimized layouts retained at least some exploitable vulnerabilities.

Our effort follows a real-world setting for a modern 7nm technology node and industry-grade tooling for IC design, all embedded into a fully-automated and extensible benchmarking framework. To ensure the relevance of this work, strict rules that adhere to real-world requirements for IC design and manufacturing were postulated by the organizers. For example, not a single violation of timing or design-rule checks was allowed for defense techniques. Besides, in an advancement over prior art, neither red nor blue teams were allowed to use any so-called fillers and spares for trivial attack or defense approaches.

Finally, we release all methods and artifacts: the representative IC layouts and HTs, the devised attack and defense techniques, the evaluation metrics and setup, the technology setup and commercial-grade reference flow for IC design, the encompassing benchmarking framework, and all best results. This full release enables the community to continue exploring this important challenge for hardware security, in particular to focus on the urgent need for further advancements in defense strategies.
Suparna Kundu, Quinten Norga, Angshuman Karmakar, Shreya Gangopadhyay, Jose Maria Bermudo Mera, Ingrid Verbauwhede
ePrint Report
Recently, the construction of cryptographic schemes based on hard lattice problems has gained immense popularity. Apart from being quantum resistant, lattice-based cryptography allows a wide range of variations in the underlying hard problem. As cryptographic schemes can work in different environments under different operational constraints such as memory footprint, silicon area, efficiency, power requirement, etc., such variations in the underlying hard problem are very useful for designers to construct different cryptographic schemes. In this work, we explore various design choices of lattice-based cryptography and their impact on performance in the real world. In particular, we propose a suite of key-encapsulation mechanisms based on the learning-with-rounding problem with a focus on improving different performance aspects of lattice-based cryptography. Our suite consists of three schemes. Our first scheme is Florete, which is designed for efficiency. The second scheme is Espada, which is aimed at improving parallelization, flexibility, and memory footprint. The last scheme is Sable, which can be considered an improved version, in terms of key sizes and parameters, of the Saber key-encapsulation mechanism, one of the finalists in the National Institute of Standards and Technology's post-quantum standardization procedure. We describe our design rationale behind each scheme. Further, to justify our design decisions, we provide software and hardware implementations. Our results show Florete is faster than most state-of-the-art KEMs on software platforms. For example, the key-generation algorithm of the high-security version of Florete outperforms the National Institute of Standards and Technology's standard Kyber by $47\%$, the Federal Office for Information Security's standard Frodo by $99\%$, and Saber by $57\%$ on the ARM Cortex-M4 platform. Similarly, in hardware, Florete outperforms Frodo and NTRU Prime for all KEM operations. The scheme Espada requires less memory and area than the implementation of most state-of-the-art schemes. For example, the encapsulation algorithm of the high-security version of Espada uses $30\%$ less stack memory than Kyber, $57\%$ less stack memory than Frodo, and $67\%$ less stack memory than Saber on the ARM Cortex-M4 platform. The implementations of Sable maintain a trade-off between Florete and Espada regarding software performance and memory requirements. Sable outperforms Saber by at least $6\%$ and Frodo by $99\%$. Through an efficient polynomial multiplier design, which exploits the small secret size, Sable outperforms most state-of-the-art KEMs, including Saber, Frodo, and NTRU Prime. The implementations of Sable that use number-theoretic-transform-based polynomial multiplication (SableNTT) surpass, in performance, all the state-of-the-art schemes optimized for speed on the Cortex-M4 platform. The performance benefit of SableNTT lies between $7$-$29\%$ against Kyber, $2$-$13\%$ against Saber, and around $99\%$ against Frodo.
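As background, the learning-with-rounding samples underlying these schemes can be sketched in a few lines: instead of adding explicit noise as in LWE, the inner product $\langle \mathbf{a}, \mathbf{s} \rangle$ is deterministically rounded from $\mathbb{Z}_q$ to $\mathbb{Z}_p$, and the rounding error plays the role of the noise. The parameters below are toy values, not those of Florete, Espada, or Sable.

```python
# Minimal learning-with-rounding (LWR) sample generation; toy parameters only.
import secrets

Q = 2 ** 13            # large modulus (toy)
P_MOD = 2 ** 10        # rounding modulus (toy)
N = 256                # dimension (toy)

def round_q_to_p(x: int) -> int:
    """Scale x from Z_q to Z_p and round to the nearest integer."""
    return ((x * P_MOD + Q // 2) // Q) % P_MOD

def lwr_sample(secret):
    a = [secrets.randbelow(Q) for _ in range(N)]
    inner = sum(ai * si for ai, si in zip(a, secret)) % Q
    return a, round_q_to_p(inner)

if __name__ == "__main__":
    s = [secrets.randbelow(3) - 1 for _ in range(N)]   # small ternary secret
    a, b = lwr_sample(s)
    # The rounding hides the low-order bits of <a, s>; recovering s from many
    # such (a, b) pairs is the LWR problem.
    print(b)
```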
Weihao Wang, Shuai Han, Shengli Liu
ePrint Report
Anamorphic encryptions and anamorphic signatures assume a double key pre-shared between two parties so as to enable the transmission of covert messages. How to securely and efficiently distribute a double key under the dictator's surveillance is a central problem for anamorphic cryptography, especially when the users are forced to surrender their long-term secret keys or even the randomness used in the algorithms to the dictator.

In this paper, we propose Anamorphic Authenticated Key Exchange (AM-AKE) to solve the problem. Similar to anamorphic encryption, AM-AKE contains a set of anamorphic algorithms besides the normal algorithms. With the help of the anamorphic algorithms in AM-AKE, the initiator and the responder are able to exchange not only a session key but also a double key. We define robustness and security notions for AM-AKE, and also prove some impossibility results on plain AM-AKE whose anamorphic key generation algorithm only outputs a key pair. To bypass the impossibility results, we work on two sides.

-- On the one hand, for plain AM-AKE, the security notions have to be relaxed to resist only passive attacks from the dictator. Under this setting, we propose a generic construction of two-pass plain AM-AKE from a two-pass AKE with partially randomness-recoverable algorithms.

-- On the other hand, we consider (non-plain) AM-AKE whose key generation algorithm also outputs an auxiliary trapdoor besides the key pairs. We require new properties of the AKE: its key generation algorithm must have secret extractability and its other algorithms separability. Based on such a two-pass AKE, we propose a generic construction of two-pass (non-plain) AM-AKE. The resulting AM-AKE enjoys not only robustness but also strong security against a dictator who knows both users' secret keys and even the internal randomness of the AKE algorithms and mounts active attacks.

Finally, we present concrete AM-AKE schemes from the popular SIG+KEM paradigm and three-KEM paradigm for constructing AKE.

14 September 2024

Bangalore, India, 3 March - 6 March 2025
Event Calendar
Event date: 3 March to 6 March 2025
Submission deadline: 15 October 2024
Notification: 15 November 2024