International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

17 January 2024

Darius Mercadier, Viet Sang Nguyen, Matthieu Rivain, Aleksei Udovenko
ePrint Report
Software obfuscation is a powerful tool to protect the intellectual property or secret keys inside programs. Strong software obfuscation is crucial in the context of untrusted execution environments (e.g., subject to malware infection) or to face potentially malicious users trying to reverse-engineer a sensitive program. Unfortunately, the state-of-the-art of pure software-based obfuscation (including white-box cryptography) is either insecure or infeasible in practice.

This work introduces OBSCURE, a versatile framework for practical and cryptographically strong software obfuscation relying on a simple stateless secure element (to be embedded, for example, in a protected hardware chip or a token). Based on the foundational result by Goyal et al. from TCC 2010, our scheme enjoys provable security guarantees, and further focuses on practical aspects, such as efficient execution of the obfuscated programs, while maintaining simplicity of the secure element. In particular, we propose a new rectangular universalization technique, which is also of independent interest. We provide an implementation of OBSCURE taking as input a program source code written in a subset of the C programming language. This ensures usability and a broad range of applications of our framework. We benchmark the obfuscation on simple software programs as well as on cryptographic primitives, hence highlighting the possible use cases of the framework as an alternative to pure software-based white-box implementations.
Loïc Demange, Mélissa Rossi
ePrint Report
BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of NIST’s standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied in a 2016 paper by Cong Chen et al. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance. In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit of bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai, and Wagner (Crypto 2003) model. More practically, we also provide an open-source C masked implementation of the key-generation, encapsulation, and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than existing masked lattice-based algorithms, the scaling in the masking order is still encouraging and no Boolean-to-Arithmetic conversion has been used. We hope that this work can be a starting point for future analysis and optimization.
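
The paper's gadgets themselves are BIKE-specific, but the Ishai-Sahai-Wagner model it proves security in has a standard core gadget. Below is a minimal, generic sketch of an ISW-style masked AND over Boolean shares (a textbook illustration of the proof model, not the paper's code; the share count and names are our own):

    import secrets

    def xor_all(shares):
        out = 0
        for v in shares:
            out ^= v
        return out

    def share(bit, n):
        # split a bit into n Boolean shares that XOR back to the bit
        s = [secrets.randbits(1) for _ in range(n - 1)]
        return s + [bit ^ xor_all(s)]

    def isw_and(a, b):
        # ISW multiplication gadget: shares of AND(x, y) from shares of x and y
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbits(1)
                r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
        c = [a[i] & b[i] for i in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    c[i] ^= r[i][j]
        return c

    a, b = share(1, 3), share(1, 3)        # 3 shares = second-order masking
    assert xor_all(isw_and(a, b)) == 1     # recombines to 1 AND 1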
Moumita Dutta, Chaya Ganesh, Neha Jawalkar
ePrint Report
We propose protocols in the Compressed Sigma Protocol framework that achieve a succinct verifier. Towards this, we construct a new inner product argument and cast it in the Compressed Sigma Protocol (CSP) framework as a protocol for opening a committed linear form, achieving logarithmic verification.

We then use our succinct-verifier CSP to construct a zero-knowledge argument for circuit satisfiability (under the discrete logarithm assumption in bilinear groups) in the updatable Structured Reference String (SRS) setting that achieves $O(\log n)$ proof size and $O(\log n)$ verification complexity. Our circuit zero-knowledge protocol has concretely better proof/prover/verifier complexity compared to the state-of-the-art protocol in the updatable setting under the same assumption. Our technique for achieving verifier succinctness in the compression framework is of independent interest.

We then show a commitment scheme for committing to group elements using a structured commitment key. We construct protocols to open a committed homomorphism on a committed vector with verifier succinctness in the designated verifier setting. This has applications in making the verifier succinct in compressed sigma protocols for bilinear group arithmetic circuits.
Beyza Bozdemir, Betül Aşkın Özdemir, Melek Önen
ePrint Report
We propose PRIDA, a user-privacy-oriented, privacy-preserving data aggregation solution supporting multiple data customers. Most existing state-of-the-art approaches place too much importance on performance efficiency and ignore privacy properties other than input privacy. In general, solutions for data aggregation do not discuss the users’ fundamental rights, namely control over their own data and anonymity when they search for something in a browser or volunteer to participate in a survey; instead, they aim to secure the data customers’ rights (which should come second) and focus on efficiency-oriented data aggregation that enables input privacy only. We give priority to user privacy and design a data aggregation solution that keeps efficiency in balance. We show that PRIDA provides a good level of computational and communication complexity and even better timing results than recently published studies (i.e., Bonawitz et al. (CCS’17), Corrigan-Gibbs et al. (NSDI’17), Bell et al. (CCS’20), Addanki et al. (SCN’22)). We employ threshold homomorphic encryption and secure two-party computation to ensure the privacy properties, balancing the trade-off between a proper design for users and the desired privacy and efficiency.
Lipeng He
ePrint Report
Decentralized applications (DApps), which are innovative blockchain-powered software systems designed to serve as the fundamental building blocks for the next generation of Internet services, have witnessed exponential growth in recent years. This paper thoroughly compares and analyzes two blockchain-based decentralized storage networks (DSNs), which are crucial foundations for DApp and blockchain ecosystems. The study examines their respective mechanisms for data persistence, strategies for enforcing data retention, and token economics. In addition to delving into technical details, the suitability of each storage solution for decentralized application development is assessed, taking into consideration network performance, storage costs, and existing use cases. By evaluating these factors, the paper aims to provide insights into the effectiveness of these technologies in supporting the desirable properties of truly decentralized blockchain applications. In conclusion, the findings of this research are discussed and synthesized, offering valuable perspectives on the capabilities of these technologies. It sheds light on their potential to facilitate the development of DApps and provides an understanding of the ongoing trends in blockchain development.
Dev M. Mehta, Mohammad Hashemi, Domenic Forte, Shahin Tajik, Fatemeh Ganji
ePrint Report
A universal circuit (UC) can be thought of as a programmable circuit that can simulate any circuit up to a certain size by specifying its secret configuration bits. UCs have been incorporated into various applications, such as private function evaluation (PFE). Recently, studies have attempted to formalize the concept of semiconductor intellectual property (IP) protection in the context of UCs. This is despite the observations made in theory and practice that, in reality, the adversary may obtain additional information about the secret when executing cryptographic protocols. This paper aims to answer the question of whether UCs leak information unintentionally, which can be leveraged by the adversary to disclose the configuration bits. In this regard, we propose the first photon emission analysis against UCs relying on computer vision-based approaches. We demonstrate that the adversary can utilize a cost-effective solution to take images to be processed by off-the-shelf algorithms to extract configuration bits. We examine the efficacy of our method in two scenarios: (1) the design is small enough to be captured in a single image during the attack phase, and (2) multiple images should be captured to launch the attack by deploying a divide-and-conquer strategy. To evaluate the effectiveness of our attack, we use metrics commonly applied in side-channel analysis, namely rank and success rate. By doing so, we show that our profiled photon emission analysis achieves a success rate of 1 by employing a few templates (concretely, only 18 images were used as templates).
Seyedmohammad Nouraniboosjin, Fatemeh Ganji
ePrint Report
The past years have witnessed a considerable increase in research efforts put into neural network-assisted profiled side-channel analysis (SCA). Studies have also identified challenges, e.g., closing the gap between metrics for machine learning (ML) classification and side-channel attack evaluation. In fact, in the context of NN-assisted SCA, the NN’s output distribution forms the basis for successful key recovery. In this respect, related work has covered various aspects of integrating neural networks (NNs) into SCA, including applying a diverse set of NN models, model selection and training, hyperparameter tuning, etc. Nevertheless, one well-known fact has been overlooked in the SCA-related literature, namely NNs’ tendency to become “over-confident,” i.e., to assign an overly high probability to the predicted class (the secret key in the sense of SCA). Temperature scaling is among the powerful and effective techniques that have been devised as a remedy for this. From the principles of deep learning, it is known that temperature scaling does not affect the NN’s accuracy; however, its impact on metrics for secret key recovery, mainly guessing entropy, is worth investigating. This paper reintroduces temperature scaling into SCA and demonstrates that key recovery can become more effective through it. Interestingly, temperature scaling can be easily integrated into SCA, and no re-tuning of the network is needed. In doing so, temperature can be seen as a metric to assess the NN’s performance before launching the attack. In this regard, the impact of hyperparameter tuning, network variance, and capacity has been studied. This leads to recommendations on how network miscalibration and overconfidence can be prevented.
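
Temperature scaling itself is a standard calibration step. The toy sketch below (our illustration with made-up logits, not the authors' code) shows the key fact used above: dividing the logits by a temperature T > 1 softens over-confident probabilities without changing the predicted class, so per-trace accuracy stays the same while the log-likelihoods accumulated for key ranking are re-weighted.

    import numpy as np

    def softmax(z):
        z = z - z.max()                          # for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def scaled_probs(logits, T):
        return softmax(np.asarray(logits, dtype=float) / T)

    logits = np.array([8.0, 2.0, 1.0])           # an over-confident prediction
    for T in (1.0, 2.0, 5.0):                    # T = 1 is the unscaled network
        p = scaled_probs(logits, T)
        print(T, p.argmax(), np.round(p, 3))     # argmax is identical for every T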
Tianrun Yu, Chi Cheng, Zilong Yang, Yingchen Wang, Yanbin Pan, Jian Weng
ePrint Report
Number Theoretic Transform (NTT) has been widely used in accelerating computations in lattice-based cryptography. However, attackers can potentially launch power analysis targeting NTT because it is usually the most time-consuming part of the implementation. This extended time frame provides a natural window of opportunity for attackers. In this paper, we investigate the first CPU frequency leakage (Hertzbleed-like) attacks against NTT in lattice-based KEMs. Our key observation is that different inputs to NTT incur different Hamming weights in its output and intermediate layers. By measuring the CPU frequency during the execution of NTT, we propose a simple yet effective attack idea to find the input to NTT that triggers NTT processing data with significantly low Hamming weight. We further apply our attack idea to real-world applications that are built upon NTT: CPA-secure Kyber without Compression and Decompression functions, and CCA-secure NTTRU. This leads us to extract information or frequency Hints about the secret key. Integrating these Hints into the LWE-estimator framework, we estimate a minimum of $35\%$ security loss caused by the leakage. The frequency and timing measurements on the Reference and AVX2 implementations of NTT in both Kyber and NTTRU align well with our theoretical analysis, confirming the existence of frequency side-channel leakage in NTT. It is important to emphasize that our observation is not limited to a specific implementation but rather the algorithm on which NTT is based. Therefore, our results call for more attention to the analysis of power leakage against NTT in lattice-based cryptography.
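
The core observation, that the Hamming weight of the NTT output depends strongly on the input, can be reproduced with a toy transform. The sketch below is our own illustration with small parameters (q = 257, n = 16, rather than Kyber's q = 3329 and n = 256); it compares the total output Hamming weight of a sparse input against a random one.

    import random

    q, n = 257, 16                      # toy parameters; Kyber uses q = 3329, n = 256

    def find_root():
        # element of multiplicative order exactly n = 16 modulo q
        for g in range(2, q):
            if pow(g, n, q) == 1 and pow(g, n // 2, q) != 1:
                return g

    def ntt(a, root):
        # schoolbook evaluation at the powers of root: O(n^2), fine for a demo
        return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q
                for i in range(n)]

    def hamming_weight(coeffs):
        return sum(bin(c).count("1") for c in coeffs)

    root = find_root()
    sparse = [1] + [0] * (n - 1)                          # low-entropy input
    rand = [random.randrange(q) for _ in range(n)]
    print(hamming_weight(ntt(sparse, root)), hamming_weight(ntt(rand, root)))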
Sanjay Deshpande, James Howe, Jakub Szefer, Dongze Yue
ePrint Report
This work presents the first hardware realisation of the Syndrome-Decoding-in-the-Head (SDitH) signature scheme, which is a candidate in the NIST PQC process for standardising post-quantum secure digital signature schemes. SDitH's hardness is based on conservative code-based assumptions, and it uses the Multi-Party-Computation-in-the-Head (MPCitH) construction. This is the first hardware design of a code-based signature scheme based on traditional decoding problems and only the second for MPCitH constructions, after Picnic. This work presents optimised designs to achieve the best area efficiency, which we evaluate using the Time-Area Product (TAP) metric. This work also proposes a novel hardware architecture by dividing the signature generation algorithm into two phases, namely offline and online phases, for optimising the overall clock cycle count. The hardware designs for key generation, signature generation, and signature verification are parameterised for all SDitH parameters, including the NIST security levels and both syndrome decoding base fields (GF256 and GF251), and thus conform to the SDitH specifications. The hardware design further supports secret share splitting and the hypercube optimisation, which can be applied in this and multiple other NIST PQC candidates. This work results in a hardware design with a drastic reduction in clock cycles compared to the optimised AVX2 software implementation, in the range of 2-4x for most operations. Our key generation outperforms software drastically, giving an 11-17x reduction in runtime, despite the software's significantly faster clock speed. On Artix 7 FPGAs we can perform key generation in 55.1 Kcycles, signature generation in 6.7 Mcycles, and signature verification in 8.6 Mcycles for NIST L1 parameters, which increase for GF251, and for L3 and L5 parameters.
Fangqi Dong, Zihan Hao, Ethan Mook, Daniel Wichs
ePrint Report
Laconic function evaluation (LFE) is a "flipped" version of fully homomorphic encryption, where the server performing the computation gets the output. The server commits itself to a function $f$ by outputting a small digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest in much less time than computing $f$, and ensure that the server only decrypts $f(x)$, but does not learn anything else about $x$. Prior works constructed LFE for circuits under LWE, and for Turing Machines (TMs) from indistinguishability obfuscation (iO). In this work we introduce LFE for Random-Access Machines (RAM-LFE). The server commits itself to a potentially huge database $y$ via a short digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest and the server decrypts $f(x,y)$ for some specified RAM program $f$ (e.g., a universal RAM), without learning anything else about $x$. The main advantage of RAM-LFE is that the server's decryption run-time only scales with the RAM run-time $T$ of the computation $f(x,y)$, which can be sublinear in both $|x|$ and $|y|$. We consider a weakly efficient variant, where the client's run-time is also allowed to scale linearly with $T$, but not $|y|$, and a fully efficient variant, where the client's run-time must be sublinear in both $T$ and $|y|$. We construct the former from doubly efficient private information retrieval (DEPIR) and laconic OT (LOT), both of which are known from RingLWE, and the latter from an additional use of iO. We then show how to leverage fully efficient RAM-LFE to also get (many-key) functional encryption for RAMs (RAM-FE) where secret keys are associated with big databases $y$ and the decryption time is sublinear in $|y|$, as well as iO for RAMs where the obfuscated program contains a big database $y$ and the evaluation time is sublinear in $|y|$.
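
As a syntax-only illustration of who holds what in RAM-LFE, the sketch below (entirely our placeholder, with no cryptographic hiding and hypothetical function names) mirrors the interface described above: the server publishes a short digest of its database y, the client encrypts x against the digest alone, and decryption yields f(x, y).

    import hashlib, pickle

    def digest(y):                       # server commits to a big database y
        return hashlib.sha256(pickle.dumps(y)).hexdigest()

    def enc(dig, x):                     # client encrypts x w.r.t. the digest only
        return {"digest": dig, "x": x}   # placeholder: a real scheme hides x

    def dec(ct, y, f):                   # server should learn f(x, y) and nothing else
        assert ct["digest"] == digest(y) # binding to the committed database
        return f(ct["x"], y)

    y = {"alice": 3, "bob": 7}
    ct = enc(digest(y), "bob")
    print(dec(ct, y, lambda x, db: db[x]))   # -> 7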
Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, Baocang Wang
ePrint Report
Recently, researchers have proposed many LWE estimators, such as lattice-estimator (Albrecht et al., Asiacrypt 2017) and leaky-LWE-Estimator (Dachman-Soled et al., Crypto 2020), the latter of which has already been used in estimating the security level of Kyber and Dilithium using only BKZ. However, we prove in this paper that solving LWE by combining a lattice reduction step (by LLL or BKZ) and a target vector searching step (by enumeration or sieving), which we call a Two-step mode, is more efficient than using only BKZ. Moreover, we give a refined LWE estimator in Two-step mode by analyzing the relationship between the probability distribution of the target vector and the solving success rate in a Two-step mode LWE solving algorithm. In contrast, the latest Two-step estimator for LWE, the “primal-bdd” mode in lattice-estimator, does not take into account some up-to-date results and lacks a thorough theoretical analysis. Under the same gate-count model, our estimation for NIST PQC standards drops by 2.1∼3.4 bits (2.2∼4.6 bits while considering more flexible blocksize and jump strategy) compared with leaky-LWE-Estimator. Furthermore, we also give a conservative estimation for LWE from the Two-step solving algorithm. Compared with the Core-SVP model, which is used in previous conservative estimations, our estimation relies on weaker assumptions and outputs higher evaluation results than the Core-SVP model. For NIST PQC standards, our conservative estimation is 4.17∼8.11 bits higher than the Core-SVP estimation. Hence our estimator can give a closer estimation for both the upper bound and lower bound of LWE hardness.
Tolun Tosun, Amir Moradi, Erkay Savaş
ePrint Report
This paper presents a novel and efficient way of exploiting side-channel leakage of masked implementations of lattice-based cryptography (LBC). The presented attack specifically targets the central reduction technique, which is widely adopted in efficient implementations of LBC. We show that the central reduction leads to a vulnerability by creating a strong dependency between the power consumption and the sign of sensitive intermediate variables. We exploit this dependency by introducing a novel hypothetical power model, the range power model, which can be employed in higher-order multi-query side-channel analysis attacks. We particularly show that our approach is valid for the prime moduli employed by Kyber and Dilithium, the lattice-based post-quantum algorithms selected by NIST, while it generalizes to other primes used in LBC as well. We practically evaluate our introduced approach by performing second-order non-profiled attacks against a masked implementation of Kyber on an Arm Cortex-M4 microprocessor. In our experiments we revealed the full secret key of the aforementioned implementation with only 2100 electromagnetic (EM) traces without profiling, achieving a more than 14 times reduction in the number of traces compared to classical attacks.
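
The sign dependency created by central reduction can be seen directly in software. The following sketch (our illustration, not the attack code) centrally reduces values modulo Kyber's q = 3329 and compares the average 16-bit Hamming weight of negative and non-negative results, which differ sharply because negative values are stored in two's complement.

    q = 3329  # Kyber's modulus

    def central_reduce(x):
        # reduce into the centred range (-q/2, q/2]
        r = x % q
        return r - q if r > q // 2 else r

    def hw16(v):
        return bin(v & 0xFFFF).count("1")  # Hamming weight of the 16-bit word

    neg = [hw16(central_reduce(x)) for x in range(q) if central_reduce(x) < 0]
    pos = [hw16(central_reduce(x)) for x in range(q) if central_reduce(x) >= 0]
    print(sum(neg) / len(neg), sum(pos) / len(pos))  # negatives carry far more set bits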
Marie Beth van Egmond, Vincent Dunning, Stefan van den Berg, Thomas Rooijakkers, Alex Sangers, Ton Poppe, Jan Veldsink
ePrint Report
Money laundering is a serious financial crime where criminals aim to conceal the illegal source of their money via a series of transactions. Although banks have an obligation to monitor transactions, it is difficult to track these illicit money flows since they typically span multiple banks, which cannot share this information due to privacy concerns. We present secure risk propagation, a novel efficient algorithm for money laundering detection across banks without violating privacy concerns. In this algorithm, each account is assigned a risk score, which is then propagated through the transaction network. In this article we present two results. Firstly, using data from a large Dutch bank, we show that it is possible to detect unusual activity using this model, with cash ratio as the risk score. With a recall of 20%, the precision improves from 15% to 40% by propagating the risk scores, reducing the number of false positives significantly. Secondly, we present a privacy-preserving solution for securely performing risk propagation over a joint, inter-bank transaction network. To achieve this, we use Secure Multi-Party Computation (MPC) techniques, which are particularly well-suited for the risk propagation algorithm due to its structural simplicity. We also show that the running time of this secure variant scales linearly in the number of accounts and transactions. For 200,000 transactions, two iterations of the secure algorithm between three virtual parties run within three hours on a consumer-grade server.
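
In plaintext, risk propagation can be sketched as a simple iterative score update over the transaction graph. The snippet below is our illustrative reading of such an update (the toy data, the weights, and the mixing rule are assumptions, not the paper's specification); the MPC variant would evaluate the same kind of arithmetic on secret-shared scores.

    # each transaction is (sender, receiver, amount); risk scores start from e.g. cash ratio
    transactions = [("A", "B", 100.0), ("B", "C", 80.0), ("D", "C", 50.0)]
    risk = {"A": 0.9, "B": 0.1, "C": 0.0, "D": 0.2}

    alpha, iterations = 0.5, 2                   # illustrative mixing weight and depth
    for _ in range(iterations):
        incoming = {acct: [] for acct in risk}
        total_in = {acct: 0.0 for acct in risk}
        for src, dst, amount in transactions:
            incoming[dst].append((src, amount))
            total_in[dst] += amount
        new_risk = {}
        for acct, score in risk.items():
            if total_in[acct] == 0:
                new_risk[acct] = score           # no incoming flows: keep own score
            else:
                mix = sum(risk[src] * amt for src, amt in incoming[acct]) / total_in[acct]
                new_risk[acct] = (1 - alpha) * score + alpha * mix
        risk = new_risk
    print(risk)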
Pierrick Méaux, Qingju Wang
ePrint Report
When designing filter functions in Linear Feedback Shift Registers (LFSR) based stream ciphers, algebraic criteria of Boolean functions such as the Algebraic Immunity (AI) become key characteristics because they guarantee the security of ciphers against the powerful algebraic attacks. In this article, we investigate a generalization of the algebraic attacks proposed by Courtois and Meier on filtered LFSR twenty years ago. We consider how the standard algebraic attack can be generalized beyond filtered LFSR to stream ciphers applying a Boolean filter function to an updated state. Depending on the updating process, we can use different sets of annihilators than the ones used in the standard algebraic attack; it leads to a generalization of the concept of algebraic immunity, and more efficient attacks. To illustrate these strategies, we focus on one of these generalizations and introduce a new notion called Extreme Algebraic Immunity (EAI).

We perform a theoretical study of the EAI criterion and explore its relation to other algebraic criteria. We prove the upper bound of the EAI of an n-variable Boolean function and further show that the EAI can be lower bounded by the AI restricted to a subset, as defined by Carlet, Méaux and Rotella at FSE 2017. We also exhibit functions with EAI guaranteed to be lower than the AI, in particular we highlight a pathological case of functions with optimal algebraic immunity and EAI of only n/4. As applications, we determine the EAI of filter functions of some existing stream ciphers and discuss how extreme algebraic attacks using EAI could apply to some ciphers.

Our generalized algebraic attack does not give a better complexity than Courtois and Meier's result on the existing stream ciphers. However, we see this work as a study to avoid weaknesses in the construction of future stream cipher designs.
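
For readers who want the baseline notion that EAI generalizes made concrete, the following brute-force sketch (ours, not the paper's code) computes the classical algebraic immunity of a small Boolean function by searching for low-degree annihilators over GF(2).

    from itertools import combinations, product

    def monomials(n, d):
        # all monomials of degree at most d over n variables (as index tuples)
        return [m for k in range(d + 1) for m in combinations(range(n), k)]

    def gf2_rank(rows):
        # Gaussian elimination over GF(2); rows are 0/1 lists
        basis = {}                               # leading bit -> reduced row (as int)
        for r in rows:
            x = sum(int(b) << i for i, b in enumerate(r))
            while x:
                lead = x.bit_length() - 1
                if lead in basis:
                    x ^= basis[lead]
                else:
                    basis[lead] = x
                    break
        return len(basis)

    def has_annihilator(f, n, d):
        # is there a nonzero g of degree <= d with g(x) = 0 whenever f(x) = 1 ?
        mons = monomials(n, d)
        rows = [[all(x[i] for i in m) for m in mons]
                for x in product((0, 1), repeat=n) if f(x)]
        return gf2_rank(rows) < len(mons)        # nontrivial nullspace exists

    def algebraic_immunity(f, n):
        for d in range(n + 1):
            if has_annihilator(f, n, d) or has_annihilator(lambda x: 1 - f(x), n, d):
                return d

    maj3 = lambda x: int(sum(x) >= 2)
    print(algebraic_immunity(maj3, 3))           # majority on 3 bits has AI = 2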
Julien Maillard, Thomas Hiscock, Maxime Lecomte, Christophe Clavier
ePrint Report
Hashing algorithms are one-way functions that are used in cryptographic protocols as Pseudo Random Functions (PRF), to assess data integrity or to create a Hash-based Message Authentication Code (HMAC). In many cryptographic constructions, secret data is processed with hashing functions. In these cases, recovering the input given to the hashing algorithm allows retrieving secret data. In this paper, we investigate the application of Soft Analytical Side-Channel Attacks (SASCA), based on a Belief Propagation (BP) framework, to recover the input of two popular hash function families: SHA-2 and SHA-3. Thanks to a simulation framework, we develop a comprehensive study of the attacker's recovery capacity depending on the hash function variant. Then, we demonstrate that an attacker can leverage prior knowledge on the hashing function input to increase the effectiveness of the attacks. As an example, in the context of a bootloader doing a hash-based integrity check on a secret firmware, we show that injecting simple statistics about the assembly code into BP improves input recovery. Finally, we study the security implications of SASCA on cryptosystems performing multiple invocations of hashing functions with inputs derived from the same secret data. We show that such constructions can be exploited efficiently by an attacker. We support such statements with experiments on SHA-256 based HMAC and on SHAKE-256 based PRF in Kyber's encryption routine. We also show that increasing Kyber's security parameters implies weaker security against the proposed SASCA targeting the shared key.
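
One instance of "multiple invocations of hashing functions with inputs derived from the same secret data" is the standard HMAC construction itself, written out below (the well-known definition, not the paper's attack code): the same secret key is absorbed twice, once XORed with ipad and once with opad.

    import hashlib, hmac

    def hmac_sha256(key: bytes, msg: bytes) -> bytes:
        block = 64                                   # SHA-256 block size in bytes
        if len(key) > block:
            key = hashlib.sha256(key).digest()
        key = key.ljust(block, b"\x00")
        ipad = bytes(k ^ 0x36 for k in key)          # inner key derived from the secret
        opad = bytes(k ^ 0x5C for k in key)          # outer key derived from the same secret
        inner = hashlib.sha256(ipad + msg).digest()
        return hashlib.sha256(opad + inner).digest()

    # sanity check against the standard library implementation
    assert hmac_sha256(b"secret", b"data") == hmac.new(b"secret", b"data", hashlib.sha256).digest()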
WenBin Hsieh
ePrint Report
In 2016, NIST announced an open competition with the goal of finding and standardizing a suitable quantum-resistant cryptographic algorithm, with the standard to be drafted in 2023. These algorithms aim to implement post-quantum secure key encapsulation mechanisms (KEM) and digital signatures. However, the proposed algorithms do not consider authentication and are vulnerable to attacks such as man-in-the-middle. In this paper, we propose an authenticated key exchange algorithm to solve the above problems and improve usability. The proposed algorithm combines the learning with errors (LWE) problem and the elliptic curve discrete logarithm problem to provide the required security goals. As forward security is a desirable property in a key exchange protocol, an ephemeral key pair is designed so that the compromise of a long-term secret does not affect the security of past session keys. Moreover, the exchange steps required by the algorithm are very streamlined and can be completed with only two handshakes. We also use the random oracle model to prove the correctness and security of the proposed scheme. The performance analysis demonstrates the effectiveness of the proposed scheme. We believe that the novel approach introduced in this algorithm opens several doors for innovative applications of digital signatures in KEMs.
Mengce Zheng
ePrint Report
In this paper, we focus on the common prime RSA variant and introduce a novel investigation into the partial key exposure attack targeting it. We explore the vulnerability of this RSA variant, which employs two common primes $p$ and $q$ defined as $p=2ga+1$ and $q=2gb+1$ for a large prime $g$. Previous cryptanalysis of common prime RSA has primarily focused on the small private key attack. In our work, we delve deeper into the realm of partial key exposure attacks by categorizing them into three distinct cases. We are able to identify weak private keys that are susceptible to partial key exposure by using the lattice-based method for solving simultaneous modular univariate linear equations. To validate the effectiveness and soundness of our proposed attacks, we conduct experimental evaluations. Through these examinations, we demonstrate the validity and practicality of the proposed partial key exposure attacks on common prime RSA.
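
The key structure exploited here is that both p - 1 and q - 1 share the large factor 2g. A toy common prime RSA instance can be generated as follows (small illustrative parameters of our choosing; a real instance would use cryptographic sizes and additional conditions such as gcd(a, b) = 1).

    def is_prime(m):
        # trial division, adequate only for this toy-sized demo
        if m < 2:
            return False
        i = 2
        while i * i <= m:
            if m % i == 0:
                return False
            i += 1
        return True

    g = 10007                                  # the shared ("common") prime, toy size

    def pick(start):
        # find the smallest a >= start such that 2*g*a + 1 is prime
        a = start
        while True:
            cand = 2 * g * a + 1
            if is_prime(cand):
                return cand, a
            a += 1

    p, a = pick(2)
    q, b = pick(a + 1)
    N = p * q
    # both p - 1 and q - 1 are divisible by 2g, the structure the attack exploits
    print(p, q, (p - 1) % (2 * g) == 0, (q - 1) % (2 * g) == 0)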
Julius Hermelink, Kai-Chun Ning, Emanuele Strieder
ePrint Report
NIST has released the draft standard for ML-KEM, and ML-KEM is actively used in several widely-distributed applications. Thus, widespread use of ML-KEM in the embedded world has to be expected in the near future. This makes security against side-channel attacks a pressing matter.

Several side-channel attacks have previously been proposed, and one line of research has been attacks against the comparison step of the FO-transform. These attacks construct a decryption failure oracle using a side-channel. A recent work published at TCHES 2022 stresses the need for higher-order masked comparisons by presenting a horizontal attack and proposes a t-probing secure comparison operation. A subsequent work by D’Anvers, Van Beirendonck, and Verbauwhede improves upon the performance of several previous proposals.

In this work, we show that the latter masked comparison suffers from weaknesses similar to those identified in the former. We first propose an approximate template attack that requires only a very low number of traces for profiling and has an exceptionally high noise tolerance. We show that the profiling phase is not necessary and can be replaced by a vertical analysis of the distribution of certain points of interest without knowledge of the targeted values. Finally, we explain how a horizontal attack may construct a decryption failure oracle from a single trace.

We provide a leakage model of the targeted operations, which is based on the noisy Hamming weight model. Our evaluations are carried out on a physical device to stress the practicality of our attack. In addition, we simulate the attacks to determine the measurement noise levels that can be handled. We discuss the underlying causes for our attack, the difficulty of securing the Fujisaki-Okamoto transform in ML-KEM, and draw conclusions about the (in-)sufficiency of t-probing security in this context.
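
The noisy Hamming weight model mentioned above is commonly simulated as HW(v) plus Gaussian noise. A generic sketch of such a simulation (ours, not the authors' measurement setup; sigma is an illustrative parameter) is:

    import random

    def hw(v, bits=16):
        return bin(v & ((1 << bits) - 1)).count("1")

    def leak(value, sigma):
        # noisy Hamming weight model: HW(value) plus Gaussian noise of std dev sigma
        return hw(value) + random.gauss(0.0, sigma)

    trace = [leak(v, sigma=1.0) for v in [0x0001, 0x00FF, 0xFFFF]]
    print(trace)   # noisy versions of HW = 1, 8, 16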
Oren Ganon, Itamar Levi
ePrint Report
The selection of a Lightweight Cryptography (LWC) algorithm is crucial for resource-limited applications. The National Institute of Standards and Technology (NIST) leads this process, which involves a thorough evaluation of the algorithms’ cryptanalytic strength. Furthermore, careful consideration is given to factors such as algorithm latency, code size, and hardware implementation area. These factors are critical in determining the overall performance of cryptographic solutions at edge devices. We introduce CrISA-X, a set of Cryptography Instruction Set Architecture extensions designed to improve cryptographic latency on extendable processors. CrISA-X, classified as Generic-Atomic, Block-Specific and Procedure-Specific, leverages RISC processor hardware and a base ISA to effectively execute LWC algorithms. Our study aims to evaluate the execution efficiency of new single-cycle instruction extensions and tightly coupled multicycle instructions on extendable modular RISC processors. CrISA-X provides enhanced speed for various algorithms simultaneously while optimizing ISA adaptability, a feat yet to be accomplished. The extensions, diverse across several computation levels, are first specifically tailored for individual algorithms and sets of LWC algorithms, depending on performance, frequency, and area trade-offs. By diligently applying the Min-Max optimization technique, we have configured these extensions to achieve a delicate balance between performance, area, code size, etc. Our study presents empirical evidence of the performance enhancement achieved on a real, synthesized modular RISC processor. We offer a framework for creating optimized processor hardware and ISA extensions. The CrISA-X framework generally outperforms ISA extensions by delivering significant performance boosts of 3x to 17x while experiencing a relative area cost increase of +12% and +47% in LUTs, with respect to the instruction set category. Notably, as one important example, the utilization of the ASCON algorithm yields a 10x performance boost compared to the base ISA instruction implementation.
Sacha Servan-Schreiber
ePrint Report
In this paper, we build a framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security.

Our framework can be instantiated using a random oracle or any suitable Related-Key-Attack (RKA) secure pseudorandom function. We provide three instantiations of our framework:

1. an adaptively-secure construction in the random oracle model;
2. a selectively-secure construction under the DDH assumption; and
3. a selectively-secure construction under the assumption that one-way functions exist.

All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.