International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

04 September 2024

National Sun Yat-sen University, Department of Computer Science and Engineering; Kaohsiung, Taiwan
Job Posting
Candidates for the applied cryptography domain must comprehend formal security analysis, secure coding, and effective security integration in application domains. Candidates for the ML/AI domain must comprehend search/optimization algorithms, classification, regression, and other essential aspects.

Closing date for applications:

Contact: Arijit Karati (arijit.karati@mail.cse.nsysu.edu.tw)

University of Surrey, UK
Job Posting

A position for a PhD student is available for the Cryptography and Cyber Security Research Group in the Computer Science Research Centre at University of Surrey, UK. The research topic of the PhD program is Post-Quantum Cryptography.

The successful candidate will primarily work on (but is not limited to) the following topics: classical and quantum cryptanalysis, cryptographic constructions based on lattices and other hard problems, and efficient algorithms for both attacks and secure implementations. The exact topics can be tailored to the candidate's background and interests.

This position is based at the internationally renowned Surrey Cyber Security Centre, which regularly publishes in top security venues, and consists of an international, diverse, and inclusive team with expertise in various aspects of security, privacy, and their applications.

Apply through the link https://www.surrey.ac.uk/fees-and-funding/studentships/phd-post-quantum-cryptography

Closing date for applications:

Contact: Chaoyun Li (c.li@surrey.ac.uk)

More information: https://www.surrey.ac.uk/fees-and-funding/studentships/phd-post-quantum-cryptography


02 September 2024

Christopher Harth-Kitzerow, Yongqin Wang, Rachit Rajat, Georg Carle, Murali Annavaram
ePrint Report
Privacy-Preserving Machine Learning (PPML) is one of the most relevant use cases for Secure Multiparty Computation (MPC). While private training of large neural networks such as VGG-16 or ResNet-50 on state-of-the-art datasets such as ImageNet is still out of reach, given the performance overhead of MPC, private inference is starting to achieve practical runtimes. However, we show that in contrast to plaintext machine learning, the use of GPU acceleration for both linear and nonlinear neural network layers is actually counterproductive in PPML and leads to performance and scaling penalties. This can be observed in slow ReLU performance, high GPU memory requirements, and inefficient batch processing in state-of-the-art PPML frameworks, which prevents them from scaling to an inference throughput of multiple images per second and to more than eight images per batch on ImageNet. To overcome these limitations, we propose PIGEON, an open-source framework for Private Inference of Neural Networks. PIGEON utilizes a novel ABG programming model that switches between \underline{A}rithmetic vectorization, \underline{B}itslicing, and \underline{G}PU offloading depending on the MPC-specific computation required by each layer. Compared to the state-of-the-art PPML framework Piranha, PIGEON achieves two orders of magnitude improvements in ReLU throughput, reduces peak GPU memory utilization by one order of magnitude, and scales better with large batch sizes. This translates to one to two orders of magnitude improvements in throughput for large ImageNet batch sizes (e.g. 192) and more than 70\% saturation of a 25 Gbit/s network.
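As a rough, self-contained illustration of the "B" (bitslicing) mode in the ABG model, the sketch below transposes a batch of plaintext values into one machine word per bit position, so that a single word-level Boolean operation acts on the whole batch at once. This is only an intuition aid with made-up parameters; PIGEON applies the idea to MPC-evaluated Boolean layers such as ReLU, which this plaintext sketch does not model.

```python
import random

WORD = 64          # number of values processed in parallel (one per bit lane)
BITS = 8           # width of each value in this toy example

def pack(values):
    """Transpose: return BITS words, where word j holds bit j of every value."""
    words = [0] * BITS
    for lane, v in enumerate(values):
        for j in range(BITS):
            if (v >> j) & 1:
                words[j] |= 1 << lane
    return words

def unpack(words):
    """Inverse transpose back to WORD individual values."""
    values = [0] * WORD
    for j in range(BITS):
        for lane in range(WORD):
            if (words[j] >> lane) & 1:
                values[lane] |= 1 << j
    return values

xs = [random.randrange(1 << BITS) for _ in range(WORD)]
ys = [random.randrange(1 << BITS) for _ in range(WORD)]

# Bitsliced AND: only BITS word-level operations act on all 64 lanes at once.
zs_sliced = unpack([a & b for a, b in zip(pack(xs), pack(ys))])
assert zs_sliced == [x & y for x, y in zip(xs, ys)]
```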
Tarun Yadav, Manoj Kumar
ePrint Report
In recent years, ML-based differential distinguishers have been explored and compared with classical methods. The complexity of a key-recovery attack on a block cipher is calculated using the probability of a differential distinguisher provided by classical methods. Since theoretical computations suffice to calculate the data complexity in these cases, the practical availability of computational resources places no restriction on attacking a block cipher with classical methods. However, ML-based differential cryptanalysis relies on a machine learning model that uses encrypted data to learn its features with the available compute power. This restricts the accuracy of ML distinguishers for larger numbers of rounds and for ciphers with large block sizes; the distinguisher can still be constructed in such cases, but its accuracy becomes very low. In this paper, we present a new approach to construct a differential distinguisher with high accuracy from an existing ML-based distinguisher of low accuracy. This approach outperforms all existing approaches with a similar objective. We demonstrate our method by constructing high-accuracy ML-based distinguishers for GIFT-128 and the ASCON permutation. For GIFT-128, the accuracy of the 7-round distinguisher is increased to 98.8% with $2^{9}$ data complexity. For ASCON, the accuracy of the 4-round distinguisher is increased to 99.4% with $2^{18}$ data complexity. We present the first ML-based distinguisher for 8 rounds of GIFT-128 using the differential-ML distinguisher presented at Latincrypt 2021. This distinguisher is constructed with 99.8% accuracy and $2^{18}$ data complexity.
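For background intuition only (this is not the paper's differential-ML construction): a distinguisher whose per-sample accuracy is only slightly above one half can be made highly reliable by combining many independent samples, for example by majority vote. The sketch below, with an assumed per-sample accuracy of 0.55, computes how the combined accuracy grows with the number of samples.

```python
import math

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a majority vote over n independent samples is correct,
    given per-sample accuracy p (use odd n to avoid ties)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A distinguisher that is right 55% of the time on one sample becomes far more
# reliable once a few hundred samples are combined.
for n in (1, 63, 255, 511):
    print(n, round(majority_accuracy(0.55, n), 4))
```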
Lorenzo Martinico, Markulf Kohlweiss
ePrint Report
A Trusted Execution Environment (TEE) is a new type of security technology, implemented by CPU manufacturers, which guarantees integrity and confidentiality of a restricted execution environment to any remote verifier. TEEs are deployed on various consumer and commercial hardware platforms and have been widely adopted as a component in the design of both theoretical and practical cryptographic protocols.

Within the provable security community, the use of TEEs as a setup assumption has converged to a standard ideal definition in the Universal Composability setting ($G_\mathsf{att}$, defined by Pass et al., Eurocrypt '17). However, it is unclear whether any real TEE design can actually implement this, or whether the diverse capabilities of today's TEE implementations will in fact converge to a single standard. Therefore, it is necessary for cryptographers and protocol designers to specify what assumptions are necessary for the TEE they are using to support the correctness and security of their protocol.

To this end, this paper provides a more careful treatment of trusted execution than the existing literature, focusing on the capabilities of enclaves and adversaries. Our goal is to provide meaningful patterns for comparing different classes of TEEs, particularly how a weaker TEE functionality can UC-emulate a stronger one given an appropriate mechanism to bridge the two. We introduce a new, "modular" definition of TEEs that captures a broad range of pre-existing functionalities defined in the literature while maintaining their high level of abstraction. While our goal is not directly to model implementations of specific commercial TEE providers, our modular definition provides a way to capture more meaningful and realistic hardware capabilities. We provide a language to characterise TEE capabilities along the following terms: a set of trusted features available to the enclave; the set of allowed attacks for malicious interactions with the enclaves; and the contents of attestation signatures. We then define various possible ideal modular $G_\mathsf{att}$ functionality instantiations that capture existing variants in the literature, and provide generic constructions to implement stronger enclave functionalities from an existing setup. Finally, we conclude the paper with a simple example of how to protect against rollback attacks given access to a trusted storage feature.
Renas Bacho, Benedikt Wagner
ePrint Report
Due to their simplicity, compactness, and algebraic structure, BLS signatures are among the most widely used signatures in practice. For example, used as multi-signatures, they are integral to Ethereum's proof-of-stake consensus. From the perspective of concrete security, however, BLS (multi-)signatures suffer from a security loss linear in the number of signing queries. It is well known that this loss cannot be avoided using current proof techniques.

In this paper, we introduce a new variant of BLS multi-signatures that achieves tight security while remaining fully compatible with regular BLS. In particular, our signatures can be seamlessly combined with regular BLS signatures, resulting in regular BLS signatures. Moreover, it can easily be implemented using existing BLS implementations in a black-box way. Our scheme is also one of the most efficient non-interactive multi-signatures, and in particular more efficient than previous tightly secure schemes. We demonstrate the practical applicability of our scheme by showing how proof-of-stake protocols that currently use BLS can adopt our variant for fully compatible opt-in tight security.
Jonas Bertels, Quinten Norga, Ingrid Verbauwhede
ePrint Report
Kyber was selected by NIST as a Post-Quantum Cryptography Key Encapsulation Mechanism standard. This means that the industry now needs to transition and adopt these new standards. One of the most demanding operations in Kyber is the modular arithmetic, making it a suitable target for optimization. This work offers a novel modular reduction design with the lowest area on Xilinx FPGA platforms. This novel design, through K-reduction and LUT-based reduction, utilizes 49 LUTs and 1 DSP as opposed to Xing and Li’s 2021 CHES design requiring 90 LUTs and 1 DSP for one modular multiplication. Our design is the smallest modular multiplier reported as of today.
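For readers unfamiliar with the arithmetic being optimized, the sketch below shows a generic Barrett-style reduction for the Kyber modulus q = 3329 in plain Python. It is not the paper's K-reduction or LUT-based circuit, only an illustration of the multiply-shift-subtract pattern that such hardware designs implement more cheaply.

```python
# Generic Barrett reduction for the Kyber modulus (illustration only).
Q = 3329
K = 24                      # 2^24 > Q^2, so products of two reduced values fit
M = (1 << K) // Q           # precomputed reciprocal floor(2^K / Q)

def barrett_reduce(a: int) -> int:
    """Reduce 0 <= a < 2^K modulo Q without a runtime division."""
    t = (a * M) >> K        # approximate quotient
    r = a - t * Q
    while r >= Q:           # at most a couple of correction steps
        r -= Q
    return r

# Sanity check against built-in modulo on products of reduced coefficients.
for x in (0, 1, 1234, 3328):
    for y in (0, 17, 3000, 3328):
        assert barrett_reduce(x * y) == (x * y) % Q
```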

01 September 2024

National Sun Yat-sen University, Department of Computer Science and Engineering; Kaohsiung, Taiwan
Job Posting
Applications are invited for MS and PhD positions at the Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan. The successful candidate will work under the guidance of Dr. Arijit Karati on diverse topics in applied cryptology and network security.
  • Candidates for the applied cryptography domain must comprehend formal security analysis, secure coding, and effective security integration in application domains.
  • Candidates for the ML/AI domain must comprehend search/optimization algorithms, classification, regression, and other essential aspects.

    Responsibilities: Apart from academic work, students must take part in several activities, in a group or individually, such as (but not limited to):
  • Design and implementation of security protocols.
  • Assessment of security and performance metrics.
  • Research meeting with the supervisor.

    Requirements: (02 MS and 02 PhD positions)
    Apart from the university's basic admission policies (https://cse.nsysu.edu.tw/?Lang=en), students are expected to meet the following key requirements:
  • Strong motivation for cryptography or AI security.
  • Knowledge of modern technology.
  • Knowledge of basic mathematics.
  • Knowledge of at least two programming languages, such as Python/Java/C/C++.
  • Master's thesis must match the respective research fields (for PhD positions).

    Scholarship:
  • Under the university policy.
  • Project funding (based on availability for MS/Ph.D. students).

    What students can expect:
  • Cooperation from the supervisor and labmates.
  • A rich culture of research and related activities.
  • Flexibility in communication, e.g., English.

    What the supervisor can expect:
    Apart from academic and research work, students are expected to demonstrate
  • Good moral character.
  • Hard work and dedication.

    Deadline for online application: September 30, 2024

    Closing date for applications:

    Contact: Arijit Karati (arijit.karati@mail.cse.nsysu.edu.tw)

    More information: https://www.canseclab.com/


    30 August 2024

    Jungho Moon, Zhanibek Omarov, Donghoon Yoo, Yongdae An, Heewon Chung
    ePrint Report
    Homomorphic encryption is a cryptographic technique that enables arithmetic operations to be performed on encrypted data. However, word-wise fully homomorphic encryption schemes, such as BGV, BFV, and CKKS schemes, only support addition and multiplication operations on ciphertexts. This limitation makes it challenging to perform non-linear operations directly on the encrypted data. To address this issue, prior research has proposed efficient approximation techniques that utilize iterative methods, such as functional composition, to identify optimal polynomials. These approximations are designed to have a low multiplicative depth and a reduced number of multiplications, as these criteria directly impact the performance of the approximated operations.

    In this paper, we propose a novel method, named adaptive successive over-relaxation (aSOR), to further optimize the approximations used in homomorphic encryption schemes. Our experimental results show that the aSOR method can significantly reduce the computational effort required for these approximations, achieving a reduction of 2–9 times compared to state-of-the-art methodologies. We demonstrate the effectiveness of the aSOR method by applying it to a range of operations, including sign, comparison, ReLU, square root, reciprocal of the m-th root, and division. Our findings suggest that the aSOR method can greatly improve the efficiency of homomorphic encryption for performing non-linear operations.
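As a rough illustration of the kind of iteration being accelerated (not the paper's aSOR itself), the sketch below grafts an assumed relaxation factor w onto the multiplication-only Newton iteration for 1/x that word-wise homomorphic schemes typically use; w = 1 recovers plain Newton, and all operations are additions and multiplications, hence encryption-friendly.

```python
# y <- y + w * y * (1 - x*y): relaxed Newton iteration for approximating 1/x.
# With w = 1 this is the standard quadratic Newton step; a larger w can shrink
# the error faster in the first iterations when the initial guess is poor.
def reciprocal(x, y0, w, iters):
    y = y0
    for _ in range(iters):
        y = y + w * y * (1 - x * y)
    return y

x = 0.37                    # target: 1/x = 2.7027...
y0 = 0.5                    # crude initial guess, e.g. from a low-degree polynomial
for w in (1.0, 1.3):
    # print the relative error |x*y - 1| after 2, 4, and 6 iterations
    print(w, [abs(reciprocal(x, y0, w, k) * x - 1) for k in (2, 4, 6)])
```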
    Shiyu Shen, Hao Yang, Wangchen Dai, Hong Zhang, Zhe Liu, Yunlei Zhao
    ePrint Report
    Digital signatures are fundamental building blocks in various protocols to provide integrity and authenticity. The development of quantum computing has raised concerns about the security guarantees afforded by classical signature schemes. CRYSTALS-Dilithium is an efficient post-quantum digital signature scheme based on lattice cryptography and has been selected as the primary algorithm for standardization by the National Institute of Standards and Technology. In this work, we present a high-throughput GPU implementation of Dilithium. For individual operations, we employ a range of computational and memory optimizations to overcome sequential constraints, reduce memory usage and IO latency, address bank conflicts, and mitigate pipeline stalls. This results in high and balanced compute throughput and memory throughput for each operation. In terms of concurrent task processing, we leverage task-level batching to fully utilize parallelism and implement a memory pool mechanism for rapid memory access. We propose a dynamic task scheduling mechanism to improve multiprocessor occupancy and significantly reduce execution time. Furthermore, we apply asynchronous computing and launch multiple streams to hide data transfer latencies and maximize the computing capabilities of both CPU and GPU. Across all three security levels, our GPU implementation achieves over 160× speedups for signing and over 80× speedups for verification on both commercial and server-grade GPUs. This achieves microsecond-level amortized execution times for each task, offering a high-throughput and quantum-resistant solution suitable for a wide array of applications in real systems.
    Anca Nitulescu, Nikitas Paslis, Carla Ràfols
    ePrint Report
    In this work, we consider the setting where one or more users with low computational resources would like to outsource the task of proof generation for SNARKs to one external entity, named Prover. We study the scenario in which Provers have access to all statements and witnesses to be proven beforehand. We take a different approach to proof aggregation and design a new protocol that simultaneously reduces proving time and communication complexity, without going through recursive proof composition. We make two main contributions. First, we design FLIP, a communication-efficient folding scheme in which we apply the Inner Pairing Product Argument to fold R1CS instances of the same language into a single relaxed R1CS instance; any proof system for the relaxed R1CS language can then be applied to prove the final instance. Second, we build a novel variation of Groth16 for relaxed R1CS with the same communication complexity and two extra pairings for verification, with an adapted trusted setup. Compared to SnarkPack, a prior solution addressing scaling for multiple Groth16 proofs, our scheme improves prover complexity by orders of magnitude if we consider the total cost to generate the SNARK proofs one by one plus the aggregation effort. An immediate application of our solution is Filecoin, a decentralized storage network based on incentives that generates more than 6 million SNARKs for large circuits of 100 million constraints per day.
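For intuition about the relaxed R1CS relation mentioned above, the sketch below performs a Nova-style folding of two toy R1CS instances over a prime field and checks that a random linear combination still satisfies the relaxed relation Az ∘ Bz = u·Cz + E. FLIP itself folds instances with an Inner Pairing Product Argument and differs in detail; the constraint system, field, and challenge here are made up for illustration.

```python
P = 2**61 - 1   # a Mersenne prime, used as the toy field modulus

def matvec(M, z):   return [sum(a * b for a, b in zip(row, z)) % P for row in M]
def hadamard(x, y): return [(a * b) % P for a, b in zip(x, y)]
def add(x, y):      return [(a + b) % P for a, b in zip(x, y)]
def scale(c, x):    return [(c * a) % P for a in x]

def satisfied(A, B, C, z, u, E):
    """Relaxed R1CS relation: Az o Bz == u * Cz + E (entry-wise)."""
    return hadamard(matvec(A, z), matvec(B, z)) == add(scale(u, matvec(C, z)), E)

# Toy system with one constraint z0 * z1 = z2 and three variables.
A, B, C = [[1, 0, 0]], [[0, 1, 0]], [[0, 0, 1]]

# Two plain R1CS witnesses (u = 1, E = 0): 3*5 = 15 and 7*11 = 77.
z1, u1, E1 = [3, 5, 15], 1, [0]
z2, u2, E2 = [7, 11, 77], 1, [0]
assert satisfied(A, B, C, z1, u1, E1) and satisfied(A, B, C, z2, u2, E2)

# Fold at a random challenge r; the cross term T absorbs the mixed products.
r = 123456789
Az1, Bz1, Cz1 = matvec(A, z1), matvec(B, z1), matvec(C, z1)
Az2, Bz2, Cz2 = matvec(A, z2), matvec(B, z2), matvec(C, z2)
T = [(a1 * b2 + a2 * b1 - u1 * c2 - u2 * c1) % P
     for a1, b1, c1, a2, b2, c2 in zip(Az1, Bz1, Cz1, Az2, Bz2, Cz2)]

z = add(z1, scale(r, z2))
u = (u1 + r * u2) % P
E = add(add(E1, scale(r, T)), scale((r * r) % P, E2))
assert satisfied(A, B, C, z, u, E)   # the folded instance is again satisfying
```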
    Sabyasachi Dey, Gregor Leander, Nitin Kumar Sharma
    ePrint Report
    In this paper, we present an improved attack on the stream cipher Salsa20. Our improvements are based on two technical contributions. First, we make use of the distribution of a linear combination of several random variables that are derived from different differentials and explain how to exploit this to improve the attack complexity. Second, we study and exploit how to optimally choose the actual values for the so-called probabilistic neutral bits. Because of the limited influence of these key bits on the computation, the usual attack approach fixes them to a constant value, often zero for simplicity. As we show, despite the fact that their influence is limited, the constant can be chosen in significantly better ways, and intriguingly, zero is the worst choice. Using this, we propose the first-ever attack on 7.5 rounds of the $128$-bit key version of Salsa20. We also provide improvements in the attacks against 8 rounds of the $256$-bit key version and 7 rounds of the $128$-bit key version of Salsa20.
    Benedikt Wagner, Arantxa Zapico
    ePrint Report
    Data availability sampling allows clients to verify availability of data on a peer-to-peer network provided by an untrusted source. This is achieved without downloading the full data by sampling random positions of the encoded data.

    The long-term vision of the Ethereum community includes a comprehensive data availability protocol using polynomial commitments and tensor codes. As the next step towards this vision, an intermediate solution called PeerDAS is about to be integrated, to bridge the way to the full protocol. With PeerDAS soon becoming an integral part of Ethereum's consensus layer, understanding its security guarantees is essential.

    This document aims to describe the cryptography used in PeerDAS in a manner accessible to the cryptographic community, encouraging innovation and improvements, and to explicitly state the security guarantees of PeerDAS.
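As a back-of-envelope illustration of why sampling a handful of positions suffices (with assumed parameters, not PeerDAS's actual ones): with a rate-1/2 erasure code, the data remains recoverable unless more than half of the encoded symbols are withheld, so a client whose s random samples are all answered is fooled with probability at most (1/2)^s.

```python
def undetected_prob(withheld_fraction: float, samples: int) -> float:
    """Probability that all uniform sample queries land on available symbols."""
    return (1.0 - withheld_fraction) ** samples

RATE = 0.5   # toy rate-1/2 code: more than 50% must be withheld to block recovery
for s in (10, 30, 75):
    # worst case for the client: adversary withholds just over the recovery threshold
    print(s, undetected_prob(1 - RATE, s))
```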
    Christian Badertscher, Fabio Banfi, Jesus Diaz
    ePrint Report
    Self-Sovereign Identity (SSI) empowers individuals and organizations with full control over their data. Decentralized identifiers (DIDs) are at its center, where a DID contains a collection of public keys associated with an entity, and further information to enable entities to engage via secure and private messaging across different platforms. A crucial stepping stone is DIDComm, a cryptographic communication layer that is in production with version 2. Due to its widespread and active deployment, a formal study of DIDComm is highly overdue.

    We present the first formal analysis of DIDComm’s cryptography, and formalize its goal of (sender-) anonymity and authenticity. We follow a composable approach to capture its security over a generic network, formulating the goal of DIDComm as a strong ideal communication resource. We prove that the proposed encryption modes reach the expected level of privacy and authenticity, but leak beyond the leakage induced by an underlying network (captured by a parameterizable resource).

    We further use our formalism to propose enhancements and prove their security: first, we present an optimized algorithm that simultaneously achieves anonymity and authenticity, conforms to the DIDComm message format, and outperforms the current DIDComm proposal in both ciphertext size and computation time by almost a factor of 2. Second, we present a novel DIDComm mode that fulfills the notion of anonymity preservation, in that it never leaks more than the leakage induced by the network it is executed over. We finally show how to merge this new mode into our improved algorithm, obtaining an efficient all-in-one mode for full anonymity and authenticity.
    Biming Zhou, Haodong Jiang, Yunlei Zhao
    ePrint Report
    In the post-quantum migration of TLS 1.3, the ephemeral Diffie-Hellman key exchange must be replaced with a post-quantum key encapsulation mechanism (KEM). At EUROCRYPT 2022, Huguenin-Dumittan and Vaudenay [EC:HugVau22] demonstrated that KEMs with standard CPA security are sufficient for the security of the TLS 1.3 handshake. However, their result is only proven in the random oracle model (ROM), and, as the authors comment, their reduction is very much non-tight and not sufficient to guarantee security in practice due to the $O(q^6)$-loss, where $q$ is the number of the adversary's queries to random oracles. Moreover, in order to analyze the post-quantum security of the TLS 1.3 handshake with a KEM, it is necessary to consider security in the quantum ROM (QROM). Therefore, they leave the tightness improvement of their ROM proof and the QROM proof of such a result as an interesting open question.

    In this paper, we resolve this problem. We improve the ROM proof in [EC:HugVau22] from an $O(q^6)$-loss to an $O(q)$-loss with standard CPA-secure KEMs, which can be directly obtained from the underlying public-key encryption (PKE) scheme in CRYSTALS-Kyber. Moreover, we show that if the KEMs are constructed from rigid deterministic public-key encryption (PKE) schemes such as the ones in Classic McEliece and NTRU, this $O(q)$-loss can be further improved to an $O(1)$-loss. Hence, our reductions are sufficient to guarantee security in practice. According to our results, a CPA-secure KEM (which is more concise and efficient than the currently used CCA/1CCA-secure KEM) can be directly employed to construct a post-quantum TLS 1.3. Furthermore, we lift our ROM result into the QROM and prove for the first time that CPA-secure KEMs are also sufficient for the post-quantum TLS 1.3 handshake. In particular, the techniques introduced to improve reduction tightness in this paper may be of independent interest.
    Debasmita Chakraborty, Hosein Hadipour, Phuong Hoa Nguyen, Maria Eichlseder
    ePrint Report
    The impossible differential (ID) attack is one of the most important cryptanalytic techniques for block ciphers. There are two phases to finding an ID attack: searching for the distinguisher and building a key recovery upon it. Previous works only focused on automated distinguisher discovery, leaving key recovery as a manual post-processing task, which may lead to a suboptimal final complexity. At EUROCRYPT 2023, Hadipour et al. introduced a unified constraint programming (CP) approach based on satisfiability for finding optimal complete ID attacks in strongly aligned ciphers. While this approach was extended to weakly-aligned designs like PRESENT at ToSC 2024, its application to ARX and AndRX ciphers remained as future work. Moreover, this method only exploited ID distinguishers with direct contradictions at the junction of two deterministic transitions. In contrast, some ID distinguishers, particularly for ARX and AndRX designs, may not be detectable by checking only the existence of direct contradictions.

    This paper fills these gaps by extending Hadipour et al.'s method to handle indirect contradictions and adapting it for ARX and AndRX designs. We also present a similar method for identifying zero-correlation (ZC) distinguishers. Moreover, we extend our new model for finding ID distinguishers to a unified optimization problem that includes both the distinguisher and the key recovery for AndRX designs. Our method improves ID attacks and introduces new distinguishers for several ciphers, such as SIMON, SPECK, Simeck, ChaCha, Chaskey, LEA, and SipHash. For example, we achieve a one-round improvement in the ID attacks against SIMON-64-96, SIMON-64-128, SIMON-128-128, SIMON-128-256 and a two-round improvement in the ID attacks against SIMON-128-192. These results significantly contribute to our understanding of the effectiveness of automated tools in the cryptanalysis of different design paradigms.
    Lynn Engelberts, Simona Etinski, Johanna Loyer
    ePrint Report
    Sieving using near-neighbor search techniques is a well-known method in lattice-based cryptanalysis, yielding the current best runtime for the shortest vector problem in both the classical [BDGL16] and quantum [BCSS23] setting. Recently, sieving has also become an important tool in code-based cryptanalysis. Specifically, using a sieving subroutine, [GJN23, DEEK24] presented a variant of the information-set decoding (ISD) framework, which is commonly used for attacking cryptographically relevant instances of the decoding problem. The resulting sieving-based ISD framework yields complexities close to the best-performing classical algorithms for the decoding problem such as [BJMM12, BM18]. It is therefore natural to ask how well quantum versions perform. In this work, we introduce the first quantum algorithms for code sieving by designing quantum variants of the aforementioned sieving subroutine. In particular, using quantum-walk techniques, we provide a speed-up over the best known classical algorithm from [DEEK24] and over a variant using Grover's algorithm [Gro96]. Our quantum-walk algorithm exploits the structure of the underlying search problem by adding a layer of locality-sensitive filtering, inspired by the quantum-walk algorithm for lattice sieving from [CL21]. We complement our asymptotic analysis of the quantum algorithms with numerical results, and observe that our quantum speed-ups for code sieving behave similarly as those observed in lattice sieving. In addition, we show that a natural quantum analog of the sieving-based ISD framework does not provide any speed-up over the first presented quantum ISD algorithm [Ber10]. Our analysis highlights that the framework should be adapted in order to outperform the state-of-the-art of quantum ISD algorithms [KT17, Kir18].
    Ori Mazor, Ori Rottenstreich
    ePrint Report
    Cryptocurrencies have gained high popularity in recent years, with over 9000 of them, including major ones such as Bitcoin and Ether. Each cryptocurrency is implemented on one blockchain or over several such networks. Recently, various technologies known as blockchain interoperability have been developed to connect these different blockchains and create an interconnected blockchain ecosystem. This paper aims to provide insights into the blockchain ecosystem and the connections between blockchains, which we refer to as the interoperability graph. Our approach is based on the analysis of the correlation between cryptocurrencies implemented over the different blockchains. We examine over 4800 cryptocurrencies implemented on 76 blockchains and their daily prices over a year. This experimental study has potential implications for decentralized finance (DeFi), including portfolio investment strategies and risk management.
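The sketch below illustrates the general methodology on synthetic data: compute pairwise correlations between per-blockchain daily return series and keep an edge wherever the correlation exceeds a threshold. The chain names, return model, and threshold are made up for illustration and are not the paper's dataset or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
days, chains = 365, 4
names = ["chainA", "chainB", "chainC", "chainD"]      # hypothetical labels

# One representative daily log-return series per chain (stand-in for aggregated tokens).
returns = rng.normal(0, 0.02, size=(chains, days))
returns[1] += 0.5 * returns[0]                         # make two chains co-move

corr = np.corrcoef(returns)                            # pairwise Pearson correlations
threshold = 0.3
edges = [(names[i], names[j], round(float(corr[i, j]), 2))
         for i in range(chains) for j in range(i + 1, chains)
         if corr[i, j] > threshold]
print(edges)   # edges of the toy "interoperability-style" graph
```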
    Ruiyang Li, Yiteng Sun, Chun Guo, Francois-Xavier Standaert, Weijia Wang, Xiao Wang
    ePrint Report
    Due to the ubiquitous requirements and the performance leap of the past decade, it has become feasible to execute garbling and secure computations in settings sensitive to side-channel attacks, including smartphones, IoT devices, and dedicated hardware, and the possibilities have been demonstrated by recent works. To maintain security in the presence of a moderate amount of leaked information about internal secrets, we investigate leakage-resilient garbling. We augment the classical privacy, obliviousness, and authenticity notions with leakages of the garbling function, and define their leakage-resilience analogues. We examine popular garbling schemes and unveil additional side-channel weaknesses due to wire label reuse and XOR leakages. We then incorporate the idea of label refreshing into the GLNP garbling scheme of Gueron et al. and propose a variant GLNPLR that provably satisfies our leakage-resilience definitions. Performance comparison indicates that GLNPLR is 60× (using AES-NI) or 5× (without AES-NI) faster than HalfGates garbling with second-order side-channel masking, for garbling the AES circuit when the bandwidth is 2 Gbps.
    Zhikang Xie, Mengling Liu, Haiyang Xue, Man Ho Au, Robert H. Deng, Siu-Ming Yiu
    ePrint Report
    The Paillier cryptosystem is renowned for its applications in electronic voting, threshold ECDSA, multi-party computation, and more, largely due to its additive homomorphism. In these applications, range proofs for the Paillier cryptosystem are crucial for maintaining security, because of the mismatch between the message space in the Paillier system and the operation space in application scenarios.

    In this paper, we present novel range proofs for the Paillier cryptosystem, specifically aimed at optimizing those for both the Paillier plaintext and the affine operation. We interpret encryptions and affine operations as commitments over the integers, as opposed to solely over $\mathbb{Z}_{N}$. Consequently, we propose direct range proofs for the updated cryptosystem, thereby eliminating the need for auxiliary integer commitments as required by the current state of the art. Our work yields significant improvements: in the range proof for the Paillier plaintext, our approach reduces communication overheads by approximately $60\%$, and computational overheads by $30\%$ and $10\%$ for the prover and verifier, respectively. In the range proof for the Paillier affine operation, our method reduces the bandwidth by $70\%$, and computational overheads by $50\%$ and $30\%$ for the prover and verifier, respectively. Furthermore, we demonstrate that our techniques can be utilized to improve the performance of threshold ECDSA and the DCR-based instantiation of the Naor-Yung CCA2 paradigm.
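For readers who want to see the homomorphisms the range proofs are about, the sketch below is a textbook Paillier implementation with toy parameters (not the paper's protocol and far from secure parameter sizes): multiplying ciphertexts adds plaintexts, and the affine operation Enc(m)^a · Enc(b) opens to a·m + b.

```python
import math, random

p, q = 1009, 1013                      # toy primes; real deployments use ~1536-bit primes
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                   # since L(g^lam mod n^2) = lam for g = n+1

def enc(m: int) -> int:
    """Paillier encryption with fresh randomness r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n     # L(u) = (u - 1) / n
    return (L * mu) % n

m1, m2, a, b = 42, 1337, 3, 7
c1, c2 = enc(m1), enc(m2)
assert dec((c1 * c2) % n2) == (m1 + m2) % n                      # additive homomorphism
assert dec((pow(c1, a, n2) * enc(b)) % n2) == (a * m1 + b) % n   # affine operation
```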