IACR News
Here you can see all recent updates to the IACR webpage.
23 August 2024
Archisman Ghosh, Dong-Hyun Seo, Debayan Das, Santosh Ghosh, Shreyas Sen
ePrint Report
Side-channel attacks (SCAs) remain a significant threat to the security of cryptographic systems in modern embedded devices. Even mathematically secure cryptographic algorithms, when implemented in hardware, inadvertently leak information through physical side-channel signatures such as power consumption, electromagnetic (EM) radiation, light emissions, and acoustic emanations. Exploiting these side channels significantly reduces the attacker’s search space.
In recent years, physical countermeasures have pushed the minimum traces-to-disclosure (MTD) beyond 1 billion, with signature attenuation being the first technique to reach this mark. Signature attenuation has traditionally relied on analog techniques; its digital counterpart achieves an MTD of only 20 million and requires additional methods for high resilience. We focus on improving digital signature attenuation by an order of magnitude (MTD 200M).
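For intuition, a first-order model that is standard in the side-channel literature (and not taken from this report) relates MTD to the attenuation factor $A$: correlation power analysis needs a number of traces inversely proportional to the squared correlation, and attenuation shrinks the correlation linearly.

```latex
% Standard first-order CPA heuristic (side-channel folklore, not this
% paper's analysis): traces-to-disclosure scales inversely with the squared
% correlation \rho, and \rho scales with the SNR, which attenuation by a
% factor A reduces linearly.
\[
  \mathrm{MTD} \propto \frac{1}{\rho^{2}}, \qquad
  \rho \propto \mathrm{SNR} \propto \frac{1}{A}
  \quad\Longrightarrow\quad
  \mathrm{MTD} \propto A^{2}.
\]
% Under this model, a 10x jump in MTD (20M -> 200M) corresponds to roughly a
% sqrt(10) ~ 3.2x increase in effective attenuation.
```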
Ryan Seah, Daren Khu, Alexander Hoover, Ruth Ng
ePrint Report
Always Encrypted (AE) is a Microsoft SQL Server feature that allows clients to encrypt sensitive data inside client applications and ensures that the sensitive data is hidden from untrusted servers and database administrators. AE offers two column-encryption options: deterministic encryption (DET) and randomized encryption (RND). In this demo, we explore the security implications of using AE with both DET and RND encryption modes by running Leakage Abuse Attacks (LAAs) against the system. We demonstrate how an adversary could extract the necessary data to run a frequency analysis LAA against DET-encrypted columns and an LAA for Order-Revealing Encryption against RND-encrypted columns. We run our attacks using real-world datasets encrypted in a full-scale AE instance and demonstrate that a snooping server can recover over 95% of the rows in 8 out of 15 DET-encrypted columns, and 10 out of 15 RND-encrypted columns.
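To make the DET attack concrete, here is a minimal sketch of frequency analysis against a deterministically encrypted column. The data is toy data and this is not the paper's tooling; real LAAs must also handle ties and imperfect auxiliary distributions.

```python
# Minimal sketch of a frequency-analysis leakage-abuse attack against a
# deterministically encrypted column. Hypothetical toy data; not the paper's
# code.
from collections import Counter

def frequency_attack(ciphertexts, auxiliary_plaintexts):
    """Match each ciphertext to a plaintext by frequency rank.

    DET encryption maps equal plaintexts to equal ciphertexts, so the
    frequency histogram of the encrypted column mirrors the plaintext
    histogram. Sorting both by count and pairing ranks yields a guess.
    """
    ct_ranked = [c for c, _ in Counter(ciphertexts).most_common()]
    pt_ranked = [p for p, _ in Counter(auxiliary_plaintexts).most_common()]
    return dict(zip(ct_ranked, pt_ranked))

# Toy usage: 'enc_*' tokens stand in for AE's DET ciphertexts.
column = ["enc_a"] * 50 + ["enc_b"] * 30 + ["enc_c"] * 20
aux    = ["Smith"] * 48 + ["Jones"] * 33 + ["Brown"] * 19
print(frequency_attack(column, aux))
# -> {'enc_a': 'Smith', 'enc_b': 'Jones', 'enc_c': 'Brown'}
```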
Hiroki Okada, Rachel Player, Simon Pohmann, Christian Weinert
ePrint Report
The Doubly-Efficient Private Information Retrieval (DEPIR) protocol of Lin, Mook, and Wichs (STOC'23) relies on a Homomorphic Encryption (HE) scheme that is algebraic, i.e., whose ciphertext space has a ring structure that matches the homomorphic operations. While early HE schemes had this property, modern schemes introduced techniques to manage noise growth. This made the resulting schemes much more efficient, but also destroyed the algebraic property. In this work, we study algebraic HE with the goal of improving its performance and thereby also the performance of DEPIR.
We first prove a lower bound of $2^{\Omega(2^d)}$ for the ciphertext ring size of algebraic HE schemes that can evaluate a circuit of multiplicative depth $d$, thus demonstrating a gap between optimal algebraic HE and the existing schemes, which have a ciphertext ring size of $2^{O(2^{2d})}$. As we are unable to bridge this gap directly, we instead slightly relax the notion of being algebraic. This allows us to construct a practically more efficient relaxed-algebraic HE scheme. We then show that this also leads to a more efficient instantiation and implementation of DEPIR. We experimentally demonstrate run-time improvements of more than 4x and reduce memory queries by more than 8x compared to prior work.
Notably, our relaxed-algebraic HE scheme relies on a new variant of the Ring Learning with Errors (RLWE) problem that we call $\{0, 1\}$-CRT RLWE. We give a formal security reduction to standard RLWE, and estimate its concrete security. Both the $\{0, 1\}$-CRT RLWE problem and the techniques used for the reduction may be of independent interest.
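To see the size of the stated gap, one can simply evaluate both bounds at a small depth (a worked instance of the abstract's formulas, not an additional result):

```latex
% Evaluating the two bounds at multiplicative depth d = 3:
\[
  \text{lower bound: } 2^{\Omega(2^{d})} = 2^{\Omega(8)}, \qquad
  \text{existing schemes: } 2^{O(2^{2d})} = 2^{O(64)}.
\]
% Each extra level of depth doubles the exponent in the lower bound but
% squares the dominant term ($2^{2d} = (2^d)^2$) for existing schemes;
% relaxing the algebraic requirement is how the paper sidesteps this gap.
```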
Anyu Wang, Zhongxiang Zheng, Chunhuan Zhao, Zhiyuan Qiu, Guang Zeng, Xiaoyun Wang
ePrint Report
We propose Scloud+, a lattice-based key encapsulation mechanism (KEM) scheme.
The design of Scloud+ is informed by the following two aspects.
Firstly, Scloud+ is based on the hardness of algebraic-structure-free lattice problems, which avoids potential attacks exploiting algebraic structure.
Secondly, Scloud+ provides sets of lightweight parameters, which greatly reduce the computation and communication complexity while maintaining the required level of security.
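For readers unfamiliar with algebra-free lattice assumptions, the toy sketch below shows Regev-style encryption of a single bit from plain (unstructured) LWE, the kind of structure-free problem Scloud+ builds on. It is purely illustrative: the parameters are insecure and this is not the Scloud+ KEM.

```python
# Toy Regev-style encryption over an unstructured (algebra-free) LWE
# instance. Insecure parameters, for illustration only; NOT the Scloud+
# scheme itself.
import random

q, n, m = 3329, 8, 32          # toy modulus and dimensions (insecure)

def keygen():
    s = [random.randrange(q) for _ in range(n)]                  # secret
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]            # small error
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if random.random() < 0.5]           # random subset
    u = [sum(A[i][j] for i in S) % q for j in range(n)]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = (v - sum(u[j] * sk[j] for j in range(n))) % q            # bit*q/2 + noise
    return 1 if q // 4 < d < 3 * q // 4 else 0

pk, sk = keygen()
assert all(decrypt(sk, encrypt(pk, b)) == b for b in (0, 1, 1, 0))
```

Because no ring structure is imposed on the matrix A, there is no extra algebraic symmetry for an attacker to exploit; the price is larger keys, which is what the lightweight parameter sets aim to offset.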
Claude Carlet, Palash Sarkar
ePrint Report
We describe two new classes of functions which provide the presently best known trade-offs between low computational complexity, nonlinearity and (fast) algebraic immunity. The nonlinearity and (fast) algebraic immunity of the new functions substantially improve upon those properties of all previously known efficiently implementable functions. Appropriately chosen functions from the two new classes provide excellent solutions to the problem of designing filtering functions for use in the nonlinear filter model of stream ciphers, or in any other stream ciphers using Boolean functions for ensuring confusion. In particular, for $n\leq 20$, we show that there are functions in our first family whose implementation costs are significantly lower than those of all previously known functions achieving a comparable combination of nonlinearity and (fast) algebraic immunity. Given positive integers $\ell$ and $\delta$, it is possible to choose a function from our second family whose linear bias is provably at most $2^{-\ell}$, whose fast algebraic immunity is at least $\delta$ (based on a conjecture that is well supported by experimental results), and which can be implemented in time and space linear in $\ell$ and $\delta$. Further, the functions in our second family are built using homomorphic-friendly operations, making them well suited for transciphering applications.
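The quoted quantities are computable directly from a function's truth table. A minimal sketch (standard technique; the example function is arbitrary and not one of the paper's new families): nonlinearity via the fast Walsh-Hadamard transform, from which the linear bias $\max_a |W_f(a)|/2^{n+1}$ follows.

```python
# Nonlinearity of a Boolean function from its truth table via the fast
# Walsh-Hadamard transform. Standard technique; the example function below
# is arbitrary and NOT one of the paper's new families.
def walsh_spectrum(truth_table):
    w = [1 - 2 * b for b in truth_table]            # 0/1 -> +1/-1
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table):
    n = len(truth_table).bit_length() - 1
    return 2 ** (n - 1) - max(abs(v) for v in walsh_spectrum(truth_table)) // 2

# f(x0,x1,x2) = x0*x1 XOR x2: nonlinearity 2, linear bias 4/16 = 2^-2.
tt = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
print(nonlinearity(tt))                              # -> 2
```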
Xingwei Ren, Yongqiang Li, Mingsheng Wang
ePrint Report
Recently, Picnic3 has introduced several alternative LowMC instances, which has prompted a cryptanalysis competition for LowMC. In this paper, we provide new solutions to the competition for instances with full S-box layers under single-data complexity. First, we present a new guess-and-determine attack framework that achieves the best trade-off in complexity, while effectively enhancing two algorithms applicable to 2-round LowMC cryptanalysis. Next, we present a new meet-in-the-middle (MITM) attack framework for 2-/3-round LowMC, which gradually reduces the number of variables and narrows down the range of candidate keys in stages. As a result, our 3-stage MITM attacks have both lower time complexity and lower memory complexity than the best previous 2-round attacks proposed by Banik et al. at ASIACRYPT 2021, with memory reduced drastically by a factor of $ 2^{29.7} \sim 2^{70.4} $.
Borui GONG, Wang Fat Lau, Man Ho Au, Rupeng Yang, Haiyang Xue, Lichun Li
ePrint Report
We present an efficient zero-knowledge argument of knowledge system customized for the Paillier cryptosystem. Our system enjoys sublinear proof size, low verification cost, and acceptable proof generation effort, while also supporting batch proof generation/verification. Existing works specialized for the Paillier cryptosystem feature linear proof size and verification time. Using existing sublinear argument systems for generic statements (e.g., zk-SNARKs) results in unaffordable proof generation cost, since it involves translating the relations to be proven into a prohibitively large Boolean or arithmetic circuit over a prime-order field. Our system does not suffer from these limitations.
The core of our argument system is a constraint system defined over the ring of residue classes modulo a composite number, together with novel techniques tailored for arguing binary values in this setting. We then adapt the approach of Bootle et al. (EUROCRYPT 2016) to compile the constraint system into a sublinear argument system. Our constraint system is generic and can express typical relations in Paillier cryptosystems, including range proofs, correctness proofs, relationships between bits of a plaintext, relationships among plaintexts across multiple ciphertexts, and more. Our argument supports batch proof generation and verification, with the amortized cost outperforming the state-of-the-art protocol specialized for Paillier when the number of Paillier ciphertexts is on the order of hundreds.
We report an end-to-end prototype and conduct comprehensive experiments across multiple scenarios. Scenario 1 is Paillier with packing. When we pack 25.6K bits into 400 ciphertexts, a proof that all these ciphertexts are correctly computed is 17 times smaller and 3 times faster to verify compared with the naive implementation using 25.6K OR-proofs without packing. Furthermore, we can prove additional statements almost for free, e.g., that the sum of a subset of the witness bits is less than a threshold $t$. Another scenario is range proof. To prove that each plaintext in 200 Paillier ciphertexts is of size 256 bits, our proof size is 10 times smaller than the state-of-the-art. Our analysis suggests that our system is asymptotically more efficient than existing protocols and is highly suitable for scenarios involving a large number (more than 100) of Paillier ciphertexts, which is often the case in data analytics applications.
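The packing arithmetic in Scenario 1 is simple: 25.6K bits across 400 ciphertexts is 64 bits per Paillier plaintext. The toy sketch below illustrates only that packing step (tiny, insecure parameters; the argument system itself is not reproduced here):

```python
# Toy Paillier with bit-packing: 25.6K bits / 400 ciphertexts = 64 bits per
# ciphertext, matching the abstract's Scenario 1 arithmetic. Parameters are
# tiny and insecure; this only illustrates packing, not the argument system.
import random
from math import gcd

def keygen(bits=256):
    def prime(b):
        while True:
            p = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7)):  # Fermat test
                return p
    p, q = prime(bits // 2), prime(bits // 2)
    return p * q, (p - 1) * (q - 1)              # n, lambda

def encrypt(n, m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(n, lam, c):
    x = pow(c, lam, n * n)                       # (1+n)^{m*lam} = 1 + m*lam*n
    return ((x - 1) // n) * pow(lam, -1, n) % n

n, lam = keygen()
bits = [random.randint(0, 1) for _ in range(64)]
packed = sum(b << i for i, b in enumerate(bits))  # 64 bits in one plaintext
assert decrypt(n, lam, encrypt(n, packed)) == packed
```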
Yibo Cao, Shiyuan Xu, Xiu-Bo Chen, Siu-Ming Yiu
ePrint Report
With the widespread development of cloud storage, searching over encrypted data (without decryption) has become a crucial issue. Public key authenticated encryption with keyword search (PAEKS) enables retrieval over encrypted data while resisting inside keyword guessing attacks (IKGAs). However, most PAEKS schemes cannot support access control in multi-receiver models. To address this concern, attribute-based authenticated encryption with keyword search (ABAEKS) has been studied. Still, the access privilege for the ciphertext may change, and conventional cryptographic primitives are not resistant to quantum attacks, resulting in limited applicability and poor security for cloud storage. In this paper, we propose RABAEKS, the first post-quantum revocable attribute-based authenticated encrypted search scheme for multi-receiver cloud storage. Our design enables the cloud server to enforce access control over data receivers during the search process. For practical consideration, we further introduce a revocation mechanism for data receivers, which makes the access control more dynamic. We then define and rigorously analyze the security of our scheme. Through performance evaluations and comparisons, our ciphertext generation, trapdoor generation, and search algorithms are at least 20×, 1.67× and 1897× faster than prior art, respectively, which makes the scheme practical for cloud storage.
Chang Chen, Zelong Wu, Guoyu Yang, Qi Chen, Wei Wang, Jin Li
ePrint Report
Clinical trials are crucial in the development of new medical treatment methods. To ensure the correctness of clinical trial results, medical institutes need to collect and process large volumes of participant data, which has prompted research on privacy preservation and data reliability. However, existing solutions struggle to resolve the trade-off between the two due to the trust gap between the physical and digital worlds, limiting their practicality. To tackle these issues, we present Kalos, a novel authentication scheme for clinical trials. Kalos leverages diversified cryptographic tools, such as card-based anonymous credentials and zero-knowledge proofs, to achieve authentication with visual verification and selective disclosure of attributes. It provides properties such as unforgeability, blindness, privacy preservation, and human-binding that support hierarchical auditability and data de-duplication, enhancing the reliability of clinical trials. We then provide security and performance analyses of Kalos to show its potential for deployment in medical consumer electronics. The computational cost on the smartcard is independent of the number of certified attributes, and the total computational cost of Kalos is within tens of milliseconds for commonly used numbers of attributes.
David Gerault, Anna Hambitzer, Moritz Huppert, Stjepan Picek
ePrint Report
At CRYPTO 2019, A. Gohr introduced Neural Differential Cryptanalysis by applying deep learning to modern block cipher cryptanalysis. Surprisingly, the resulting neural differential distinguishers enabled a new state-of-the-art key recovery complexity for 11 rounds of SPECK32. As of May 2024, according to Google Scholar, Gohr’s article has been cited 178 times. The wide variety of targets, techniques, settings, and evaluation methodologies that appear in these follow-up works warrants a careful systematization of knowledge, which we provide in this paper. More specifically, we propose a taxonomy of these 178 publications and focus on the 50 that deal with differential neural distinguishers, reviewing and comparing them systematically. We then discuss two challenges for the field, namely comparability of neural distinguishers and scaling.
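As background for readers new to the area, the data-generation step common to Gohr-style neural distinguishers is easy to sketch: label ciphertext pairs by whether their plaintexts had a fixed input difference. The SPECK32/64 toy below uses the standard difference (0x0040, 0x0000); any binary classifier can then be trained on such samples. This is illustrative background, not code from the systematized works.

```python
# Data generation in the style of Gohr's CRYPTO 2019 pipeline: label 1 for
# SPECK32/64 ciphertext pairs whose plaintexts differ by the standard input
# difference, label 0 for random pairs. (Network architecture omitted.)
import random

MASK = 0xFFFF
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK

def expand_key(k, rounds):
    l, ks = list(k[1:]), [k[0]]
    for i in range(rounds - 1):
        l.append((ror(l[i], 7) + ks[i]) & MASK ^ i)   # SPECK key schedule
        ks.append(rol(ks[i], 2) ^ l[-1])
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x = ((ror(x, 7) + y) & MASK) ^ k              # SPECK round function
        y = rol(y, 2) ^ x
    return x, y

def make_sample(rounds=5, diff=(0x0040, 0x0000)):
    ks = expand_key([random.randrange(1 << 16) for _ in range(4)], rounds)
    x, y = random.randrange(1 << 16), random.randrange(1 << 16)
    label = random.randint(0, 1)
    if label:                                          # real pair
        x2, y2 = x ^ diff[0], y ^ diff[1]
    else:                                              # random pair
        x2, y2 = random.randrange(1 << 16), random.randrange(1 << 16)
    return encrypt(x, y, ks) + encrypt(x2, y2, ks), label

print(make_sample())   # ((c1x, c1y, c2x, c2y), label)
```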
20 August 2024
Ben Fisch, Arthur Lazzaretti, Zeyu Liu, Lei Yang
ePrint Report
Rollups are special applications on distributed state machines (aka blockchains) for which the underlying state machine only logs, but does not execute transactions. Rollups have become a popular way to scale applications on Ethereum and there is now growing interest in running rollups on Bitcoin. Rollups scale throughput and reduce transaction costs by using auxiliary machines that have higher throughput and lower cost of executing transactions than the underlying blockchain. State updates are periodically posted to the underlying blockchain and either verified directly through succinct cryptographic proofs (zk rollups) or can be challenged for a defined period of time in a verifiable way by third parties (optimistic rollups).
However, once computation is removed as a bottleneck, communication quickly becomes the new bottleneck. The critical service the underlying blockchain provides in addition to verification is data availability: that necessary data can always be recovered upon request. While broadcasting transaction data is one way to ensure this, it requires communication blowup linear in the number of participating nodes. Verifiable information dispersal (VID) systems achieve sublinear blowup in the same participation model and the same security assumptions as Ethereum, where all nodes have a strong public-key identity. It was not known how to do so in the same permissionless model as Bitcoin, where participants are unauthenticated and participation is dynamic.
We construct a VID system that is secure under the same model as Bitcoin, with one minimal additional requirement on the existence of reliable participants. Our system uses a state machine replication (SMR) protocol (e.g., Bitcoin) as a black box, and is therefore backward compatible. We implemented the system on top of Bitcoin core with the Regression Test Network (regtest), and our analysis shows that it reduces communication costs by more than 1,000x and latency by more than 10x.
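The erasure-coding core of information dispersal is worth seeing concretely: encode the data as a degree-$(k-1)$ polynomial, hand out $n$ evaluations, and any $k$ of them suffice to reconstruct. The sketch below shows only that core; real VID (including the paper's Bitcoin-compatible construction) additionally commits to the shares so that they can be verified.

```python
# Erasure-coding core of information dispersal: data = k field elements
# defining a degree-(k-1) polynomial; shares = n evaluations; any k shares
# reconstruct via Lagrange interpolation. Toy field; commitments (the "V"
# in VID) are deliberately omitted.
P = 2**31 - 1                                   # toy prime field

def eval_lagrange(shares, x):
    """Evaluate the unique degree-(k-1) polynomial through `shares` at x."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def disperse(data, n):
    pts = list(zip(range(1, len(data) + 1), data))  # data sits at x = 1..k
    return [(x, eval_lagrange(pts, x)) for x in range(1, n + 1)]

def reconstruct(shares, k):
    return [eval_lagrange(shares[:k], x) for x in range(1, k + 1)]

data = [11, 22, 33]
shares = disperse(data, n=6)
assert reconstruct(shares[3:], k=3) == data     # any 3 of 6 shares suffice
```

This is why dispersal beats broadcast: each node stores one share of size |data|/k instead of the whole data, giving the sublinear communication blowup the abstract refers to.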
Dmitrii Koshelev
ePrint Report
This article addresses the issue of efficient and safe (de)compression of $\mathbb{F}_{\!q}$-points on an elliptic curve $E$ over a highly $2$-adic finite field $\mathbb{F}_{\!q}$ of characteristic $5$ or greater. This issue was overlooked by cryptography experts, probably because, until recently, such fields were not in vogue, and so there was no difficulty (with rare exceptions) in finding a square $\mathbb{F}_{\!q}$-root. Nowadays, however, fields with large $2$-adicities have gained particular popularity in the ZK (zero-knowledge) community, even though $\sqrt{\cdot} \in \mathbb{F}_{\!q}$ must then be computed via more sophisticated square-root algorithms such as (Cipolla-Lehmer-)Müller's algorithm. The article explains why the classical $x$-coordinate (de)compression method based on Müller's algorithm often contains an Achilles' heel that allows a novel fault attack, which also fits the definition of a (D)DoS attack. In a nutshell, the trouble stems from the non-deterministic initialization of Müller's algorithm.
Moreover, the article suggests a countermeasure, namely an alternative (still simple) (de)compression method that completely prevents the discovered attack whenever the curve $E/\mathbb{F}_{\!q}$ is of even order. In particular, all twisted Edwards (i.e., Montgomery) curves are covered. The decompression stage of the new method likewise requires one square-root extraction in $\mathbb{F}_{\!q}$, but the corresponding quadratic residue is inherently equipped with additional information, providing an opportunity to launch Müller's algorithm immediately from its main deterministic part. In turn, the compression stage of the new method remains (almost) as free as in the $x$-coordinate method.
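The role of 2-adicity is easiest to see in the classical Tonelli-Shanks square-root algorithm, whose main loop runs for up to $s$ iterations when $q - 1 = 2^s m$ with $m$ odd; this cost is what pushes implementers of highly 2-adic fields toward Müller-style algorithms, whose randomized initialization the article identifies as the attack surface. (Illustrative sketch of Tonelli-Shanks, not of the article's method.)

```python
# Tonelli-Shanks square roots mod a prime q: the main loop length is
# governed by the 2-adicity s of q - 1, which ZK-friendly fields make large
# on purpose. Sketch for illustration only.
def tonelli_shanks(a, q):
    """Return x with x^2 = a (mod q), assuming a is a nonzero QR mod q."""
    s, m = 0, q - 1
    while m % 2 == 0:
        s, m = s + 1, m // 2                    # q - 1 = 2^s * m, m odd
    z = 2
    while pow(z, (q - 1) // 2, q) != q - 1:     # find a quadratic non-residue
        z += 1
    c, x, t = pow(z, m, q), pow(a, (m + 1) // 2, q), pow(a, m, q)
    while t != 1:                               # up to s iterations
        i, u = 0, t
        while u != 1:
            u, i = u * u % q, i + 1             # order of t is 2^i
        b = pow(c, 1 << (s - i - 1), q)
        x = x * b % q
        c = b * b % q
        t = t * c % q
        s = i
    return x

# BN254's scalar field has 2-adicity s = 28; a random QR exercises the loop.
q = 21888242871839275222246405745257275088548364400416034343698204186575808495617
assert pow(tonelli_shanks(pow(5, 2, q), q), 2, q) == pow(5, 2, q)
```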
Ward Beullens
ePrint Report
SNOVA is a multivariate signature scheme submitted to the NIST project for additional signature schemes by Cho, Ding, Kuan, Li, Tseng, Tseng, and Wang. With small key and signature sizes and good performance, SNOVA is one of the more efficient schemes in the competition, which makes it an important target for cryptanalysis.
In this paper, we observe that SNOVA implicitly uses a structured version of the ``whipping'' technique developed for the MAYO signature scheme. We show that the extra structure makes the construction vulnerable to new forgery attacks. Concretely, we formulate new attacks that reduce the security margin of the proposed SNOVA parameter sets by a factor between $2^{8}$ and $2^{39}$. Furthermore, we show that large fractions of public keys are vulnerable to more efficient versions of our attack. For example, for SNOVA-37-17-2, a parameter set targeting NIST's first security level, we show that roughly one out of every $500$ public keys is vulnerable to a universal forgery attack with bit complexity $2^{97}$, and roughly one out of every $143000$ public keys is even breakable in practice within a few minutes.
Michele Ciampi, Aggelos Kiayias, Yu Shen
ePrint Report
Order fairness in the context of distributed ledgers has recently received significant attention due to a range of attacks that exploit the reordering and adaptive injection of transactions (violating what is known as “input causality”). To address such concerns, an array of definitions for order fairness has been put forth, together with impossibility and feasibility results highlighting the difficulty and multifaceted nature of fairness in transaction serialization. Motivated by this, we present a comprehensive modeling of order fairness in the universal composability (UC) setting. Our results capture the different flavors of sender order fairness and input causality (arguably one of the most critical aspects of ledger transaction processing with respect to serialization attacks), and we parametrically illustrate the limits of feasibility for realistic constructions via an impossibility result. Our positive result, a novel distributed ledger protocol utilizing trusted enclaves, tightly complements our impossibility result, hence providing an optimal sender-order-fair ledger construction that is also eminently practical.
Weidan Ji, Zhedong Wang, Haoxiang Jin, Qi Wang, Geng Wang, Dawu Gu
ePrint Report
Achieving lattice-based identity-based encryption (IBE) with both efficiency and provable security in the standard model remains a challenging task and has drawn much attention. In this work, we introduce a new IBE construction from NTRU lattices in the standard model, based on the framework proposed by Agrawal, Boneh, and Boyen (EUROCRYPT 2010). In particular, by introducing the NTRU trapdoor and the Ring-LWE computational assumption, we remove a crucial restriction on the number of columns and obtain a more compact IBE construction in the standard model. Besides, we provide a concrete implementation and detailed performance results, with a comparison to previous works in terms of the security model and the underlying assumption, demonstrating the advantage of our construction.
Shweta Agrawal, Simran Kumari, Ryo Nishimaki
ePrint Report
The recent works of Ananth et al. (ITCS 2022) and Bartusek et al. (Eurocrypt 2023) initiated the study of pre-constrained cryptography which achieves meaningful security even against the system authority. In this work we significantly expand this area by defining several new primitives and providing constructions from simple, standard assumptions as follows.
- Pre-Constrained Encryption. We define a weaker notion of pre-constrained encryption (PCE) than that of Ananth et al., which nevertheless suffices for all known applications. We then provide constructions for general constraints, satisfying malicious security from a variety of assumptions including DDH, LWE, QR and DCR. Our LWE-based construction satisfies unconditional security against malicious authorities. In contrast, the construction by Ananth et al. supporting general constraints must rely (inherently) on strong assumptions like indistinguishability obfuscation.
- Pre-Constrained Static Functional Encryption. We provide a new definition for pre-constrained functional encryption in the so-called static setting (PCSFE), where the functions to be embedded in secret keys are specified during the setup phase. We provide constructions for PCSFE supporting general constraints, with security against malicious authorities. As in the case of PCE, our first construction can be instantiated from a variety of assumptions including DDH, LWE, QR and DCR. Our second, LWE-based construction satisfies unconditional security against malicious authorities. We also study succinctness in PCSFE, where the public key is sublinear in the number of function keys. We provide the first construction from LWE in the random oracle model. We additionally provide a heuristic construction in the standard model using lattices together with groups.
- Pre-Constrained Input Obfuscation. We define and provide the first construction of pre-constrained input obfuscation from the same assumptions as those used to instantiate PCSFE.
- Pre-Constrained Group Signatures. For pre-constrained group signatures (PCGS), we provide the first construction supporting general constraints, achieving unconditional security against malicious authorities from the LWE assumption. The only other construction, by Bartusek et al., supports the restricted set/database membership constraint and achieves computational security from the DDH assumption (and is therefore quantum-insecure).
Ngoc Khanh Nguyen, Gregor Seiler
ePrint Report
In this paper, we propose Greyhound, the first concretely efficient polynomial commitment scheme from standard lattice assumptions. At the core of our construction lies a simple three-round protocol for proving evaluations for polynomials of bounded degree $N$ with verifier time complexity $O(\sqrt{N})$. By composing it with the LaBRADOR proof system (CRYPTO 2023), we obtain a succinct proof of polynomial evaluation (i.e. polylogarithmic in $N$) that admits a sublinear verifier runtime.
To highlight the practicality of Greyhound, we provide implementation details including concrete sizes and runtimes. Notably, for large polynomials of degree at most $N=2^{30}$, the scheme produces evaluation proofs of size $53$KB, which is more than $10^4$ times smaller than the recent lattice-based framework called SLAP (EUROCRYPT 2024), and around three orders of magnitude smaller than Ligero (CCS 2017) and Brakedown (CRYPTO 2023).
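For intuition on how an $O(\sqrt{N})$ verifier is possible at all, here is the generic square-root decomposition that underlies many such polynomial commitment schemes (shown as background; the abstract does not spell out Greyhound's exact protocol):

```latex
% Generic square-root decomposition behind many O(\sqrt{N})-verifier
% polynomial commitments (for intuition; not necessarily Greyhound's exact
% protocol). Write N = m^2 and arrange the coefficient vector f into an
% m x m matrix F with F_{i,j} = f_{mi+j}. Then
\[
  f(x) \;=\; \sum_{i=0}^{m-1}\sum_{j=0}^{m-1} f_{mi+j}\, x^{mi+j}
        \;=\; \underbrace{(1,\, x^{m},\, \dots,\, x^{m(m-1)})}_{\text{length } m}
              \, F \,
              \underbrace{(1,\, x,\, \dots,\, x^{m-1})^{\top}}_{\text{length } m},
\]
% so evaluation reduces to an inner-product statement over two vectors of
% length m = \sqrt{N}, rather than over all N coefficients.
```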
Sohto Chiku, Keisuke Hara, Junji Shikata
ePrint Report
Identity-based matchmaking encryption (IB-ME) is an advanced encryption scheme that enables both the sender and the receiver to specify the other party's identity. In general, with respect to adversarial capabilities, encryption schemes come in two flavors of security: security against chosen plaintext attacks (CPA) and security against chosen ciphertext attacks (CCA). Compared to CPA security, CCA security captures active adversaries and has therefore been recognized as the more desirable notion.
In this paper, we investigate CCA security for IB-ME. Concretely, we provide the following three contributions. (i) A method to obtain a CCA-secure IB-ME scheme in the standard model based on our new primitive, called hierarchical IB-ME (HIB-ME), along with a strong one-time signature. (ii) A construction of HIB-ME based on hierarchical identity-based encryption and hierarchical identity-based signatures. (iii) A variant of the first method that yields an IB-ME scheme satisfying a slightly tweaked CCA security notion, based solely on a CPA-secure IB-ME scheme (without a strong one-time signature). We believe that this new type of CCA security is a reasonable one for IB-ME.
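For background, contribution (i) has the shape of the classical CHK paradigm (Canetti-Halevi-Katz), which lifts a hierarchical scheme to CCA security using a strong one-time signature; the sketch below is that generic recipe, and the paper's actual transform may differ in its details.

```latex
% Classical CHK-style recipe for lifting a hierarchical scheme to CCA
% security with a strong one-time signature (OTS); shown for intuition, the
% paper's exact transform may differ. To encrypt m for receiver identity id:
\[
  (vk, sk_{\mathrm{ots}}) \leftarrow \mathsf{OTS.KeyGen}(), \quad
  ct \leftarrow \mathsf{Enc}\big((id, vk),\, m\big), \quad
  \sigma \leftarrow \mathsf{OTS.Sign}(sk_{\mathrm{ots}}, ct),
\]
% and output (vk, ct, \sigma). Decryption verifies \sigma under vk, then
% delegates the key for id down to the child identity (id, vk) to decrypt.
% Strong unforgeability pins the challenge ciphertext to its one-time vk,
% so decryption queries under any other vk' cannot help the adversary.
```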
Rafaël del Pino, Shuichi Katsumata, Thomas Prest, Mélissa Rossi
ePrint Report
This paper presents Raccoon, a lattice-based signature scheme submitted to the NIST 2022 call for additional post-quantum signatures. Raccoon has the specificity of always being masked: concretely, all sensitive intermediate values are shared into $d$ parts. The main design rationale of Raccoon is to be easy to mask at high orders, and this dictated most of its design choices, such as the introduction of new algorithmic techniques for sampling small errors. As a result, Raccoon achieves a masking overhead of $O(d \log d)$, which compares favorably with the overhead of $O(d^2 \log d)$ observed when masking standard lattice signatures. In addition, we formally prove the security of Raccoon in the $t$-probing model: an attacker is able to probe $t \leq d - 1$ shares during each execution of the main algorithms (key generation, signing, verification). While for most cryptographic schemes the black-box $t$-probing security can be studied in isolation, in Raccoon this analysis is performed jointly. To that end, a bridge must be made between the black-box game-based EUF-CMA proof and the usual simulation proofs of the ISW model (CRYPTO 2003). We formalize an end-to-end masking proof by deploying the probing EUF-CMA notion introduced by Barthe et al. (EUROCRYPT 2018) and exhibiting the simulators of the non-interference properties (Barthe et al., CCS 2016). The proof is divided into three novel parts:
- a simulation proof in the ISW model that allows the dependency to be propagated to a restricted number of inputs and random coins,
- a game-based proof showing that the security of Raccoon with probes can be reduced to an instance of Raccoon with smaller parameters,
- a parameter study to ensure that the smaller instance is secure, using a robust generalization of the Rényi divergence.
While we apply our techniques to Raccoon, we expect that the algorithmic and proof techniques we introduce will be helpful for the design and analysis of future masking-friendly schemes.
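For readers unfamiliar with masking, the toy sketch below shows the basic idea Raccoon starts from: a secret is split into $d$ additive shares, and small errors are added share-wise, so no single share (or probe) ever sees the unmasked value. This illustrates plain arithmetic masking only, not Raccoon's actual error-sampling algorithms.

```python
# Arithmetic masking in a nutshell (illustration only; Raccoon's actual
# small-error sampling techniques are more involved). A secret x mod q lives
# as d shares summing to x; linear operations act share-wise.
import random

q, d = 12289, 4                                 # toy modulus, 4 shares

def share(x):
    parts = [random.randrange(q) for _ in range(d - 1)]
    parts.append((x - sum(parts)) % q)          # shares sum to x mod q
    return parts

def unshare(parts):
    return sum(parts) % q

def add_small_error(parts):
    """Each share adds its own small error; the total error (sum of d small
    terms) is never materialized in one place."""
    return [(p + random.choice([-1, 0, 1])) % q for p in parts]

x = 1234
xs = add_small_error(share(x))
e = (unshare(xs) - x) % q
assert e <= d or q - e <= d                     # total error stays small
```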
Fredrik Meisingseth, Christian Rechberger
ePrint Report
In the last fifteen years, there has been a steady stream of works combining differential privacy with various other cryptographic disciplines, particularly multi-party computation, yielding both practical and theoretical unification. As part of that unification, and owing to the rich definitional nature of both fields, many definitions of differential privacy have been proposed, adapted to the given use cases and cryptographic tools at hand, resulting in computational and/or distributed versions of differential privacy. In this work, we offer a systematization of such definitions, with a focus on those that are both computational and tailored to a multi-party setting. We order the definitions according to the distribution model and computational perspective, and propose a viewpoint on when given definitions should be seen as instantiations of the same generalized notion. The ordering highlights a clear, and sometimes strict, hierarchy between the definitions, where utility (accuracy) can be traded for stronger privacy guarantees or weaker trust assumptions. Further, we survey theoretical results relating the definitions to each other and extend some of them. We also discuss the state of well-known open questions and suggest new open problems to study. Finally, we consider aspects of the practical use of the different notions, hopefully also giving guidance to future applied work.
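For reference, the statistical baseline that the surveyed computational and distributed variants adapt is standard $(\varepsilon, \delta)$-differential privacy:

```latex
% Standard statistical differential privacy (the baseline notion, not a
% result of the survey): a mechanism M is (\varepsilon, \delta)-DP if for
% all neighbouring databases D \sim D' and all measurable sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S] + \delta .
\]
% Computational versions restrict the quantification to efficient
% distinguishers, and distributed versions restrict which parties' views the
% guarantee must hold against -- the two axes along which the definitions
% are ordered.
```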