International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

  • via email
  • via RSS feed

04 October 2024

Ali Şah Özcan, Erkay Savaş
ePrint Report
HEonGPU is a high-performance library designed to optimize Fully Homomorphic Encryption (FHE) operations on Graphics Processing Units (GPUs). By leveraging the parallel processing capacity of GPUs, HEonGPU significantly reduces the computational overhead typically associated with FHE by executing complex operations concurrently. This allows for faster execution of homomorphic computations on encrypted data, enabling real-time applications in privacy-preserving machine learning and secure data processing. A key advantage of HEonGPU lies in its multi-stream architecture, which not only allows parallel processing of tasks to improve throughput but also eliminates the overhead of data transfers between the host device (i.e., the CPU) and the GPU. By efficiently managing data within the GPU using multiple streams, HEonGPU minimizes the need for repeated memory transfers, further enhancing performance. HEonGPU’s GPU-optimized design makes it ideal for large-scale encrypted computations, providing users with reduced latency and higher performance across various FHE schemes.
Viet Tung Hoang, Sanketh Menda
ePrint Report
There has been recent interest in developing and standardizing Robust Authenticated Encryption (Robust AE) schemes. NIST, for example, is considering an Accordion mode (a wideblock tweakable blockcipher), with Robust AE as a primary application. On the other hand, recent attacks and applications suggest that encryption needs to be committing. Indeed, committing security is also a design consideration in the Accordion mode. Yet it is unclear how to build a Robust AE scheme with committing security.

In this work, we give a modular solution to this problem. We first show how to transform any wideblock tweakable blockcipher TE into a Robust AE scheme SE that commits just the key. The overhead is cheap: just a few finite-field multiplications and blockcipher calls. If one wants to commit the entire encryption context, one can simply hash the context to derive a 256-bit subkey, and use SE with that subkey. The use of a 256-bit key in SE only means that it has to rely on AES-256; it doesn't require TE to have a 256-bit key.
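
The hash-the-context step can be sketched in a few lines. This is a toy illustration only, not the paper's construction: `se_encrypt` stands in for the key-committing scheme SE, and the field layout of the context is an assumption.

```python
import hashlib

def derive_subkey(key: bytes, nonce: bytes, associated_data: bytes) -> bytes:
    """Hash the full encryption context down to a 256-bit subkey.

    Toy sketch of the idea in the abstract: committing to the entire
    context reduces to committing to a single 256-bit key, because the
    hash binds (key, nonce, AD) to the subkey actually used by SE.
    """
    h = hashlib.sha256()
    for part in (key, nonce, associated_data):
        # Length-prefix each field so the encoding is injective.
        h.update(len(part).to_bytes(8, "big"))
        h.update(part)
    return h.digest()  # 32 bytes = 256 bits

def encrypt_committing(se_encrypt, key, nonce, ad, msg):
    """Run a key-committing scheme under the context-derived subkey.

    `se_encrypt` is a placeholder for any key-committing AE scheme
    taking a 256-bit key, such as the scheme SE from the paper.
    """
    subkey = derive_subkey(key, nonce, ad)
    return se_encrypt(subkey, nonce, msg)
```

The length prefixes matter: without them, the contexts `(b"ab", b"c")` and `(b"a", b"bc")` would hash to the same subkey, breaking context commitment.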

Our approach frees the Accordion designs from consideration of committing security. Moreover, it yields significant savings for several key-committing applications that don't want to pay the inherent hashing cost of full commitment.
Théophile Brézot, Chloé Hébant
ePrint Report
State-of-the-art database implementations offer a wide range of functionalities and impressive performance while supporting highly concurrent loads. However, they all rely on the server knowing the content of the database, which raises issues when sensitive information is stored on a server that cannot be trusted. Encrypting documents before sending them to a remote server solves the confidentiality issue at the cost of losing the keyword-search functionality. Cryptographic primitives such as Symmetric Searchable Encryption (SSE) schemes have been proposed to recover this functionality. However, no SSE construction properly defines correctness and successfully guarantees security in a concurrent setting. This paper takes a first step in this direction by recommending linearizability as the standard notion of correctness for concurrent SSE. We study the impact of concurrency on security and stress the need for finer-grained security models. Hence, we propose the log-security model, which guarantees security against an adversary having access to persistency-related logs, fixing a blind spot in the snapshot model while capturing security in a concurrent setting. We also build the first concurrent SSE solution proven correct and secure in a concurrent setting that can be implemented on top of any database. Our scheme proves to be fast thanks to optimal wait-free search operations and sequentially optimal, lock-free modifications, both executing in under one microsecond per binding, while incurring only a 13.3% storage expansion.
Pedram Hosseyni, Ralf Küsters, Tim Würtele
ePrint Report
FAPI 2.0 is a suite of Web protocols developed by the OpenID Foundation's FAPI Working Group (FAPI WG) for third-party data sharing and digital identity in high-risk environments. Even though the specifications are not completely finished, several important entities have started to adopt the FAPI 2.0 protocols, including Norway's national HelseID, Australia's Consumer Data Standards, as well as private companies like Authlete and Australia-based connectID; the predecessor FAPI 1.0 is in widespread use with millions of users.

The FAPI WG asked us to accompany the standardization of the FAPI 2.0 protocols with a formal security analysis to proactively identify vulnerabilities before widespread deployment and to provide formal security guarantees for the standards. In this paper, we report on our analysis and findings.

Our analysis is based on a detailed model of the Web infrastructure, the so-called Web Infrastructure Model (WIM), which we extend to be able to carry out our analysis of the FAPI 2.0 protocols including important extensions like FAPI-CIBA. Based on the (extended) WIM and formalizations of the security goals and attacker model laid out in the FAPI 2.0 specifications, we provide a formal model of the protocols and carry out a formal security analysis, revealing several attacks. We have worked with the FAPI WG to fix the protocols, resulting in several amendments to the specifications. With these changes in place, we have adjusted our protocol model and formally proved that the security properties hold true under the strong attacker model defined by the FAPI WG.
Taiga Hiroka, Tomoyuki Morimae
ePrint Report
In classical cryptography, one-way functions (OWFs) are the minimal assumption, while recent active studies have demonstrated that OWFs are not necessarily the minimum assumption in quantum cryptography. Several new primitives have been introduced, such as pseudorandom unitaries (PRUs), pseudorandom function-like state generators (PRFSGs), pseudorandom state generators (PRSGs), one-way state generators (OWSGs), one-way puzzles (OWPuzzs), and EFI pairs. They are believed to be weaker than OWFs, but they still imply many useful applications, such as private-key quantum money schemes, secret-key encryption, message authentication codes, digital signatures, commitments, and multiparty computations. Now that the possibility of quantum cryptography without OWFs has opened up, the most important goal in the field is to provide these primitives with concrete instantiations. For example, in classical cryptography, there are many instantiations of OWFs based on concrete hardness assumptions, such as the hardness of discrete logarithms or learning with errors. The study of generic primitives is justified by the existence of concrete instantiations. On the other hand, in quantum cryptography, all known constructions of those primitives are only from OWFs. We therefore have the following important open problem: Do they have instantiations based on some concrete hardness assumptions that will not imply OWFs? Ideally, the assumptions should be ones that are studied in contexts other than cryptography.

In this paper, we give a candidate answer to this question by showing that the quantum-average-hardness of the GapK problem implies the existence of OWPuzzs. The GapK problem is a promise problem asking to decide whether a given bit string has small Kolmogorov complexity or not. Its quantum-average-hardness means that an instance is sampled from a quantum-polynomial-time-samplable distribution, and no quantum-polynomial-time algorithm can solve the problem with high probability.
As far as we know, this is the first time that a ``Microcrypt'' primitive has been constructed based on concrete hardness assumptions that do not seem to imply OWFs. Moreover, the assumptions are studied in contexts other than cryptography, especially in the field of meta-complexity. (Note: During the preparation of this manuscript, Khurana and Tomer \cite{cryptoeprint:2024/1490} uploaded a concurrent work.)
Maggie Yongqi Guan, Yaman Yu, Tanusree Sharma, Molly Zhuangtong Huang, Kaihua Qin, Yang Wang, Kanye Ye Wang
ePrint Report
Stablecoins, a type of cryptocurrency pegged to another asset to maintain a stable price, have become an important part of the cryptocurrency ecosystem. Prior studies have primarily focused on examining the security of stablecoins from technical and theoretical perspectives, with limited investigation into users’ risk perceptions and security behaviors in stablecoin practices. To address this research gap, we conducted a mixed-method study that included constructing a stablecoin interaction framework based on the literature, which informed the design of our interview protocol, semi-structured interviews (n=21), and Reddit data analysis (9,326 posts). We found that participants see stable value and regulatory compliance as key security advantages of stablecoins over other cryptocurrencies. However, participants also raised concerns about centralization risks in fiat-backed stablecoins, perceived challenges in crypto-backed stablecoins due to limited reliance on fully automated execution, and confusion regarding the complex mechanisms of algorithmic stablecoins. We proposed improving user education and optimizing mechanisms to address these concerns and promote the safer use of stablecoins.
Janik Huth, Antoine Joux
ePrint Report
In this paper, we introduce a new method to construct a signature scheme based on the subfield bilinear collision problem published at Crypto 2024. We use techniques based on vector oblivious linear evaluation (VOLE) to significantly improve the running time and signature size of the scheme compared to the MPC-in-the-head version.
Tomoyuki Morimae, Yuki Shirakawa, Takashi Yamakawa
ePrint Report
Quantum computational advantage refers to the existence of computational tasks that are easy for quantum computers but hard for classical ones. Unconditionally showing quantum advantage is beyond our current understanding of complexity theory, and therefore some computational assumptions are needed. Which complexity assumption is necessary and sufficient for quantum advantage? In this paper, we show that inefficient-verifier proofs of quantumness (IV-PoQ) exist if and only if classically-secure one-way puzzles (OWPuzzs) exist. As far as we know, this is the first time a complete cryptographic characterization of quantum advantage has been obtained. IV-PoQ capture various types of quantum advantage previously studied, such as sampling advantage and searching advantage. Previous work [Morimae and Yamakawa 2024] showed that IV-PoQ can be constructed from OWFs, but a construction of IV-PoQ from weaker assumptions was left open. Our result solves this open problem. OWPuzzs are among the most fundamental quantum cryptographic primitives, implied by many quantum cryptographic primitives weaker than one-way functions (OWFs). The equivalence between IV-PoQ and classically-secure OWPuzzs therefore highlights that if there is no quantum advantage, then these fundamental primitives do not exist. The equivalence also means that quantum advantage is an example of an application of OWPuzzs. Except for commitments, no application of OWPuzzs was known before. Our result shows that quantum advantage is another application of OWPuzzs. Moreover, it is the first quantum computation classical communication (QCCC) application of OWPuzzs.
Xingye Lu, Jingjing Fan, Man Ho Au
ePrint Report
In this paper, we introduce the notion of relaxed lattice-based programmable hash function (RPHF), which is a novel variant of lattice-based programmable hash functions (PHFs). Lattice-based PHFs, together with preimage trapdoor functions (TDFs), have been widely utilized (implicitly or explicitly) in the construction of adaptively secure identity-based encryption (IBE) schemes. The preimage length and the output length of the underlying PHF and TDF together determine the user secret key and ciphertext lengths of the IBE schemes. However, the current lattice-based PHF definition imposes the restriction that the preimage length of TDF in the IBE schemes cannot be too short, hindering the utilization of size-efficient NTRU TDF. To overcome this hurdle, RPHF relaxes the hash key distribution requirement in the definition of PHF from statistical indistinguishability to computational indistinguishability. This relaxation eliminates limitations on the preimage length of underlying TDFs in IBE, enabling the construction of IBEs from NTRU TDFs. We introduce two instantiations of RPHF: the first produces a hash output length of $2$ ring elements, with a hash key size linear to the input length, and the second yields an output length of $14$ ring elements, with a hash key size proportional to the square root of the input length. Building upon these RPHF instantiations, we propose two adaptively secure lattice-based IBE schemes with ciphertext lengths of $5$ and $17$ ring elements and user secret key lengths of $4$ and $16$ ring elements, respectively. The length of the IBE master public key is roughly equivalent to the size of the hash key of the underlying RPHF. In comparison to existing IBE constructions, our proposed schemes achieve a significant reduction (over an order of magnitude) in ciphertext and secret key sizes. 
Notably, state-of-the-art constructions from ideal lattices exhibit secret key and ciphertext sizes over 100 ring elements, making our proposed schemes highly efficient. Moreover, the master public key sizes of our IBEs remain comparable.
Leo de Castro, Duhyeong Kim, Miran Kim, Keewoo Lee, Seonhong Min, Yongsoo Song
ePrint Report
We present a new and efficient method to obtain circuit privacy for lattice-based linearly homomorphic encryption (LHE). In particular, our method does not involve noise-flooding with exponentially large errors or iterative bootstrapping. As a direct result, we obtain a semi-honest oblivious linear evaluation (OLE) protocol with the same efficiency, reducing the communication cost of the prior state of the art by 50%. Consequently, the amortized time of our protocol improves on the prior work by 33% in a 100 Mbps network setting. Our semi-honest OLE is the first to achieve both concrete efficiency and asymptotic quasi-optimality. Together with an extension of the recent zero-knowledge proof of plaintext knowledge, our LHE yields actively-secure OLE with 2.7x lower communication than the prior work. When applied to Overdrive (Eurocrypt '18), an MPC preprocessing protocol, our method provides a 1.4x improvement in communication over the state of the art.
Jan Bormet, Sebastian Faust, Hussien Othman, Ziyan Qu
ePrint Report
In decentralized finance (DeFi), the public availability of pending transactions presents significant privacy concerns, enabling market manipulation through miner extractable value (MEV). MEV occurs when block proposers exploit the ability to reorder, omit, or include transactions, causing financial loss to users from frontrunning. Recent research has focused on encrypting pending transactions, hiding transaction data until block finalization. To this end, Choudhuri et al. (USENIX '24) introduce an elegant new primitive called Batched Threshold Encryption (BTE) where a batch of encrypted transactions is selected by a committee and only decrypted after block finalization. Crucially, BTE achieves low communication complexity during decryption and guarantees that all encrypted transactions outside the batch remain private. An important shortcoming of their construction is, however, that it progresses in epochs and requires a costly setup in MPC for each batch decryption. In this work, we introduce a novel BTE scheme addressing the limitations by eliminating the need for an expensive epoch setup while achieving practical encryption and decryption times. Additionally, we explore the problem of how users can coordinate their transactions, which is crucial for the functionality of the system. Along the way, we present several optimizations and trade-offs between communication and computational complexity that allow us to achieve practical performance on standard hardware ($<2$ ms for encryption and $<440$ ms for decrypting $512$ transactions). Finally, we prove our constructions secure in a model that captures practical attacks on MEV-prevention mechanisms.
Fei Xu, Honggang Hu, Changhong Xu
ePrint Report
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes. This model yields a $2\kappa$-bit lower bound on ciphertexts for a broad class of garbling schemes. Since then, several methods have been developed that bypass this lower bound, albeit with a notable limitation: their reliance on specifically correlated input wire labels restricts their applicability across most gates. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme, in which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a boolean circuit. Following this work, Rosulek and Roy posed several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?''

In this paper, we propose Bitwise Garbling Schemes, a model that seamlessly incorporates the slicing-and-dicing technique. Our key finding is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Since Rosulek and Roy also posed another problem questioning the necessity of free-XOR, we explore constructions without free-XOR and prove a $2\kappa$-bit lower bound. Therefore, sacrificing compatibility with free-XOR does not lead to a more efficient scheme.
Albert Garreta, Ignacio Manzur
ePrint Report
We introduce two folding schemes for lookup instances: FLI and FLI+SOS. Both use a PIOP to check that a matrix has elementary basis vectors as rows, with FLI+SOS adding a twist based on Lasso’s SOS-decomposability.

FLI takes two lookup instances $\{\mathbf{a}_1\}, \{\mathbf{a}_2\}\subseteq\mathbf{t}$, and expresses them as matrix equations $M_i\cdot\mathbf{t}^\mathsf{T}=\mathbf{a}_i^\mathsf{T}$ for $i=1,2$, where each matrix $M_i\in\mathbb{F}^{m\times N}$ has rows which are elementary basis vectors in $\mathbb{F}^N$. Matrices that satisfy this condition are said to be in $R_{\mathsf{elem}}$. Then, a folding scheme for $R_{\mathsf{elem}}$ into a relaxed relation is used, which combines the matrices $M_1, M_2$ as $M_1+\alpha M_2$ for a random $\alpha\in\mathbb{F}$. Finally, the lookup equations are combined as $(M_1+\alpha M_2)\cdot \mathbf{t}^{\mathsf{T}} = (\mathbf{a}_1+\alpha \mathbf{a}_2)^\mathsf{T}$. In FLI, only the property that a matrix is in $R_{\mathsf{elem}}$ is folded, and this makes the FLI folding step the cheapest among existing solutions. The price to pay is in the cost for proving accumulated instances.
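
The folding step above can be illustrated with a toy example. The small prime field, table, and matrix encoding below are illustrative stand-ins for the paper's parameters; only the algebra $(M_1+\alpha M_2)\cdot\mathbf{t}^\mathsf{T} = (\mathbf{a}_1+\alpha\mathbf{a}_2)^\mathsf{T}$ is being demonstrated.

```python
import random

P = 2**61 - 1  # toy prime field F_P (illustrative choice)

def elementary_matrix(rows, N):
    """Build M in F^{m x N} whose i-th row is the basis vector e_{rows[i]}."""
    M = [[0] * N for _ in rows]
    for i, j in enumerate(rows):
        M[i][j] = 1
    return M

def matvec(M, t):
    """Compute M · t^T over F_P."""
    return [sum(mij * tj for mij, tj in zip(row, t)) % P for row in M]

def fold(M1, M2, a1, a2, alpha):
    """One folding step: combine two lookup instances with a random alpha."""
    M = [[(x + alpha * y) % P for x, y in zip(r1, r2)] for r1, r2 in zip(M1, M2)]
    a = [(x + alpha * y) % P for x, y in zip(a1, a2)]
    return M, a

# A table t and two lookup instances: each matrix row selects one table entry.
t = [10, 20, 30, 40]
M1 = elementary_matrix([2, 0], len(t))   # selects (t[2], t[0]) = (30, 10)
M2 = elementary_matrix([1, 3], len(t))   # selects (t[1], t[3]) = (20, 40)
a1, a2 = matvec(M1, t), matvec(M2, t)

alpha = random.randrange(P)
M, a = fold(M1, M2, a1, a2, alpha)
assert matvec(M, t) == a  # folded equation (M1 + alpha*M2)·t = a1 + alpha*a2
```

Note that the folded matrix $M_1+\alpha M_2$ no longer has elementary rows, which is why the accumulated relation is a relaxed one.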

FLI+SOS builds upon FLI to enable folding of large SOS-decomposable tables. This is achieved through a variation of Lasso's approach to SOS-decomposability, which fits FLI naturally. For comparison, we describe (for the first time to our knowledge) straightforward variations of Protostar and Proofs for Deep Thought that also benefit from SOS-decomposability. We see that for many reasonable parameter choices, and especially those arising from lookup-based zkVMs, FLI+SOS can concretely be the cheapest folding solution.
Joan Boyar, Simon Erfurth
ePrint Report
Folding schemes are an exciting new primitive, transforming the task of performing multiple zero-knowledge proofs of knowledge for a relation into performing just one zero-knowledge proof for the same relation, plus a number of cheap inclusion proofs. Recently, folding schemes have been used to amortize the cost associated with proving different statements to multiple distinct verifiers, which has various applications. We observe that for these uses, leaking information about the statements folded together can be problematic, yet this happens with previous constructions. Towards resolving this issue, we give a natural definition of privacy-preserving folding schemes and the security they should offer. To construct privacy-preserving folding schemes, we first define statement hiders, a primitive which might be of independent interest. In a nutshell, a statement hider hides an instance of a relation as a new instance in the same relation. The new instance is in the relation if and only if the initial instance is. With this building block, we can utilize existing folding schemes to construct a privacy-preserving folding scheme by first hiding each of the statements. Folding schemes allow verifying that a statement was folded into another statement, while statement hiders allow verifying that a statement was hidden as another statement.
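
A minimal sketch of the statement-hider idea, for the discrete-logarithm relation in a toy group: re-randomizing an instance yields a new instance that is in the relation if and only if the original is, and a witness for one maps to a witness for the other. This is only an illustration of the definition, not the paper's construction.

```python
import random

# Toy cyclic group: the order-11 subgroup of Z_23^*, generated by G = 4.
P, Q, G = 23, 11, 4

def hide(h):
    """Statement hider for the relation {((G, h), x) : h = G^x mod P}.

    Re-randomizes the instance h into h' = h * G^r. The new instance is
    in the relation iff the old one is, and a witness x for h maps to
    the witness (x + r) mod Q for h'.
    """
    r = random.randrange(Q)
    h_new = (h * pow(G, r, P)) % P
    return h_new, r  # r lets the witness holder update their witness

x = 7                      # witness
h = pow(G, x, P)           # original instance
h_new, r = hide(h)         # hidden instance
assert pow(G, (x + r) % Q, P) == h_new  # updated witness still works
```

Since r is uniform in Z_Q, the hidden instance h' is uniformly distributed in the group, independent of h, which is the hiding property the definition asks for.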
Karim Eldefrawy, Benjamin Terner, Moti Yung
ePrint Report
Time-lock puzzles are unique cryptographic primitives that use computational complexity to keep information secret for some period of time, after which security expires. This topic, while over 25 years old, is still in a state where foundations are not well understood: for example, current analysis techniques of time-lock primitives provide no sound mechanism to build composed multi-party cryptographic protocols which use expiring security as a building block. Further, some analyses employ idealizations and simulators of unrealistic computational power and treat them as acceptable, sound security arguments. Our goal with this short paper is to advocate for understanding what approaches may lead to sound modeling beyond idealization, and what approaches may, in fact, be hopeless at this task of sound modeling.

We explain in this paper how existing attempts at this subtle problem lack either composability, a fully consistent analysis, or functionality. The subtle flaws in the existing frameworks reduce to an impossibility result by Mahmoody et al., who showed that time-lock puzzles with super-polynomial gaps (between committer and solver) cannot be constructed from random oracles alone (or any repetitive computation where the next state is completely random given the prior state); yet still the analyses of algebraic puzzles today treat the solving process as if each step is a generic or random oracle. We point out that if the generation process relies on a trapdoor function that cannot be treated as a random oracle (to allow efficient generation while avoiding this impossibility result), then, to be consistent, the analysis of the solving process should also not treat such a trapdoor function (and its intermediate states) as a random oracle.

We also delineate additional issues with the proof techniques used for time-lock puzzles. Specifically, when a time-lock puzzle must retain privacy for some amount of time, the reduction should bound the running time of the simulator. A simulator that can ``simulate'' only when given an amount of time that, if given to an adversary, would allow that adversary to solve the puzzle, is not a valid security argument. We survey the adherence of various attempts to this principle, as well as the properties that different attempts achieve toward composition.
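
The generation-versus-solving asymmetry discussed above is concrete in the classic repeated-squaring puzzle of Rivest, Shamir, and Wagner: the generator uses the trapdoor (the factorization of n, hence phi(n)) to shortcut the computation, while the solver must perform T sequential squarings. A toy sketch, with illustrative small parameters:

```python
# RSW-style time-lock puzzle: compute x^(2^T) mod n.
p, q = 1000003, 1000033       # toy primes; real puzzles use large RSA moduli
n = p * q
phi = (p - 1) * (q - 1)       # the trapdoor: known only to the generator
T = 1000                      # number of sequential squarings (the "time")
x = 5                         # puzzle base, coprime to n

# Fast generation via the trapdoor: x^(2^T) = x^(2^T mod phi) mod n.
shortcut = pow(x, pow(2, T, phi), n)

# Slow solving: T sequential modular squarings, believed inherently serial.
y = x % n
for _ in range(T):
    y = (y * y) % n

assert y == shortcut  # both routes reach the same value
```

The point raised in the abstract is precisely about this gap: the generator's shortcut relies on a trapdoor structure, so an analysis of the solving process cannot consistently treat each squaring as a fresh random-oracle step.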

01 October 2024

Research Institute CODE, Universität der Bundeswehr Munich, Germany
Job Posting
We are looking for 1-2 bright researchers with strong interest and suitable experience in any of the following research areas:
  • Distributed cryptography: DKG, decentralised credentials with privacy properties
  • Advanced encryption: algorithmic techniques for FHE and SNARKs, updatable encryption
  • Secure computation: MPC techniques and protocol design, PSI
  • PQC techniques for any of the aforementioned areas
Candidates will lead our research in ongoing / upcoming funded projects across the domains of decentralized digital identity, computing over encrypted data, and advanced post-quantum secure encryption, and can work closely with existing PhD students.

They will work closely with members of the Privacy and Applied Cryptography (PACY) lab, led by Prof. Mark Manulis, and the Quantum-Safe and Advanced Cryptography (QuSAC) lab, led by Prof. Daniel Slamanig. Candidates will benefit from our modern infrastructure and the availability of funds to support their own research. Also, Munich is among the best places to live in Germany.

Positions are available for an immediate start (~58k to 74k EUR p.a. depending on qualifications and experience). Initial contracts are for 1.5 - 2 years. (Due to the nature of the funding, restrictions on eligibility may apply.)

Requirements:
  • Master's degree (or equivalent) or PhD in Mathematics, Cryptography, or Computer Science with excellent grades
  • Solid knowledge and demonstrable experience in any of the aforementioned research areas
  • Post-doc candidates must have a strong track record (ideally with publications at IACR conferences and/or the top 4 security conferences) and good academic writing and presentation skills
  • Experience with cryptographic implementations (desirable)
  • Proficiency in English (essential) and German (desirable but not essential)
Applications (cover letter, CV, transcripts, contacts for references) can be emailed to Prof. Mark Manulis. Applications will be processed continuously until the positions are filled.

Closing date for applications:

Contact: Mark Manulis (mark.manulis AT unibw.de)

More information: https://www.unibw.de/pacy-en/vacancies


30 September 2024

CryptoLux team, University of Luxembourg
Job Posting
The University of Luxembourg (UL) is seeking to hire a Post-doctoral fellow within the CryptoLux Group of the University of Luxembourg led by Prof. Alex Biryukov. The position is initially for two years, extendable up to five years. Yearly gross salary for a Postdoctoral Researcher at the UL is EUR 83K. The main application domains are:
  • Cryptanalysis and Design of Symmetric Primitives (for ex. Lightweight/Ultra-lightweight)
  • Algebraic and Arithmetization-friendly cryptography for ZK and MPC applications
  • Post-quantum cryptography
  • Privacy-enhancing technologies
The main responsibility of the successful candidate would be to:
  • Lead high-impact research in the fields of Cryptography and Privacy
  • Perform teaching activities in fields of expertise at bachelor, master and PhD levels (max 90 hours/year, less in practice)
  • Help with daily supervision of Ph.D and M.Sc. students
  • Participate in the writing of research grant proposals
Candidate profile:
  • PhD degree in Computer Science or related fields (e.g., applied mathematics, etc.)
  • Proven research track record (publications in A* venues in Cryptography or Information Security)
  • Great communication skills
  • Commitment, team working, interpersonal skills and a critical mind
  • Good written and oral skills in English
Deadline for applications: 15-Nov-2024. Starting date: 1-Feb-2025. Early application is highly encouraged, as applications will be processed upon receipt.

Closing date for applications:

Contact: http://emea3.mrted.ly/3qt46

More information: https://cryptolux.org/index.php/Vacancies

CryptoLux team, University of Luxembourg
Job Posting
The successful candidate will join the CryptoLux team led by Prof. Alex Biryukov. He or she will contribute to the research project "Advanced Cryptography for Finance and Privacy (CryptoFin)", which is funded by the Luxembourgish Fonds National de la Recherche (FNR) through the CORE program. The position is initially for one year, extendable up to five years. The yearly gross salary for a Postdoctoral Researcher at the UL is EUR 83K. Candidates with research interests in one or more of the following areas are particularly encouraged to apply:
  • Applied or symmetric cryptography
  • Blockchain cryptography, cryptoeconomics, DeFi
  • Anonymity and privacy, ZK, PETs
The main responsibility of the successful candidate would be to:
  • Conduct, publish and present research results at the top conferences/journals
  • Collaborate with the two Ph.D. students of the project
  • Attract funding in cooperation with academic and industrial partners
Deadline for applications: 31-October 2024. Starting date: 1-Jan-2025

Closing date for applications:

Contact: http://emea3.mrted.ly/3p6l5

More information: https://cryptolux.org/index.php/Vacancies

Gavin Cho, Georg Fuchsbauer, Adam O'Neill
ePrint Report
We show that the Schnorr signature scheme meets existential unforgeability under chosen-message attack (EUF-CMA) in the random oracle model (ROM) if the circular discrete-logarithm (CDL) assumption, a new, non-interactive variant of DL we introduce, holds in the underlying group. Our reduction is completely tight, meaning the constructed adversary against CDL has both essentially the same running time and success probability as the assumed forger. To our knowledge, we are the first to exhibit such a reduction. Previously, Bellare and Dai (INDOCRYPT 2020) showed the scheme is EUF-CMA in the ROM if their multi-base DL assumption holds in the underlying group. However, multi-base DL is interactive; moreover, their reduction, while tighter than the initial result of Pointcheval and Stern (EUROCRYPT 1996), still incurs a security loss that is linear in the number of the adversary’s RO queries. We justify CDL by showing it holds in two carefully chosen idealized models, which idealize different aspects of our assumption. Our quantitative bounds in these models are essentially the same as for DL, giving strong evidence that CDL is as hard as DL in appropriate elliptic-curve groups.
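
For reference, the Schnorr scheme under analysis can be sketched as follows. The parameters are deliberately tiny toy values (real instantiations use large elliptic-curve groups, as the report notes); only the signing and verification algebra is being shown.

```python
import hashlib
import random

# Toy group: the order-11 subgroup of Z_23^*, generated by G = 4.
P, Q, G = 23, 11, 4

def H(R: int, msg: bytes) -> int:
    """Fiat-Shamir challenge c = H(R || msg), reduced into Z_Q."""
    data = R.to_bytes(1, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = random.randrange(1, Q)      # secret key
    return x, pow(G, x, P)          # (x, X = G^x)

def sign(x: int, msg: bytes):
    r = random.randrange(1, Q)      # per-signature nonce
    R = pow(G, r, P)                # commitment
    c = H(R, msg)
    s = (r + c * x) % Q             # response
    return R, s

def verify(X: int, msg: bytes, sig) -> bool:
    R, s = sig
    # G^s = G^(r + c*x) = R * X^c must hold for a valid signature.
    return pow(G, s, P) == (R * pow(X, H(R, msg), P)) % P

x, X = keygen()
assert verify(X, b"hello", sign(x, b"hello"))
```

The EUF-CMA analyses discussed above concern exactly this structure: a forger who outputs a new valid (R, s) pair is turned into a solver for the underlying discrete-logarithm-style assumption.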
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, Yuewu Wang
ePrint Report
XCB, a tweakable enciphering mode, is part of IEEE Std 1619.2 for shared storage media. We show that all versions of XCB are insecure, via three plaintext recovery attacks. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second requires four queries to recover all of the plaintext except one block. The last requires seven queries to recover the full plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. We also discuss the impact of these vulnerabilities on XCB-based applications, such as disk encryption, nonce-based encryption, deterministic authenticated encryption and robust authenticated encryption, highlighting the risks due to XCB's failure to achieve STPRP security. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure IV-based stream cipher.