International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

24 October 2022

Esra Günsay, Oğuz Yayla
ePrint Report
Secure and scalable data sharing is one of the main concerns of the Internet of Things (IoT) ecosystem. In this paper, we introduce a novel blockchain-based data-sharing construction designed to ensure full anonymity for both the users and the data. To share the encrypted IoT data stored on the cloud, users generate tokens, prove their ownership using zk-SNARKs, and anonymously target the destination address. To tackle the privacy concerns arising from uploading the data to the cloud, we use key-private re-encryption and share as little information as possible with the proxy. Furthermore, we provide a security proof of our construction.
Carlos Aguilar-Melchor, Jean-Christophe Deneuville, Arnaud Dion, James Howe, Romain Malmain, Vincent Migliore, Mamuri Nawan, Kashif Nawaz
ePrint Report
While hardware implementations allow the production of highly efficient and performance-oriented designs, exploiting features such as parallelization, their longer coding and implementation time often bottlenecks rapid prototyping. On the other hand, high-level synthesis (HLS) tools allow for faster experimentation of software code on a hardware platform while demonstrating a reasonable extrapolation of the expected hardware behavior. In this work, we present rapid prototyping of the well-known HQC algorithm using HLS, and show how, by modifying certain parameters, varying degrees of comparable results can be obtained. These results, in turn, could be used as a guide for HDL-RTL developers to enhance their designs and improve prototyping time in the future. Additionally, we demonstrate that it is possible to benefit from HQC's versatility by achieving a low hardware footprint whilst maintaining good performance, even on low-cost FPGA devices, which we demonstrate on the well-known Artix-7 xc7a100t-ftg256-1.
David W. Kravitz, Mollie Z. Halverson
ePrint Report
Traditional finance quantifies risk by collecting and vetting reputation information for an individual, such as credit scores or payment history. While decentralized finance (DeFi) is an exceptionally well-suited application of permissionless blockchains, it is severely constrained in its ability to reconcile identities and quantify associated transaction risk directly on-chain. Opening the ecosystem to a broad range of use cases requires consistent pseudonymity and quantifiable reputation. This paper defies the status quo, exploring methods of assessing risk on-chain by efficiently integrating off-chain identity- and attribute-verification with on-chain transaction activity. We achieve this while preserving individual privacy within a competitive and fair environment and retaining compatibility with existing platforms such as Ethereum. Even though blockchains are inherently public, our solution gives users control over the release of information that pertains to them. Consequently, our contribution focuses on customized methods that balance the degree of disclosure of provably-sourced user information against the likelihood of the user successfully gaining access to a desired resource, such as a loan under suitable terms. Our solution is consistent with the zero-trust model in that it imports explicit trust from recognized sources through relevant metrics that are subject to continuous update.
Sunoo Park, Nicholas Spooner
ePrint Report
The proof of work mechanism by which many blockchain-based protocols achieve consensus may be undermined by the use of quantum computing in mining—even when all cryptographic primitives are replaced with post-quantum secure alternatives. First, we offer an impossibility result: we prove that quantum (Grover) speedups in solving a large, natural class of proof-of-work puzzles cause an inevitable incentive incompatibility in mining, by distorting the reward structure of mining in proof-of-work-based protocols such as Bitcoin. We refer to such distortion as the Superlinearity Problem. Our impossibility result suggests that for robust post-quantum proof-of-work-based consensus, we may need to look beyond standard cryptographic models. We thus propose a proof-of-work design in a random-beacon model, which is tailored to bypass the earlier impossibility. We conclude with a discussion of open problems, and of the challenges of integrating our new proof-of-work scheme into decentralised consensus protocols under realistic conditions.
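
A rough way to see the Superlinearity Problem (our illustration, not the paper's formal model) is the following. Suppose a single puzzle attempt succeeds with probability $p$. A classical miner making $k$ queries succeeds with probability about $kp$, so its expected reward is linear in the work invested. A Grover-based miner making $k$ queries instead reaches success probability
$$\sin^2\big((2k+1)\theta\big) \approx (2k+1)^2\, p \quad \text{for small } \theta, \text{ where } \sin\theta = \sqrt{p},$$
which grows quadratically in $k$: a miner who works twice as long (or controls twice the hash rate) expects roughly four times the reward, breaking the proportionality between work and reward that proof-of-work incentives rely on.
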
Ismail Afia, Riham AlTawy
ePrint Report
In CT-RSA 2020, P3S was proposed as the first policy-based sanitizable signature scheme which allows the signer to designate future message sanitizers by defining an access policy relative to their attributes rather than their keys. However, since P3S utilizes a policy-based chameleon hash (PCH), it does not achieve unlinkability which is a required notion in privacy-preserving applications. Moreover, P3S requires running a procedure to share the secret trapdoor information for PCH with each new sanitizer before sanitizing a new message. We further observe that in order to maintain the transparency in P3S’s multiple-sanitizers setting, the signature size should grow linearly with the number of sanitizers. In this work, we propose an unlinkable policy-based sanitizable signature scheme (UP3S) where we employ a rerandomizable digital signature scheme and a traceable attribute-based signature scheme as its building blocks. Compared to P3S, UP3S achieves unlinkability, does not require new secrets to be shared with future sanitizers prior to sanitizing each message, and has a fixed signature size for a given sanitization policy. We define and formally prove the security notions of the generic scheme, propose an instantiation of UP3S utilizing the Pointcheval-Sanders rerandomizable signature scheme and DTABS traceable attribute-based signature scheme, and analyze its efficiency. Finally, we compare UP3S with P3S in terms of the features of the procedures, scalability, and security models.
Shravan Srinivasan, Julian Loss, Giulio Malavolta, Kartik Nayak, Charalampos Papamanthou, Sri AravindaKrishnan Thyagarajan
ePrint Report
Time-lock puzzles (TLPs) are a fascinating type of cryptographic problem that is easy to generate but takes a certain time to solve, even when arbitrary parallel speedup is allowed. TLPs have wide-ranging applications including fairness, round-efficient computation, and more. To reduce the effort needed to solve large numbers of TLPs, prior work has proposed batching techniques to reduce the cost of solving. However, these proposals require either (1) a trusted setup or (2) a puzzle size linear in the maximum batch size, which implies setting an a priori bound on the batch size. Either of these limitations restricts the utility of TLPs in decentralized and dynamic settings like permissionless blockchains.

In this work, we demonstrate the feasibility and usefulness of a TLP that overcomes all of the above limitations. Our construction is based on indistinguishability obfuscation and shows that there are no fundamental barriers to achieving such a TLP construction. As a main application of our TLP, we show how to improve the resilience of consensus protocols against network-level adversaries in the following two settings: (1) We show a generic compiler that boosts the resilience of a Byzantine broadcast protocol $\Pi$ as follows: if $\Pi$ is secure against $t$
Conor McMenamin, Vanesa Daza, Bruno Mazorra
ePrint Report
The always-available liquidity of automated market makers (AMMs) has been one of the most important catalysts in early cryptocurrency adoption. However, it has become increasingly evident that AMMs in their current form are not viable investment options for passive liquidity providers. This is because of the cost incurred by AMMs providing stale prices to arbitrageurs against external market prices, formalized as loss-versus-rebalancing (LVR) [Milionis et al., 2022].

In this paper, we present Diamond, an automated market making protocol that aligns the incentives of liquidity providers and block producers in the protocol-level retention of LVR. In Diamond, block producers effectively auction the right to capture any arbitrage that exists between the external market price of a Diamond pool, and the price of the pool itself. The proceeds of these auctions are shared by the Diamond pool and block producer in a way that is proven to remain incentive compatible for the block producer. Given the participation of competing arbitrageurs, LVR is effectively prevented in Diamond. We formally prove this result, and detail an implementation of Diamond. We also provide comparative simulations of Diamond to relevant benchmarks, further evidencing the LVR-protection capabilities of Diamond. With this new protection, passive liquidity provision on blockchains becomes rationally viable, beckoning a new age for decentralized finance.
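
As a simplified, concrete illustration of the arbitrage value at stake (the constant-product pool shape is our assumption; Diamond is not tied to it), consider a pool with reserves $x, y$ and invariant $xy = k$, quoting price $p = y/x$. If the external price moves to $P$, an arbitrageur trades the pool to reserves $x' = \sqrt{k/P}$, $y' = \sqrt{kP}$ and, ignoring fees, extracts
$$P(x - x') + (y - y') = Px + y - 2\sqrt{kP} \;\geq\; 0,$$
the difference between the pool's value had it not been traded against and its value after the arbitrage; summed over price moves, this is the quantity that LVR formalizes and that Diamond's auctions let the pool retain.
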
Aarushi Goel, Mathias Hall-Andersen, Gabriel Kaptchuk, Nicholas Spooner
ePrint Report
Building on recent disjunctive compilers for zero-knowledge (e.g. Goel et al. [EUROCRYPT'22]) we propose a new compiler that, when applied to sublinear-sized proofs, can result in sublinear-size disjunctive zero-knowledge with sublinear proving times (without meaningfully increasing proof sizes). Our key observation is that simulation in sublinear-size zero-knowledge proof systems can be much faster (both concretely and asymptotically) than the honest prover. We study applying our compiler to two classes of $O(\log n)$-round protocols: interactive oracle proofs, specifically Aurora [EUROCRYPT'19] and Fractal [EUROCRYPT'20], and folding arguments, specifically Compressed $\Sigma$-protocols [CRYPTO'20, CRYPTO'21] and Bulletproofs [S&P'18]. This study validates that the compiler can lead to significant savings. For example, applying our compiler to Fractal enables us to prove a disjunction of $\ell$ clauses, each of size $N$, with only $O((N+\ell) \cdot \text{polylog}(N))$ computation, versus $O(\ell N \cdot \text{polylog}(N))$ when proving the disjunction directly. We also find that our compiler offers a new lens through which to understand zero-knowledge proofs, evidenced by multiple examples of protocols with the same "standalone" complexity that each behave very differently when stacked.
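
To give a sense of the scale of these savings (the numbers are illustrative choices of ours, not taken from the paper): with $\ell = 64$ clauses, each of size $N = 2^{20}$, proving the disjunction directly costs on the order of $\ell N \cdot \text{polylog}(N) \approx 2^{26} \cdot \text{polylog}(N)$ operations, whereas the compiled prover costs on the order of $(N + \ell) \cdot \text{polylog}(N) \approx 2^{20} \cdot \text{polylog}(N)$, roughly a $64\times$ reduction for the same statement.
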
Anna M. Johnston, Puru Kulkarni
ePrint Report
DYCE is a simple-to-analyze algorithm which converts raw entropy into usable cryptographic entropy using the Da Yan method (commonly known as the Chinese Remainder Theorem). This paper describes and analyzes DYCE, giving its detailed algorithmic description.
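
The abstract does not spell out DYCE's internals, so the following is only a minimal sketch of the Chinese Remainder Theorem recombination it builds on; the moduli, the entropy source, and the function names are illustrative assumptions, not the paper's specification.

import secrets
from math import prod

def crt_combine(residues, moduli):
    # Reconstruct the unique x modulo prod(moduli) from the residues x mod m_i,
    # assuming the moduli are pairwise coprime (standard CRT recombination).
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

# Toy usage: split a raw entropy sample into residues and recombine it.
moduli = [2**13 - 1, 2**17 - 1, 2**19 - 1]   # pairwise coprime Mersenne primes
raw = secrets.randbelow(prod(moduli))        # stand-in for a raw entropy sample
residues = [raw % m for m in moduli]
assert crt_combine(residues, moduli) == raw
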
Tung Le, Pengzhi Huang, Attila A. Yavuz, Elaine Shi, Thang Hoang
ePrint Report
Storage-as-a-service (STaaS) permits the client to outsource her data to the cloud, thereby reducing data management and maintenance costs. However, STaaS also brings significant data integrity and soundness concerns since the storage provider might not keep the client data intact and retrievable all the time (e.g., cost saving via deletions). Proof of Retrievability (PoR) can validate the integrity and retrievability of remote data effectively. This technique can be useful for regular audits to monitor data compromises, as well as to comply with standard data regulations. In particular, cold storage applications (e.g., MS Azure, Amazon Glacier) require regular and frequent audits but with less frequent data modification. Yet, despite their merits, existing PoR techniques generally focus on other metrics (e.g., low storage, fast update, metadata privacy) but not on audit efficiency (e.g., low audit time, small proof size). Hence, there is a need to develop new PoR techniques that achieve efficient data audit while preserving update and retrieval performance.

In this paper, we propose Porla, a new PoR framework that permits efficient data audit, update, and retrieval functionalities simultaneously. Porla permits data audit in both private and public settings, each of which features asymptotically (and concretely) smaller audit-proof size and lower audit time than all the prior works while retaining the same asymptotic data update overhead. Porla achieves all these properties by composing erasure codes with verifiable computation techniques which, to our knowledge, is a new approach to PoR design. We address several challenges that arise in such a composition by creating a new homomorphic authenticated commitment scheme, which can be of independent interest. We fully implemented Porla and evaluated its performance on commodity cloud (i.e., Amazon EC2) under various settings. Experimental results demonstrated that Porla achieves two to four orders of magnitude smaller audit proof size with 4× – 1,800× lower audit time than all prior schemes in both private and public audit settings at the cost of only 2× – 3× slower update.
Martin Brisfors, Michail Moraitis, Elena Dubrova
ePrint Report
Clock randomization is one of the oldest countermeasures against side-channel attacks. Various implementations have been presented in the past, along with positive security evaluations. However, in this paper we show that it is possible to break countermeasures based on a randomized clock by sampling side-channel measurements at a frequency much higher than the encryption clock, synchronizing the traces with pre-processing, and targeting the beginning of the encryption. We demonstrate a deep learning-based side-channel attack on a protected FPGA implementation of AES which can recover a subkey from less than 500 power traces. In contrast to previous attacks on FPGA implementations of AES which targeted the last round, the presented attack uses the first round as the attack point. Any randomized clock countermeasure is significantly weakened by an attack on the first round because the effect of randomness accumulated over multiple encryption rounds is lost.
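
A minimal sketch of the pre-processing step described above, aligning oversampled traces to a reference segment with cross-correlation before attacking the first round; the trace shapes, the reference window, and the variable names are assumptions for illustration, not details taken from the paper.

import numpy as np

def align_traces(traces, reference, max_shift):
    # traces:    (num_traces, num_samples) power measurements sampled well above
    #            the (randomized) encryption clock frequency
    # reference: 1-D template covering the start of the first encryption round
    aligned = np.empty_like(traces)
    for i, trace in enumerate(traces):
        # Cross-correlate the head of each trace with the template and shift the
        # trace so that the best match lines up with the start of the window.
        corr = np.correlate(trace[: len(reference) + max_shift], reference, mode="valid")
        shift = int(np.argmax(corr))
        aligned[i] = np.roll(trace, -shift)
    return aligned
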
Doreen Riepel, Hoeteck Wee
ePrint Report
Attribute-based encryption (ABE) enables fine-grained access control on encrypted data and has a large number of practical applications. This paper presents FABEO: faster pairing-based ciphertext-policy and key-policy ABE schemes that support expressive policies, put no restriction on policy type or attributes, and are the first to achieve optimal, adaptive security with multiple challenge ciphertexts. We implement our schemes and demonstrate that they perform better than the state-of-the-art (Bethencourt et al., S&P 2007, Agrawal et al., CCS 2017 and Ambrona et al., CCS 2017) on all parameters of practical interest.
Nilanjan Datta, Avijit Dutta, Shibam Ghosh
ePrint Report
The INT-RUP security of an authenticated encryption (AE) scheme is a well-studied problem which deals with the integrity security of an AE scheme in the setting of the releasing unverified plaintext model. Popular INT-RUP secure constructions either require a large state (e.g. GCM-RUP, LOCUS, Oribatida) or employ a two-pass mode (e.g. MONDAE) that does not allow on-the-fly data processing. This motivates us to turn our attention to feedback-type AE constructions that allow small-state implementation as well as on-the-fly computation capability. In CT-RSA 2016, Chakraborti et al. demonstrated a generic INT-RUP attack on rate-1 block-cipher-based feedback-type AE schemes. Their results inspire us to study feedback-type AE constructions at a reduced rate. In this paper, we consider two such recent designs, SAEB and TinyJAMBU, and we analyze their integrity security in the setting of the releasing unverified plaintext model. We found an INT-RUP attack on SAEB with roughly $2^{32}$ decryption queries. However, the concrete analysis shows that if we reduce its rate to 32 bits, SAEB achieves the desired INT-RUP security bound without any additional overhead. Moreover, we have also analyzed TinyJAMBU, one of the finalists of the NIST LwC, and found it to be INT-RUP secure. To the best of our knowledge, this is the first work reporting the INT-RUP security analysis of block-cipher-based single-state, single-pass, on-the-fly, inverse-free authenticated ciphers.
Nils Fleischhacker, Kasper Green Larsen, Mark Simkin
ePrint Report
We study the task of obliviously compressing a vector comprised of $n$ ciphertexts of size $\xi$ bits each, where at most $t$ of the corresponding plaintexts are non-zero. This problem commonly features in applications involving encrypted outsourced storages, such as searchable encryption or oblivious message retrieval. We present two new algorithms with provable worst-case guarantees, solving this problem by using only homomorphic additions and multiplications by constants. Both of our new constructions improve upon the state of the art asymptotically and concretely.

Our first construction, based on sparse polynomials, is perfectly correct and the first to achieve an asymptotically optimal dependency on $t$ in the compression rate by compressing the input vector into $\mathcal{O}(t \xi)$ bits. Compression can be performed homomorphically by performing $\mathcal{O}(n \log n)$ homomorphic additions and multiplications by constants. The main drawback of this construction is a decoding complexity of $\Omega(\sqrt{n})$.

Our second construction is based on a novel variant of invertible Bloom lookup tables and is correct with probability $1-2^{-\kappa}$. It has a slightly worse compression rate compared to our first construction as it compresses the input vector into $\mathcal{O}(\xi\kappa t /\log t)$ bits, where $\kappa \geq \log t$. In exchange, both compression and decompression of this construction are highly efficient. The compression complexity is dominated by $\mathcal{O}(n \kappa)$ homomorphic additions and multiplications by constants. The decompression complexity is dominated by $\mathcal{O}(\xi\kappa t /\log t)$ decryption operations and equally many inversions of a pseudorandom permutation.
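
For intuition, the listing below sketches a plain (non-homomorphic) invertible Bloom lookup table with the standard peel-decoding loop; the paper's variant is homomorphic and differs in important details, so the hash choice, cell layout, and parameters here are illustrative assumptions only.

import hashlib

K = 3  # number of cells each item is hashed into

def cell_indices(index, num_cells):
    # Derive K distinct cell positions for an item from hashes of its index.
    positions = []
    salt = 0
    while len(positions) < K:
        digest = hashlib.sha256(f"{index}:{salt}".encode()).digest()
        pos = int.from_bytes(digest[:4], "big") % num_cells
        if pos not in positions:
            positions.append(pos)
        salt += 1
    return positions

def insert(table, index, value):
    # Each cell stores (count, XOR of indices, XOR of values).
    for pos in cell_indices(index, len(table)):
        count, idx_xor, val_xor = table[pos]
        table[pos] = (count + 1, idx_xor ^ index, val_xor ^ value)

def peel(table):
    # Repeatedly pop "pure" cells (count == 1) and remove the recovered item.
    recovered = {}
    progress = True
    while progress:
        progress = False
        for pos in range(len(table)):
            count, idx_xor, val_xor = table[pos]
            if count == 1 and idx_xor not in recovered:
                recovered[idx_xor] = val_xor
                for p in cell_indices(idx_xor, len(table)):
                    c, ix, vx = table[p]
                    table[p] = (c - 1, ix ^ idx_xor, vx ^ val_xor)
                progress = True
    return recovered

# Toy usage: t = 4 non-zero entries of a large sparse vector.
table = [(0, 0, 0)] * 64
for idx, val in [(7, 11), (100, 5), (2048, 9), (65535, 3)]:
    insert(table, idx, val)
print(peel(table))  # expected to recover all four (index, value) pairs
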

23 October 2022

Charles Bouillaguet
ePrint Report
This article gives improved algorithms to evaluate a multivariate Boolean polynomial over all the possible values of its input variables. Such a procedure is often used in cryptographic attacks against symmetric schemes. More precisely, we provide improved and simplified versions of the Fast Exhaustive Search algorithm presented at CHES'10 and of the space-efficient Moebius transform given by Dinur at EUROCRYPT'21. The new algorithms require $\mathcal{O}(d 2^n)$ operations with a degree-$d$ polynomial and operate in-place. We provide the full C code of a complete implementation in the form of a ``user-friendly'' library called BeanPolE, which we hope could be helpful to other cryptographers. This paper actually contains all the code, which is quite short.
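
As a point of reference for what these algorithms compute, here is the textbook in-place binary Moebius transform, mapping the $2^n$ ANF coefficients of a Boolean polynomial to its full truth table (it is an involution, so the same routine maps back); this is the unoptimized baseline written in Python for brevity, not the $\mathcal{O}(d 2^n)$ algorithms or the C library from the paper.

def moebius_transform(f, n):
    # f: list of 2**n bits, the ANF coefficients of a Boolean polynomial
    #    (f[m] is the coefficient of the monomial whose variables are the set bits of m).
    # On return, f[x] holds the value of the polynomial at input x (in-place butterfly).
    for i in range(n):
        step = 1 << i
        for start in range(0, 1 << n, step << 1):
            for j in range(start, start + step):
                f[j + step] ^= f[j]
    return f

# Toy usage: the polynomial x0*x1 + x2 in three variables.
anf = [0] * 8
anf[0b011] = 1   # monomial x0*x1
anf[0b100] = 1   # monomial x2
truth_table = moebius_transform(anf, 3)
assert truth_table == [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
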
David Balbás, Daniel Collins, Serge Vaudenay
ePrint Report
Many real-world group messaging systems delegate group administration to the application level, failing to provide formal guarantees related to group membership. Taking a cryptographic approach to group administration can prevent both implementation and protocol design pitfalls that result in a loss of confidentiality and consistency for group members.

In this work, we introduce a cryptographic framework for the design of group messaging protocols that offer strong security guarantees for group membership. To this end, we extend the continuous group key agreement (CGKA) paradigm used in the ongoing IETF MLS group messaging standardisation process and introduce the administrated CGKA (A-CGKA) primitive. Our primitive natively enables a subset of group members, the group admins, to control the addition and removal of parties and to update their own keying material in a secure manner. We embed A-CGKA with a novel correctness notion which provides guarantees for group evolution and consistency, and a security model that prevents even corrupted (non-admin) members from forging messages that add new users to a group. Moreover, we present two efficient and modular constructions of group administrators that are correct and secure with respect to our definitions. Finally, we propose, implement, and benchmark an efficient extension of MLS that integrates cryptographic administrators. Our constructions admit little overhead over running a CGKA and can be extended to support advanced admin functionalities.
Hauke Steffen, Georg Land, Lucie Kogelheide, Tim Güneysu
ePrint Report
The lattice-based CRYSTALS-Dilithium signature scheme has been selected for standardization by NIST. As part of the selection process, a large number of implementations for platforms like x86, ARM Cortex-M4, or - on the hardware side - Xilinx Artix-7 have been presented and discussed by experts. Moreover, the software implementations have been subject to side-channel analysis, with several attacks being published. Until now, however, an analysis of Dilithium hardware implementations and their peculiarities has not taken place. With this work, we aim to fill this gap, presenting an analysis of vulnerable operations and practically showing a successful profiled SPA and a CPA on a recent hardware implementation by Beckwith et al. Our SPA attack requires 700,000 profiling traces and targets the first NTT stage. After profiling, we can find pairs of coefficients with 1,101 traces. The CPA attack finds secret coefficients with as few as 66,000 traces. Our attack emphasizes that noise generation in hardware is not sufficient as a mitigation measure against SCA. As a consequence, we present countermeasures and show that they effectively prevent both attacks.
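
For readers unfamiliar with the second attack, the core of a CPA is a Pearson correlation between the measured traces and key-dependent leakage hypotheses; the sketch below shows only that generic core, and the Dilithium-specific target intermediate, leakage model, and variable names are assumptions rather than the attack from the paper.

import numpy as np

def cpa_rank_guesses(traces, hypotheses):
    # traces:     (num_traces, num_samples) measured power/EM samples
    # hypotheses: (num_traces, num_guesses) modelled leakage (e.g. Hamming weight of a
    #             key-dependent intermediate) for every candidate key guess
    t = traces - traces.mean(axis=0)
    h = hypotheses - hypotheses.mean(axis=0)
    # Pearson correlation of every guess against every sample point.
    corr = (h.T @ t) / (
        np.linalg.norm(h, axis=0)[:, None] * np.linalg.norm(t, axis=0)[None, :]
    )
    # Rank key guesses by their strongest (absolute) correlation over all samples.
    return np.argsort(np.abs(corr).max(axis=1))[::-1]
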
Yael Tauman Kalai, Alex Lombardi, Vinod Vaikuntanathan
ePrint Report
We construct succinct non-interactive arguments (SNARGs) for bounded-depth computations assuming that the decisional Diffie-Hellman (DDH) problem is sub-exponentially hard. This is the first construction of such SNARGs from a Diffie-Hellman assumption. Our SNARG is also unambiguous: for every (true) statement $x$, it is computationally hard to find any accepting proof for $x$ other than the proof produced by the prescribed prover strategy.

We obtain our result by showing how to instantiate the Fiat-Shamir heuristic, under DDH, for a variant of the Goldwasser-Kalai-Rothblum (GKR) interactive proof system. Our new technical contributions are (1) giving a $TC^0$ circuit family for finding roots of cubic polynomials over a special family of characteristic $2$ fields (Healy-Viola, STACS '06) and (2) constructing a variant of the GKR protocol whose invocations of the sumcheck protocol (Lund-Fortnow-Karloff-Nisan, STOC '90) only involve degree $3$ polynomials over said fields. Along the way, since we can instantiate Fiat-Shamir for certain variants of the sumcheck protocol, we also show the existence of (sub-exponentially) computationally hard problems in the complexity class $\mathsf{PPAD}$, assuming the sub-exponential hardness of DDH. Previous $\mathsf{PPAD}$ hardness results all required either bilinear maps or the learning with errors assumption.
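
Since the result hinges on keeping every sumcheck round at degree $3$, it may help to recall what the verifier checks in each round; the sketch below is the textbook consistency check over a toy prime field, not the characteristic-$2$ fields or the Fiat-Shamir-compressed form used in the paper.

P = 2**61 - 1  # toy prime modulus, chosen only for illustration

def eval_poly(coeffs, x):
    # Horner evaluation of a univariate polynomial given by its coefficients
    # (lowest degree first) modulo P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def sumcheck_round(claim, round_coeffs, challenge):
    # One verifier round of sumcheck (Lund-Fortnow-Karloff-Nisan): the prover sends a
    # round polynomial g_i of degree <= 3; reject unless g_i(0) + g_i(1) equals the
    # running claim, then reduce to the new claim g_i(r_i) at the random challenge.
    if (eval_poly(round_coeffs, 0) + eval_poly(round_coeffs, 1)) % P != claim % P:
        raise ValueError("sumcheck round rejected")
    return eval_poly(round_coeffs, challenge)
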
Pia Bauspieß, Tjerand Silde, Alexandre Tullot, Anamaria Costache, Christian Rathgeb, Jascha Kolberg, Christoph Busch
ePrint Report
Biometric data are uniquely suited for connecting individuals to their digital identities. Deriving cryptographic key exchange from successful biometric authentication therefore gives an additional layer of trust compared to password-authenticated key exchange. However, biometric data differ from passwords in two crucial points: firstly, they are sensitive personal data that need to be protected on a long-term basis. Secondly, efficient feature extraction and comparison components resulting in high intra-subject tolerance and low inter-subject distinguishability, documented with good biometric performance, need to be applied in order to prevent zero-effort impersonation attacks.

In this work, we present a protocol for secure and efficient biometrics-authenticated key exchange that fulfils the above requirements of biometric information protection compliant with ISO/IEC 24745. The protocol is based on established fuzzy vault schemes and validated with good recognition performance. We build our protocol from established primitives for password-authenticated key exchange using oblivious pseudo-random functions. Our protocol is independent of the biometric modality and can be instantiated both based on the security of discrete logarithms and lattices.

We provide an open-source implementation of our protocol instantiated with elliptic curves and a state-of-the-art unlinkable fingerprint fuzzy vault scheme that is practical with transaction times of less than one second from the image capture to the completed key exchange.
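
To make the role of the oblivious pseudo-random function concrete, here is a toy DH-based OPRF exchange of the kind such protocols build on; the group, the parameter sizes (deliberately insecure), and the message and function names are illustrative assumptions, not the paper's discrete-logarithm or lattice instantiations.

import hashlib
import secrets

# Toy safe-prime group: p = 2q + 1, working in the order-q subgroup of squares.
q = 509
p = 2 * q + 1  # 1019; far too small for real use

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)  # squaring lands in the subgroup of quadratic residues

def client_blind(x: bytes):
    r = secrets.randbelow(q - 1) + 1
    return r, pow(hash_to_group(x), r, p)         # send the blinded element

def server_evaluate(blinded: int, k: int) -> int:
    return pow(blinded, k, p)                     # the server never sees x

def client_finalize(x: bytes, evaluated: int, r: int) -> bytes:
    unblinded = pow(evaluated, pow(r, -1, q), p)  # equals H(x)^k, the blinding cancels
    return hashlib.sha256(x + unblinded.to_bytes(2, "big")).digest()

# Toy run: the client learns PRF_k(x) without revealing x or learning k.
k = secrets.randbelow(q - 1) + 1
x = b"protected-biometric-template"
r, blinded = client_blind(x)
output = client_finalize(x, server_evaluate(blinded, k), r)
assert output == hashlib.sha256(x + pow(hash_to_group(x), k, p).to_bytes(2, "big")).digest()
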
Thibauld Feneuil, Matthieu Rivain
ePrint Report
The MPC-in-the-Head paradigm is a popular framework to build zero-knowledge proof systems using techniques from secure multi-party computation (MPC). While this paradigm is not restricted to a particular secret sharing scheme, all the efficient instantiations proposed so far rely on additive secret sharing.

In this work, we show how applying a threshold linear secret sharing scheme (threshold LSSS) can be beneficial to the MPC-in-the-Head paradigm. For a general MPC protocol model capturing most of the existing MPCitH schemes, we show that our approach improves the soundness of the underlying proof system from $1/N$ down to $1/\binom{N}{\ell}$, where $N$ is the number of parties and $\ell$ is the threshold of the sharing scheme. While very general, our technique is limited to a number of parties $N \leq |\mathbb{F}|$, where $\mathbb{F}$ is the field underlying the statement, because of the MDS conjecture.

Applying our approach with a low-threshold LSSS also boosts the performance of the proof system by making the MPC emulation cost independent of $N$ for both the prover and the verifier. The gain is particularly significant for the verification time which becomes logarithmic in $N$ (while the prover still has to generate and commit the $N$ input shares). We further generalize and improve our framework: we show how homomorphic commitments can get rid of the linear complexity of the prover, we generalize our result to any quasi-threshold LSSS, and we describe an efficient batching technique relying on Shamir's secret sharing. We finally apply our techniques to specific use-cases. We first propose a variant of the recent SDitH signature scheme achieving new interesting trade-offs. In particular, for a signature size of 10 KB, we obtain a verification time lower than $0.5$ ms, which is competitive with SPHINCS+, while achieving much faster signing. We further apply our batching technique to two different contexts: batched SDitH proofs and batched proofs for general arithmetic circuits based on the Limbo proof system. In both cases, we obtain an amortized proof size lower than $1/10$ of the baseline scheme when batching a few dozen statements, while the amortized performances are also significantly improved.
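
To illustrate the soundness gain quantitatively (the parameter values are our own examples, not the paper's proposed parameters), the snippet below compares the number of parallel repetitions needed to push the soundness error below $2^{-128}$ with additive sharing, giving error $1/N$ per repetition, versus a threshold LSSS, giving error $1/\binom{N}{\ell}$ per repetition.

import math

def repetitions(per_round_error, target_bits=128):
    # Smallest tau with per_round_error**tau <= 2**(-target_bits).
    return math.ceil(target_bits / -math.log2(per_round_error))

N, ell = 256, 3
additive = repetitions(1 / N)                    # error 1/N per repetition
threshold = repetitions(1 / math.comb(N, ell))   # error 1/C(N, ell) per repetition
print(additive, threshold)  # 16 repetitions vs. 6 with these example parameters
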