International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

19 June 2023

Yibin Yang, Stanislav Peceny, David Heath, Vladimir Kolesnikov
ePrint Report
In MPC, we usually represent programs as circuits. This is a poor fit for programs that use complex control flow, as it is costly to compile control flow to circuits. This motivated prior work to emulate CPUs inside MPC. Emulated CPUs can run complex programs, but they introduce high overhead due to the need to evaluate not just the program, but also the machinery of the CPU, including fetching, decoding, and executing instructions, accessing RAM, etc.

Thus, both circuits and CPU emulation seem a poor fit for general MPC. The former cannot scale to arbitrary programs; the latter incurs high per-operation overhead.

We propose variable instruction set architectures (VISAs), an approach that inherits the best features of both circuits and CPU emulation. Unlike a CPU, a VISA machine repeatedly executes entire program fragments, not individual instructions. By considering larger building blocks, we avoid most of the machinery associated with CPU emulation: we directly handle each fragment as a circuit.

We instantiated a VISA machine via garbled circuits (GC), yielding constant-round 2PC for arbitrary assembly programs. We use improved branching (Stacked Garbling, Heath and Kolesnikov, Crypto 2020) and recent Garbled RAM (GRAM) (Heath et al., Eurocrypt 2022). Composing these securely and efficiently is intricate, and is one of our main contributions.

We implemented our approach and ran it on common programs, including Dijkstra's and Knuth-Morris-Pratt. Our 2PC VISA machine executes assembly instructions at $300$Hz to $4000$Hz, depending on the target program. We significantly outperform the state-of-the-art CPU-based approach (Wang et al., ESORICS 2016, whose tool we re-benchmarked on our setup). We run in constant rounds, use $6\times$ less bandwidth, and run more than $40\times$ faster on a low-latency network. With $50$ms (resp. $100$ms) latency, we are $898\times$ (resp. $1585\times$) faster on the same setup.

While our focus is MPC, the VISA model also benefits CPU-emulation-based Zero-Knowledge proof compilers, such as ZEE and EZEE (Heath et al., Oakland'21 and Yang et al., EuroS&P'22).
Zvika Brakerski, Stav Medina
ePrint Report
This work addresses the long quest for proving full (adaptive) security for attribute-based encryption (ABE). We show that in order to prove full security in a black-box manner, the scheme must be ``irregular'' in the sense that it is impossible to ``validate'' secret keys to ascertain consistent decryption of ciphertexts. This extends a result of Lewko and Waters (Eurocrypt 2014) that was only applicable to straight-line proofs (without rewinding). Our work, therefore, establishes that it is impossible to circumvent the irregularity property using creative proof techniques, so long as the adversary is used in a black-box manner.

As a consequence, our work provides an explanation as to why some lattice-based ABE schemes cannot be proven fully secure, even though no known adaptive attacks exist.
Huayi Qi, Minghui Xu, Xiuzhen Cheng, Weifeng Lyu
ePrint Report
Blockchain systems can become overwhelmed by a large number of transactions, leading to increased latency. As a consequence, latency-sensitive users must bid against each other and pay higher fees to ensure that their transactions are processed with priority. However, for most of a blockchain system's lifetime (78% of the time in Ethereum), there is plenty of unused computational power, with few users sending transactions. To address this issue and reduce latency for users, we propose the latency-first smart contract model in this paper, which optimistically accepts committed transactions. This allows users to submit a commitment during times of high demand and then complete the rest of the work at a lower priority. From the perspective of the blockchain, this temporarily “overclocks” the system. We have developed a programming tool for our model, and our experiments show that the proposed latency-first smart contract model can greatly reduce latency during periods of high demand.
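The commit-then-complete flow can be pictured with a plain hash commitment. The following is a minimal sketch, assuming SHA-256 commitments; the paper's actual on-chain encoding and tooling are not specified here:

```python
import hashlib
import os

def commit(tx_bytes: bytes):
    """Submit only a commitment during high demand; the full transaction
    is revealed and executed later at lower priority."""
    nonce = os.urandom(16)  # blinding factor, kept by the user
    digest = hashlib.sha256(nonce + tx_bytes).hexdigest()
    return digest, nonce    # the digest is what gets accepted on-chain now

def reveal_ok(digest: str, tx_bytes: bytes, nonce: bytes) -> bool:
    """Later, anyone can check the revealed transaction against the commitment."""
    return hashlib.sha256(nonce + tx_bytes).hexdigest() == digest
```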
Alain Couvreur, Rocco Mora, Jean-Pierre Tillich
ePrint Report
We introduce a novel algebraic approach for attacking the McEliece cryptosystem, which is currently in the $4$th round of the NIST competition. It consists in introducing a subspace of matrices representing quadratic forms. These are associated with quadratic relations for the component-wise product in the dual of the Goppa (or alternant) code of the cryptosystem. Depending on the characteristic of the field over which the code is defined, this space of matrices consists only of symmetric matrices (odd characteristic) or skew-symmetric matrices (characteristic $2$). It turns out that this matrix space contains unusually low-rank matrices (rank-$3$ matrices in odd characteristic, rank-$2$ matrices in characteristic $2$) which reveal the secret polynomial structure of the code. Finding such matrices can then be used to recover the secret key of the scheme.

We devise a dedicated approach in characteristic $2$, based on a Gröbner basis modeling of the condition that a skew-symmetric matrix has rank $2$. This allows us to analyze the complexity of solving the corresponding algebraic system with Gröbner basis techniques. The computation behaves differently when applied to the skew-symmetric matrix space associated with a random code rather than with a Goppa or an alternant code, which yields a distinguisher for the latter code family. We give a bound on its complexity which turns out to interpolate nicely between polynomial and exponential depending on the code parameters. A distinguisher for alternant/Goppa codes was already known [FGOPT11]; it has polynomial complexity but works only in a narrow parameter regime. Our new distinguisher is also polynomial in the parameter regime required by [FGOPT11] but, contrary to the previous one, is able to operate for virtually all code parameters relevant to cryptography. Moreover, we use this matrix space to obtain a polynomial-time attack on the McEliece cryptosystem, provided that the Goppa code is distinguishable by the method of [FGOPT11] and its degree is less than $q-1$, where $q$ is the alphabet size of the code.
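For intuition about why component-wise products distinguish structured codes, the classic square-code phenomenon behind [FGOPT11]-style distinguishers can be reproduced in a few lines. A toy sketch follows; it illustrates the dimension gap only, not this paper's new matrix-space modeling:

```python
import random

def rank_gf2(vecs):
    """Rank over GF(2); vectors are Python ints used as bit vectors."""
    basis = {}  # leading bit position -> basis vector
    for v in vecs:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def schur_square_dim(rows):
    """Dimension of the component-wise (Schur) product code C * C."""
    prods = [rows[i] & rows[j] for i in range(len(rows)) for j in range(i, len(rows))]
    return rank_gf2(prods)

# A random [n, k] binary code: its square typically has dimension min(n, k(k+1)/2),
# whereas structured (alternant/Goppa-related) codes fall measurably short of this.
n, k = 64, 8
rows = [random.getrandbits(n) for _ in range(k)]
print(schur_square_dim(rows))  # usually 36 = k(k+1)/2 here
```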
Susil Kumar Bishoi
ePrint Report
Word-oriented feedback shift registers (WFSRs) possess very attractive properties, as they take advantage of modern word-based processors and thus increase throughput. We provide a generalized form of the WFSR feedback function, along with some special cases. Then, a necessary and sufficient condition for a WFSR to be nonsingular is discussed. We study different word-based cascade systems and derive the period of the sequences produced by these cascade systems. We provide experimental results on the avalanche property of the states of cascade systems, and statistical results on the sequences they produce. Finally, we present a cryptanalytic attack on cascade systems and suggest a countermeasure.
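As a point of reference for the object under study, a minimal word-oriented FSR with purely linear (XOR) feedback might look as follows; the parameters are illustrative, and the paper's generalized feedback functions go well beyond this sketch:

```python
def wfsr_step(state, taps, width=32):
    """One clock of a word-oriented Fibonacci-style FSR with XOR feedback.
    `state` is a list of `width`-bit words; `taps` are stage indices."""
    mask = (1 << width) - 1
    feedback = 0
    for t in taps:
        feedback ^= state[t]   # linear feedback over GF(2)^width
    out = state.pop(0)         # output word leaves the register
    state.append(feedback & mask)
    return out

# Toy instance: a 4-word register tapped at stages 0 and 3.
state = [0xDEADBEEF, 0x12345678, 0x9ABCDEF0, 0x0F0F0F0F]
keystream = [wfsr_step(state, taps=(0, 3)) for _ in range(8)]
```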
Subhadeep Banik, Francesco Regazzoni
ePrint Report
The Möbius transform is a linear circuit used to compute the evaluations of a Boolean function over all points of its input domain. The operation is very useful for finding solutions of a system of polynomial equations over GF(2), for obvious reasons. However, the operation, although linear, needs an exponential number of logic operations (around $n\cdot 2^{n-1}$ bit XORs) for an $n$-variable Boolean function. As such, the only known hardware circuit that efficiently computes the Möbius transform requires silicon area exponential in $n$. For Boolean functions whose algebraic degree is bounded by some parameter $d$, recursive definitions of the Möbius transform exist that require only $O(n^{d+1})$ space in software. However, converting the mathematical definition of this space-efficient algorithm into a hardware architecture is a non-trivial task, primarily because the recursive calls notionally lead to a depth-first search in a transition graph that requires a context switch at each call, for which a straightforward mapping to hardware is difficult. In this paper we overcome these very challenges in an engineering sense. We propose a space-efficient sequential hardware circuit for the Möbius transform that requires only polynomial circuit area (i.e., $O(n^{d+1})$) provided the algebraic degree of the Boolean function is limited to $d$. We show how this circuit can be used as a component to efficiently solve polynomial equations of degree at most $d$ by fast exhaustive search. We propose three different circuit architectures for this, each of which uses the Möbius transform circuit as a core component. We show that, asymptotically, all solutions of a system of $m$ polynomials in $n$ unknowns of algebraic degree $d$ over GF(2) can be found using a circuit of silicon area proportional to $m \cdot n^{d+1}$ and physical time proportional to $2\cdot\log_2(n-d)\cdot 2^{n-d}$.
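In software, the full exponential-size transform described in the opening sentences is the standard butterfly over the truth table, using about $n\cdot 2^{n-1}$ XORs. A minimal sketch follows; it is not the paper's space-efficient hardware architecture for degree-bounded functions:

```python
def mobius_transform(truth_table):
    """ANF coefficients of an n-variable Boolean function from its truth table
    (a length-2^n list of 0/1), via the standard butterfly: ~ n * 2^(n-1) XORs."""
    t = list(truth_table)
    n = len(t).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for j in range(len(t)):
            if j & step:
                t[j] ^= t[j ^ step]
    return t

# f(x0, x1) = x0 AND x1 has ANF coefficient only on the monomial x0*x1;
# f(x0, x1) = x0 XOR x1 has coefficients on x0 and x1.
assert mobius_transform([0, 0, 0, 1]) == [0, 0, 0, 1]
assert mobius_transform([0, 1, 1, 0]) == [0, 1, 1, 0]
```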
Joel Gärtner
ePrint Report
A famous reduction by Regev shows that random instances of the Learning With Errors (LWE) problem are asymptotically at least as hard as a worst-case lattice problem. As such, by assuming that standard lattice problems are hard to solve, the asymptotic security of cryptosystems based on the LWE problem is guaranteed. However, it has not been clear to what extent, if any, this reduction supports the security of present concrete parametrizations.

In this work we therefore use Regev's reduction to parametrize a cryptosystem, providing a reference as to what parameters are required to actually claim security from this reduction. This requires us to account for the concrete performance of this reduction, allowing the first parametrization of a cryptosystem that is provably secure based only on a conservative hardness estimate for a standard lattice problem. Even though we attempt to optimize the reduction, our system still requires significantly larger parameters than typical LWE-based cryptosystems, highlighting the significant gap between parameters that are used in practice and those for which worst-case reductions actually are applicable.
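For readers who want the object being parametrized in front of them: an LWE sample is just a noisy inner product. A toy sketch, with illustrative parameters that are not claimed to be covered by the reduction:

```python
import random

def lwe_sample(s, q, sigma):
    """One LWE sample (a, <a, s> + e mod q) with a rounded-Gaussian error."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = round(random.gauss(0, sigma)) % q
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

# Toy parameters for illustration only; the paper's point is precisely that
# parameters actually supported by Regev's reduction are far larger than these.
q, n, sigma = 3329, 16, 3.2
s = [random.randrange(q) for _ in range(n)]
print(lwe_sample(s, q, sigma))
```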
Nils Fleischhacker, Kasper Green Larsen, Mark Simkin
ePrint Report
$\newcommand{\gen}{\mathsf{Gen}}\newcommand{\enc}{\mathsf{Enc}}\newcommand{\dec}{\mathsf{Dec}}$ A recent work of Fleischhacker, Larsen, and Simkin (Eurocrypt 2023) shows how to efficiently compress encrypted sparse vectors. Subsequently, Fleischhacker, Larsen, Obremski, and Simkin (Eprint 2023) improve upon their work and provide more efficient constructions solving the same problem. Being able to efficiently compress such vectors is very useful in a variety of applications, such as private information retrieval, searchable encryption, and oblivious message retrieval. Concretely, assume one is given a vector $(m_1, \dots, m_n)$ with at most $t$ distinct indices $i \in [n]$, such that $m_i \neq 0$ and assume $(\gen,\enc, \dec)$ is an additively homomorphic encryption scheme. The authors show that one can compress $(\enc(k, m_1), \dots, \enc(k, m_n))$, without knowing the secret key $k$, into a digest with size dependent on the upper bound on the sparsity $t$, but not on the total vector length $n$. Unfortunately, all existing constructions either only work for encryption schemes that have sufficiently large plaintext spaces or require additional encrypted auxiliary information about the plaintext vector.

In this work, we show how to generalize the results of Fleischhacker, Larsen, and Simkin to encryption schemes with arbitrarily small plaintext spaces. Our construction follows the same general ideas laid out in previous works but modifies them in a way that allows compressing the encrypted vectors correctly with high probability, independently of the size of the plaintext space.
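At a high level, constructions in this line compress by applying a public linear map under the additive homomorphism, so the digest length depends on the sparsity bound $t$ rather than on $n$. A schematic sketch under that reading, with hypothetical hom_add/hom_scale callbacks standing in for the scheme's ciphertext operations, and without this paper's machinery for small plaintext spaces:

```python
import random

def sketch_matrix(m, n, q, seed=0):
    """Public random linear map compressing length-n vectors into m entries."""
    rng = random.Random(seed)
    return [[rng.randrange(q) for _ in range(n)] for _ in range(m)]

def compress(enc_vec, A, hom_add, hom_scale):
    """Apply A to a vector of ciphertexts using only the additive homomorphism;
    no secret key is needed. hom_add/hom_scale are the scheme's operations on
    ciphertexts (hypothetical callback names)."""
    digest = []
    for row in A:
        acc = hom_scale(enc_vec[0], row[0])
        for c, coeff in zip(enc_vec[1:], row[1:]):
            acc = hom_add(acc, hom_scale(c, coeff))
        digest.append(acc)
    return digest  # length m = O(t), independent of n
```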
Lukáš Folwarczný, Mika Göös, Pavel Hubáček, Gilbert Maystre, Weiqiang Yuan
ePrint Report
Simon (1998) proved that it is impossible to construct collision-resistant hash functions from one-way functions using a black-box reduction. It is conjectured more generally that one-way functions do not imply, via a black-box reduction, the hardness of any total NP search problem (collision-resistant hash functions being just one such example). We make progress towards this conjecture by ruling out a large class of “single-query” reductions.

In particular, we improve over the prior work of Hubáček et al. (2020) in two ways: our result is established via a novel simpler combinatorial technique and applies to a broader class of semi black-box reductions.
Pierre-Antoine Tissot, Lilian Bossuet, Vincent Grosso
ePrint Report
Persistent fault analysis is a novel and efficient cryptanalysis method. Persistent fault attacks take advantage of a fault injected into non-volatile memory, which then remains on the device until it is rebooted. Contrary to classical physical fault injection, where differential analysis can be performed, persistent fault analysis requires new analyses and dedicated countermeasures. Persistent fault analysis requires a persistent fault injected into the S-box such that the bijective character of the permutation function is lost. In particular, the analysis exploits the resulting non-uniform distribution of S-box values: one of the possible S-box values never appears, while another appears twice. In this paper, we present the first dedicated protection against persistent fault analysis. This countermeasure, called BALoo (Bijection Assert with Loops), checks the bijectivity property of the S-box. We show that this countermeasure achieves 100% fault coverage for persistent fault analysis, with very small software overhead (memory) and reasonable hardware overhead (logic resources, memory and performance). To evaluate the overhead of BALoo, we provide experimental results obtained with software and hardware (FPGA) implementations of AES-128.
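The property BALoo asserts is easy to state in software. A minimal sketch of the bijectivity check follows; the paper's contribution is an efficient loop-based realization of this assertion, which is not reproduced here:

```python
def sbox_is_bijective(sbox, size=256):
    """A persistent fault makes one output value vanish and another repeat;
    asserting bijectivity therefore catches it with certainty."""
    seen = [False] * size
    for v in sbox:
        if v >= size or seen[v]:
            return False        # repeated (or out-of-range) value: fault detected
        seen[v] = True
    return len(sbox) == size    # every value appeared exactly once

faulty = list(range(256))
faulty[7] = 42                  # value 7 now never appears; 42 appears twice
assert not sbox_is_bijective(faulty)
```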
James Hsin-yu Chiang, Bernardo David, Mariana Gama, Christian Janos Lebeda
ePrint Report
In the classical setting of differential privacy, a privacy-preserving query is performed on a private database, after which the query result is released to the analyst; a differentially private query ensures that the presence of a single database entry is protected from the analyst's view. In this work, we contribute the first definitional framework for differential privacy in the trusted curator setting: clients submit private inputs to the trusted curator, which then computes individual outputs that are privately returned to each client. The adversary is more powerful than in the standard setting; it can corrupt up to $n-1$ clients and subsequently choose the inputs and learn the outputs of corrupted parties. In this setting, the adversary also obtains leakage from any honest output that is correlated with a corrupted output. Standard differentially private mechanisms protect client inputs but do not mitigate output correlation leaking arbitrary client information, which can forfeit client privacy completely. We initiate the investigation of a novel notion of correlated-output differential privacy to bound the leakage from output correlation in the trusted curator setting. We define the satisfaction of both standard and correlated-output differential privacy as round-differential-privacy and highlight the relevance of this novel privacy notion to all application domains in the trusted curator model.

We explore round-differential-privacy in traditional “dark pool” market venues, which promise privacy-preserving trade execution to mitigate front-running; privately submitted trade orders and trade execution are kept private by the trusted venue operator. We observe that dark pools satisfy neither classic nor correlated-output differential privacy: in markets with low trade activity, the adversary may trivially observe recurring, honest trading patterns and anticipate and front-run future trades. In response, we present the first round-differentially-private market mechanisms that formally mitigate information leakage from all trading activity of a user. This is achieved with fuzzy order matching, inspired by the standard randomized response mechanism. However, this also introduces a liquidity mismatch, as buy and sell orders are not guaranteed to execute pairwise, thereby weakening output correlation; the mismatch is compensated for by a round-differentially-private liquidity provider mechanism, which freezes a noisy amount of assets from the liquidity provider for the duration of a privacy epoch but leaves trader balances unaffected. We propose oblivious algorithms for realizing our proposed market mechanisms with secure multi-party computation (MPC) and implement them in the Scale-Mamba Framework using Shamir secret sharing based MPC. We demonstrate practical, round-differentially-private trading with throughput comparable to prior work implementing (traditional) dark pool algorithms in MPC; our experiments demonstrate practicality for both traditional finance and decentralized finance settings.
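The mechanism the fuzzy matching takes inspiration from is textbook randomized response. A minimal sketch of that base mechanism, not of the paper's market mechanism itself:

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Standard randomized response: report the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it. This satisfies eps-differential
    privacy for a single binary input."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else not bit
```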
Brett Hemenway Falk, Daniel Noble, Tal Rabin
ePrint Report
This paper presents the first protocols for Proactive Secret Sharing (PSS) that only require constant (in the number of parties, $n$) communication per party per epoch. By harnessing the power of expander graphs, we are able to obtain strong guarantees about the security of the system. We present the following PSS protocols:
– A PSS protocol that provides privacy (but no robustness) against an adversary controlling $O(n)$ parties per epoch.
– A PSS protocol that provides robustness (but no privacy) against an adversary controlling $O(n)$ parties per epoch.
– A PSS protocol that provides privacy against an adversary controlling $O(n^{a})$ parties per epoch and provides robustness against an adversary controlling $O(n^{1-a})$ parties per epoch, for any constant $0 \leq a \leq 1$. Instantiating this with $a = \frac{1}{2}$ gives a PSS protocol that is proactively secure (private and robust) against an adversary controlling $O(\sqrt{n})$ parties per epoch.
Additionally, we discuss how secure channels, whose existence is usually assumed by PSS protocols, are challenging to create in the mobile adversary setting, and we present a method to instantiate them from a weaker assumption.
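For context, the classical refresh step underlying proactive secret sharing adds a fresh sharing of zero each epoch. A toy Shamir-based sketch over an illustrative prime field; done naively like this, each party communicates $O(n)$ per epoch, which is precisely the cost the paper's expander-graph protocols reduce to $O(1)$:

```python
import random

P = 2**31 - 1  # toy prime field for illustration

def share(secret, t, n):
    """Shamir sharing: degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def refresh(shares, t):
    """Proactive refresh: add a fresh sharing of 0. The secret is unchanged,
    but old-epoch shares become useless to a mobile adversary."""
    zeros = share(0, t, len(shares))
    return [(x, (y + z) % P) for (x, y), (_, z) in zip(shares, zeros)]
```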
Shweta Agrawal, Melissa Rossi, Anshu Yadav, Shota Yamada
ePrint Report
Constructing advanced cryptographic primitives such as obfuscation or broadcast encryption from standard hardness assumptions in the post quantum regime is an important area of research, which has met with limited success despite significant effort. It is therefore extremely important to find new, simple to state assumptions in this regime which can be used to fill this gap. An important step was taken recently by Wee (Eurocrypt '22) who identified two new assumptions from lattices, namely evasive ${\sf LWE}$ and tensor ${\sf LWE}$, and used these to construct broadcast encryption and ciphertext policy attribute based encryption for ${\sf P}$ with optimal parameters. Independently, Tsabary formulated a similar assumption and used it to construct witness encryption (Crypto '22). Following Wee's work, Vaikuntanathan, Wee and Wichs independently provided a construction of witness encryption (Asiacrypt '22).

In this work, we advance this line of research by providing the first construction of multi-input attribute based encryption (${\sf MIABE}$) for the function class ${\sf NC_1}$ for any constant arity from evasive ${\sf LWE}$. Our construction can be extended to support the function class ${\sf P}$ by using evasive ${\sf LWE}$ and a suitable strengthening of tensor ${\sf LWE}$. In more detail, our construction supports $k$ encryptors, for any constant $k$, where each encryptor uses the master secret key ${\sf msk}$ to encode its input $(\mathbf{x}_i, m_i)$, the key generator computes a key ${\sf sk}_f$ for a function $f \in {\sf NC}_1$ and the decryptor can recover $(m_1,\ldots,m_k)$ if and only if $f(\mathbf{x}_1,\ldots,\mathbf{x}_k)=1$. The only known construction for ${\sf MIABE}$ for ${\sf NC}_1$ by Agrawal, Yadav and Yamada (Crypto '22) supports arity $2$ and relies on pairings in the generic group model (or with a non-standard knowledge assumption) in addition to ${\sf LWE}$. Furthermore, it is completely unclear how to go beyond arity $2$ using this approach due to the reliance on pairings.

Using a compiler from Agrawal, Yadav and Yamada (Crypto '22), our ${\sf MIABE}$ can be upgraded to multi-input predicate encryption for the same arity and function class. Thus, we obtain the first constructions for constant-arity predicate and attribute based encryption for a generalized class such as ${\sf NC}_1$ or ${\sf P}$ from simple assumptions that may be conjectured post-quantum secure. Along the way, we show that the tensor ${\sf LWE}$ assumption can be reduced to standard ${\sf LWE}$ in an important special case which was not known before. This adds confidence to the plausibility of the assumption and may be of wider interest.
Daniel J. Bernstein, Tung Chou
ePrint Report
Quantitative analyses of the costs of cryptographic attack algorithms play a central role in comparing cryptosystems, guiding the search for improved attacks, and deciding which cryptosystems to standardize. Unfortunately, these analyses often turn out to be wrong.

Formally verifying complete proofs of attack performance is a natural response but crashes into an insurmountable structural problem: there are large gaps between the best proven cost among known attack algorithms and the best conjectured cost among known attack algorithms. Ignoring conjectured speedups would be a security disaster.

This paper presents a case study demonstrating the feasibility and value of successfully formalizing what state-of-the-art attack analyses actually do. The input to this formalization is not a proof, and the output is not a formally verified proof; the formalization process nevertheless enforces clear definitions, systematically accounts for all algorithm steps, simplifies review, improves reproducibility, and reduces the risk of error.

Concretely, this paper's CryptAttackTester (CAT) software includes formal specifications of (1) a general-purpose model of computation and cost metric, (2) various attack algorithms, and (3) formulas predicting the cost and success probability of each algorithm. The software includes general-purpose simulators that systematically compare the predictions to the observed attack behavior in the same model. The paper gives various examples of errors in the literature that survived typical informal testing practices and that would have been immediately caught if CAT-enforced links had been in place.

The case study in CAT is information-set decoding (ISD), the top attack strategy against the McEliece cryptosystem. CAT formalizes analyses of many ISD algorithms, covering interactions between (1) high-level search strategies from Prange, Lee–Brickell, Leon, Stern, Dumer, May–Meurer–Thomae, and Becker–Joux–May–Meurer; (2) random walks from Omura, Canteaut–Chabaud, Canteaut–Sendrier, and Bernstein–Lange–Peters; and (3) speedups in core subroutines such as linear algebra and sorting.
Renaud Dubois
ePrint Report
Account Abstraction is a powerful feature that will transform today's Web3 onboarding UX. This note describes an EVM (Ethereum Virtual Machine) implementation of the well-known secp256r1 curve, optimized for the specificities of the EVM environment. Our optimizations rely on an EVM-dedicated XYZZ elliptic coordinate system, hacked precomputations, and assembly tricks to cut gas costs from more than 1M to 200K/62K (without or with precomputations, respectively).
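XYZZ coordinates represent a point as $(X, Y, ZZ, ZZZ)$ with $x = X/ZZ$, $y = Y/ZZZ$ and $ZZ^3 = ZZZ^2$, trading inversions for multiplications. Below is a Python sketch of the standard doubling formula in these coordinates, checked against affine doubling on a toy curve; it is not the note's Solidity/assembly implementation:

```python
def xyzz_double(X1, Y1, ZZ1, ZZZ1, a, p):
    """Point doubling in XYZZ coordinates (x = X/ZZ, y = Y/ZZZ)."""
    U = 2 * Y1 % p
    V = U * U % p
    W = U * V % p
    S = X1 * V % p
    M = (3 * X1 * X1 + a * ZZ1 * ZZ1) % p
    X3 = (M * M - 2 * S) % p
    Y3 = (M * (S - X3) - W * Y1) % p
    return X3, Y3, V * ZZ1 % p, W * ZZZ1 % p

# Sanity check on the toy curve y^2 = x^3 + 2x + 3 over GF(97); (3, 6) lies on it.
p, a = 97, 2
x, y = 3, 6
X3, Y3, ZZ3, ZZZ3 = xyzz_double(x, y, 1, 1, a, p)
x2 = X3 * pow(ZZ3, -1, p) % p
y2 = Y3 * pow(ZZZ3, -1, p) % p
lam = (3 * x * x + a) * pow(2 * y, -1, p) % p   # affine doubling for comparison
assert x2 == (lam * lam - 2 * x) % p
assert y2 == (lam * (x - x2) - y) % p
```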
Zeta Avarikioti, Stefan Schmid, Samarth Tiwari
ePrint Report
In this work, we revisit the severely limited throughput of cryptocurrencies and propose a novel rebalancing approach for Payment Channel Networks (PCNs). PCNs are a popular solution for increasing blockchain throughput; however, their benefit depends on the overall liquidity of the users. Rebalancing mechanisms are the state-of-the-art approach to maintaining high liquidity in PCNs. However, existing opt-in rebalancing mechanisms exclude users that may assist in rebalancing for small service fees, leading to suboptimal solutions and under-utilization of the PCNs' bounded liquidity.

We introduce the first rebalancing approach for PCNs that includes all users, following an “all for one and one for all” design philosophy that yields optimal throughput. The proposed approach introduces a double-auction rebalancing problem, which we term Musketeer, where users can participate as buyers (paying fees to rebalance) or sellers (charging fees to route transactions). We formally define the desired properties tailored to the unique characteristics of PCNs, including the novel property of cyclic budget balance, a stronger variant of strong budget balance. We present basic results derived from auction theory, including an impossibility result and multiple mechanisms that either achieve all desiderata under a relaxed model or sacrifice one of the properties. We also propose a novel mechanism that leverages time delays as an additional cost to users. This mechanism is provably truthful, cyclic budget balanced, individually rational, and economically efficient, but only with respect to liquidity.
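For readers unfamiliar with the underlying primitive, a generic uniform-price double auction clears as sketched below. This is a textbook illustration only, not one of the Musketeer mechanisms, which must additionally satisfy PCN-specific properties such as cyclic budget balance:

```python
def clear_double_auction(buy_bids, sell_asks):
    """Match the highest bids with the lowest asks while bid >= ask, then
    pick a single clearing price between the marginal bid and ask."""
    buys = sorted(buy_bids, reverse=True)   # highest willingness to pay first
    sells = sorted(sell_asks)               # lowest asking fee first
    k = 0
    while k < min(len(buys), len(sells)) and buys[k] >= sells[k]:
        k += 1
    if k == 0:
        return 0, None                      # no trade clears
    price = (buys[k - 1] + sells[k - 1]) / 2  # one common clearing-price choice
    return k, price

print(clear_double_auction([10, 8, 5, 2], [1, 3, 6, 9]))  # (2, 5.5): two trades
```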
Sam Widlund
ePrint Report
First, the WESP encryption algorithm is defined. Encryptions are created by a system of equations in which the equations are generated using the values of tables that act as the encryption key. Next, it is shown that if the encryption tables are not known, the equations in the system cannot be known, and it is demonstrated that the system of equations cannot be solved if the equations are not known; thus the encryption cannot be broken in closed form. Then, for all symbols used in the algorithm, we calculate the minimum number of trials needed in order to be able to verify the trials. Since the algorithm constantly updates key values, verification becomes impossible if the equations are not evaluated in order. The calculation shows that the minimum number of trials required, i.e., the time required to break the encryption, increases exponentially as the key size grows. Since there is no upper limit on the key size, there is no upper limit on the time required to break the encryption, either. It is also shown that the well-known mathematical "P vs NP" problem is solved by the presented proof: since there is at least one algorithm (WESP) that is in NP (verifiable in polynomial time) but not in P (solvable in polynomial time), P cannot be the same set as NP.
Mohammad Vaziri, Vesselin Velichkov
ePrint Report
Since the announcement of the NIST call for a new lightweight cryptographic standard, many schemes have been proposed in response. Xoodyak is one of these schemes and is among the finalists of the NIST competition, with a sponge structure very similar to the Keccak hash function, the winner of the SHA-3 NIST competition. In this paper, using the conditional cube attack technique, we fully recover the key of Xoodyak reduced to 6 and 7 rounds, with time complexity $2^{42.58}$ and $2^{76.003}$ respectively, in the nonce-reusing scenario. In our attack setting, we introduce the cube variables in the associated-data absorption phase, which offers a higher degree of freedom than the data absorption phase. We use an MILP tool to find enough cube variables to perform the conditional key-recovery attack. The 6-round attack is practical and has been implemented. To the best of our knowledge, this is the first proposed attack on 7-round Xoodyak.
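At the core of any cube attack is summing the cipher output over a cube of public variables, which evaluates the superpoly. A minimal sketch of that summation step; the conditional cube attack additionally imposes key-dependent conditions and finds suitable variables via MILP:

```python
from itertools import product

def cube_sum(f, cube_indices, n):
    """XOR of f over all 2^|cube| assignments to the cube variables
    (all other inputs fixed to 0): the superpoly evaluated at that point."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube_indices)):
        x = [0] * n
        for idx, bit in zip(cube_indices, bits):
            x[idx] = bit
        acc ^= f(x)
    return acc

# Toy function: f = x0*x1*x2 + x2 + x1. Summing over the cube {x0, x1}
# isolates the superpoly x2, which vanishes when x2 is fixed to 0.
f = lambda x: (x[0] & x[1] & x[2]) ^ x[2] ^ x[1]
assert cube_sum(f, (0, 1), n=3) == 0
```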
Vincent Meyers, Dennis R. E. Gnad, Nguyen Minh Dang, Falk Schellenberg, Amir Moradi, Mehdi B. Tahoori
ePrint Report
FPGAs have been used in the cloud for several years, as accelerators for various workloads such as machine learning, database processing, and security tasks. As for other cloud services, a highly desired feature is virtualization, in which multiple tenants can share a single FPGA to increase utilization and thereby efficiency. By solely using standard FPGA logic, an untrusted tenant can build on-chip logic sensors that allow remote power-analysis side-channel and covert-channel attacks on a victim tenant. However, such sensors are usually implemented with unusual circuit constructions, such as ring oscillators, delay lines, or unusual interconnect configurations, which might be easily detected by bitstream and/or netlist checking. In this paper, we show that such structural checking methods are not universal solutions, as the attacks can make use of entirely normal circuits, meaning they are ``benign-looking'' to any checking method. We indeed demonstrate that -- without any additional and suspicious implementation constraints -- standard circuits intended for legitimate tasks can be misused as a sensor to monitor instantaneous power consumption, and hence to conduct key-recovery attacks. This extremely stealthy attack is a threat that can originate from the application layers, i.e., through various high-level synthesis approaches.
Jesús García-Rodríguez, Stephan Krenn, Daniel Slamanig
ePrint Report
Anonymous or attribute-based credential (ABC) systems are a versatile and important cryptographic tool for achieving strong access-control guarantees while simultaneously respecting the privacy of individuals. A major problem in the practical adoption of ABCs is their transferability, i.e., such credentials can easily be duplicated, shared or lent. One way to counter this problem is to tie ABCs to biometric features of the credential holder and to require biometric verification on every use. While this is certainly not a viable solution for all ABC use-cases, there are relevant and timely use-cases, such as the vaccination credentials widely deployed during the COVID-19 pandemic. In such settings, ABCs that are tied to biometrics, which we call Biometric-Bound Attribute-Based Credentials (bb-ABC), make it possible to implement scalable and privacy-friendly systems to control physical access to (critical) infrastructure and facilities.

While there is some previous work on bb-ABC in the literature, the state of affairs is not satisfactory. Firstly, existing work treats the problem very abstractly when it comes to the actual type of biometrics, and thus does not provide concrete solutions whose practicality can be assessed in a real-world deployment. Secondly, there is no formal model that rigorously captures bb-ABC systems and their security requirements, making it hard to assess their security guarantees. With this work we overcome these limitations and provide a rigorous formalization of bb-ABC systems. Moreover, we introduce two generic constructions which offer different trade-offs between efficiency and trust assumptions, and provide benchmarks from a concrete instantiation of such a system using facial biometrics. The latter is a contact-less biometric feature with acceptable accuracy, and it seems particularly well suited to the above use-case.