International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

05 June 2017

Joost Renes, Benjamin Smith
ePrint Report
qDSA is a high-speed, high-security signature scheme that facilitates implementations with a very small memory footprint, a crucial requirement for embedded systems and IoT devices, and that uses the same public keys as modern Diffie-Hellman schemes based on Montgomery curves (such as Curve25519) or Kummer surfaces. qDSA resembles an adaptation of EdDSA to the world of Kummer varieties, which are quotients of groups by $\pm 1$. Interestingly, it does not require any full group operations or point recovery: all computations, including signature verification, occur on the quotient where there is no group law. We include details on four implementations of qDSA, using Montgomery and fast Kummer arithmetic on the 8-bit AVR ATmega and 32-bit Cortex M0 platforms. We find that qDSA significantly outperforms state-of-the-art signature implementations in terms of stack usage and code size. We also include a compression algorithm for points on fast Kummer surfaces, reducing them to the same size as compressed elliptic curve points for the same security level.
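
As background for readers unfamiliar with x-only ("Kummer line") arithmetic, the following is a minimal Python sketch of a Montgomery ladder, using Curve25519 parameters purely as an example; it is not the paper's qDSA implementation. All operations use only x-coordinates, i.e. they take place on the quotient by $\pm 1$, with no group law.

```python
# Background sketch, not the paper's qDSA code: x-only ("Kummer line")
# Montgomery ladder, using Curve25519 parameters as an example.
p   = 2**255 - 19        # field prime
A24 = 121666             # (A + 2) / 4 for the curve coefficient A = 486662

def x_dbl(X, Z):
    # Pseudo-doubling: x(P) -> x([2]P), projective (X : Z) coordinates.
    a, b = (X + Z) % p, (X - Z) % p
    aa, bb = a * a % p, b * b % p
    e = (aa - bb) % p
    return aa * bb % p, e * (bb + A24 * e) % p

def x_add(XP, ZP, XQ, ZQ, xD):
    # Differential addition: x(P), x(Q) and affine x(P-Q) -> x(P+Q).
    u = (XP - ZP) * (XQ + ZQ) % p
    v = (XP + ZP) * (XQ - ZQ) % p
    return (u + v) ** 2 % p, xD * (u - v) ** 2 % p

def x_mul(k, x1):
    # Montgomery ladder: x([k]P) from x(P) = x1 (returns 0 for the identity).
    X0, Z0, X1, Z1 = 1, 0, x1, 1
    for bit in bin(k)[2:]:
        if bit == '0':
            X1, Z1 = x_add(X0, Z0, X1, Z1, x1)
            X0, Z0 = x_dbl(X0, Z0)
        else:
            X0, Z0 = x_add(X0, Z0, X1, Z1, x1)
            X1, Z1 = x_dbl(X1, Z1)
    return X0 * pow(Z0, p - 2, p) % p

print(x_mul(7, 9))   # x-coordinate of [7]B for the Curve25519 base point x(B) = 9
```
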
Jacqueline Brendel, Marc Fischlin, Felix Günther, Christian Janson
ePrint Report
The pseudorandom-function oracle-Diffie–Hellman (PRF-ODH) assumption has been introduced recently to analyze a variety of DH-based key exchange protocols, including TLS 1.2 and the TLS 1.3 candidates, as well as the extended access control (EAC) protocol. Remarkably, the assumption comes in different flavors in these settings, and none of them has been scrutinized comprehensively yet. In this paper, we therefore present a systematic study of the different PRF-ODH variants in the literature. In particular, we analyze their strengths relative to each other, establishing that the variants form a hierarchy. We further investigate the boundaries between instantiating the assumptions in the standard model and the random oracle model. While we show that even the strongest variant is achievable in the random oracle model under the strong Diffie–Hellman assumption, we provide a negative result showing that it is implausible to instantiate even the weaker variants in the standard model via algebraic black-box reductions to common cryptographic problems.
Claude Carlet
ePrint Report
For all positive integers $n$ and $m$ and every even positive integer $\delta$, we derive inequalities satisfied by the Walsh transforms of all vectorial $(n,m)$-functions and prove that the case of equality characterizes differential $\delta$-uniformity. This provides a generalization to all differentially $\delta$-uniform functions of the characterization of APN $(n,n)$-functions due to Chabaud and Vaudenay, by means of the fourth moment of the Walsh transform. Such a generalization has been missing since the introduction of the notion of differential uniformity by Nyberg in 1994 and since Chabaud and Vaudenay's result the same year.

For each even $\delta\geq 2$, we find several such characterizations. In particular, when $\delta=2$ and $\delta=4$, we have that, for any $(n,n)$-function (resp. any $(n,n-1)$-function), the arithmetic mean of $W_F^2(u_1,v_1)W_F^2(u_2,v_2)W_F^2(u_1+u_2,v_1+v_2)$, when $u_1,u_2$ range independently over ${\mathbb F}_2^n$ and $v_1,v_2$ are nonzero and distinct and range independently over ${\mathbb F}_2^m$, is at least $2^{3n}$, and that $F$ is APN (resp. is differentially 4-uniform) if and only if this arithmetic mean equals $2^{3n}$ (which is the value we would get with a bent function if such a function could exist). These inequalities give more knowledge on the Walsh spectrum of $(n,m)$-functions. We deduce in particular a property of the Walsh support of highly nonlinear functions. We also consider the completely open question of whether the nonlinearity of APN functions is necessarily non-weak (as is the case for known APN functions); we prove new lower bounds which cover all power APN functions (and hence a large part of known APN functions) and which explain why their nonlinearities are rather good, and we discuss the question of the nonlinearity of APN quadratic functions (since almost all other known APN functions are quadratic).
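
For reference, the standard objects the abstract refers to (textbook definitions, not the paper's new inequalities): the Walsh transform of a vectorial $(n,m)$-function $F$ and differential $\delta$-uniformity are

$$W_F(u,v)=\sum_{x\in\mathbb{F}_2^n}(-1)^{v\cdot F(x)+u\cdot x},\qquad u\in\mathbb{F}_2^n,\ v\in\mathbb{F}_2^m,$$

$$F\ \text{is differentially}\ \delta\text{-uniform}\iff \max_{a\neq 0,\ b}\ \#\{x\in\mathbb{F}_2^n : F(x)+F(x+a)=b\}\le\delta,$$

with APN corresponding to $\delta=2$.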
Zahra Jafargholi, Chethan Kamath, Karen Klein, Ilan Komargodski, Krzysztof Pietrzak, Daniel Wichs
ePrint Report
For many cryptographic primitives, it is relatively easy to achieve selective security (where the adversary commits a priori to some of the choices to be made later in the attack) but appears difficult to achieve the more natural notion of adaptive security (where the adversary can make all choices on the go as the attack progresses). A series of recent works shows how to cleverly achieve adaptive security in several such scenarios, including generalized selective decryption (Panjwani, TCC '07 and Fuchsbauer et al., CRYPTO '15), constrained PRFs (Fuchsbauer et al., ASIACRYPT '14), and Yao garbled circuits (Jafargholi and Wichs, TCC '16b). Although the above works expressed vague intuition that they share a common technique, the connection was never made precise. In this work we present a new framework that connects all of these works and allows us to present them in a unified and simplified fashion. Moreover, we use the framework to derive a new result for adaptively secure secret sharing over access structures defined via monotone circuits. We envision that further applications will follow in the future.

Underlying our framework is the following simple idea. It is well known that selective security, where the adversary commits to $n$ bits of information about his future choices, automatically implies adaptive security at the cost of amplifying the adversary's advantage by a factor of up to $2^n$. However, in some cases the proof of selective security proceeds via a sequence of hybrids, where each pair of adjacent hybrids locally only requires some smaller partial information consisting of $m \ll n$ bits. The partial information needed might be completely different between different pairs of hybrids, and if we look across all the hybrids we might rely on the entire $n$-bit commitment. Nevertheless, the above is sufficient to prove adaptive security, at the cost of amplifying the adversary's advantage by a factor of only $2^m \ll 2^n$.

In all of our examples using the above framework, the different hybrids are captured by some sort of a graph pebbling game, and the amount of information that the adversary needs to commit to in each pair of hybrids is bounded by the maximum number of pebbles in play at any point in time. Therefore, coming up with better strategies for proving adaptive security translates to various pebbling strategies for different types of graphs.
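
Schematically (our shorthand, simplifying the framework's actual statement): if selective security is proven via $L$ hybrids and each adjacent pair of hybrids only needs $m$ bits of committed information, applying the guessing argument hybrid-by-hybrid gives roughly

$$\mathrm{Adv}^{\mathrm{adaptive}} \;\lesssim\; L\cdot 2^{m}\cdot \varepsilon \qquad\text{instead of the naive}\qquad \mathrm{Adv}^{\mathrm{adaptive}} \;\le\; 2^{n}\cdot \varepsilon,$$

where $\varepsilon$ bounds the distinguishing advantage between adjacent hybrids.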
Philippe Gaborit, Duong Hieu Phan, Adrien Hauteville, Jean-Pierre Tillich
ePrint Report
Code-based cryptography has a long history, almost as long as the history of public-key encryption (PKE). While almost all primitives, such as PKE, signatures, and group signatures, can be constructed from codes, it has been a long-standing open problem to construct an identity-based encryption scheme from codes. We solve this problem by relying on codes with the rank metric.

The concept of identity-based encryption (IBE), introduced by Shamir in 1984, allows the use of a user's identifier information, such as an email address, as a public key for encryption. Two problems make the design of IBE extremely hard: the requirement that the public key can be an arbitrary string and the requirement that decryption keys can be extracted from the public keys. In fact, it took nearly twenty years for an efficient method of implementing an IBE to be found. The known methods of designing IBE are based on different tools: elliptic curve pairings, by Sakai, Ohgishi and Kasahara and by Boneh and Franklin in 2000 and 2001 respectively; the quadratic residue problem, by Cocks in 2001; and finally the Learning-with-Errors problem, by Gentry, Peikert, and Vaikuntanathan in 2008. Among all candidates for post-quantum cryptography, only lattice-based IBE schemes have existed thus far.

In this paper, we propose a new method, based on the hardness of learning problems with the rank metric, to design the first code-based IBE scheme. In order to overcome the two problems above, we first construct a rank-based PKE, called RankPKE, whose public key space is dense and can thus be obtained from a hash of any identity. We then extract a decryption key from any public key by constructing a trapdoor function which relies on RankSign, a signature scheme from PQCrypto 2014. In order to prove the security of our schemes, we introduce a new problem for the rank metric: the Rank Support Learning problem (RSL). A significant technical contribution of the paper is devoted to studying the hardness of the RSL problem in detail.
Patrick Holzer, Thomas Wunderer
ePrint Report
Several recent cryptographic constructions - including a public key encryption scheme, a fully homomorphic encryption scheme, and a candidate multilinear map construction - rely on the hardness of the short generator principal ideal problem (SG-PIP): given a Z-basis of some principal (fractional) ideal in an algebraic number field that is guaranteed to have an exceptionally short generator with respect to the logarithmic embedding, find a shortest generator of the principal ideal. The folklore approach to solve this problem is to split it into two subproblems. First, recover some arbitrary generator of the ideal, which is known as the principal ideal problem (PIP). Second, solve a bounded distance decoding (BDD) problem in the log-unit lattice to transform this arbitrary generator into a shortest generator of the ideal. The first problem, i.e., solving the PIP, is known to be solvable in polynomial time on quantum computers for arbitrary number fields under the generalized Riemann hypothesis due to Biasse and Song. Cramer, Ducas, Peikert, and Regev showed, based on the work of Campbell, Groves, and Shepherd, that the second problem can be solved in polynomial time on classical computers for cyclotomic number fields of prime-power conductor.

In this work, we extend the work of Cramer, Ducas, Peikert, and Regev to cyclotomic number fields $K=\mathbb{Q}(\xi_m)$ of conductor $m=p^\alpha q^\beta$, where $p,q$ are distinct odd primes. In more detail, we show that the second problem can be solved in classical polynomial time (with quantum polynomial time precomputation) under some sufficient conditions, if $(p,q)$ is an $(\alpha, \beta)$-generator prime pair, a new notion introduced in this work. We further provide experimental evidence suggesting that roughly 35 percent of all prime pairs are $(\alpha, \beta)$-generator prime pairs for all $\alpha$ and $\beta$. Combined with the work of Biasse and Song, our results show that, under sufficient conditions, the SG-PIP can be solved in quantum polynomial time in cyclotomic number fields of composite conductor of the form $p^\alpha q^\beta$.
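
For orientation, the standard objects behind the BDD step described above (textbook definitions, not the paper's new notion): for a number field $K$ with embeddings $\sigma_1,\dots,\sigma_n$ into $\mathbb{C}$, the logarithmic embedding is

$$\mathrm{Log}(x)=\bigl(\log|\sigma_1(x)|,\dots,\log|\sigma_n(x)|\bigr),$$

and two generators $g, g'$ of the same principal ideal differ by a unit $u$, so $\mathrm{Log}(g')=\mathrm{Log}(g)+\mathrm{Log}(u)$. Recovering a shortest generator from an arbitrary one is therefore a bounded distance decoding problem in the log-unit lattice $\Lambda=\mathrm{Log}(\mathcal{O}_K^\times)$.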
Dr. M. AMUTHA PRABAKAR, Dr. B. INDRANI, M. KARTHIGAI VENI
ePrint Report
Nowadays, IT-enabled services gain ever more attention because resources can easily be accessed from remote places. IT-enabled services extend to all kinds of business and personal applications, such as e-commerce, e-business, e-transactions, and e-healthcare. In India, e-healthcare systems have gained particular attention in recent years due to their effectiveness. Information assurance is an important part of any e-healthcare system, because sensitive health records of individuals must be maintained. Any information system is subject to two different issues: 1) information handling and 2) information assurance. An e-healthcare system has to provide the necessary security guarantees without risking information loss, and information access, i.e., granting access rights only to legitimate users, is one of the foremost issues. In this paper, we propose a two-factor authentication scheme using elliptic curve cryptography with smart cards. The proposed authentication is based on two factors, a smart card and a password, and provides high security at minimal computational cost. The proposed scheme generates a new session key for every session, using a fresh timestamp and nonce, and requires less computation than related smart-card-based authentication schemes.
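
The abstract does not spell out the key derivation, so the following is only a generic Python illustration, not the authors' protocol (which additionally involves the smart card and password): it derives a per-session key from an elliptic-curve Diffie-Hellman shared secret together with a fresh timestamp and nonce, using X25519 simply as a convenient ECC primitive.

```python
# Generic illustration only, not the authors' scheme: a per-session key bound
# to a fresh timestamp and nonce, derived from an ECDH shared secret.
import os, time, hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

server_priv = X25519PrivateKey.generate()
client_priv = X25519PrivateKey.generate()

shared = client_priv.exchange(server_priv.public_key())   # ECDH shared secret

timestamp = int(time.time()).to_bytes(8, "big")            # freshness
nonce     = os.urandom(16)                                  # per-session value

# A fresh session key for every session: hash of secret, timestamp and nonce.
session_key = hashlib.sha256(shared + timestamp + nonce).digest()
print(session_key.hex())
```
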

02 June 2017

Alex Biryukov, Leo Perrin
ePrint Report
Lightweight cryptography has been one of the "hot topics" in symmetric cryptography in recent years. A huge number of lightweight algorithms have been published, standardized and/or used in commercial products.

In this paper, we discuss the different implementation constraints that a "lightweight" algorithm is usually designed to satisfy. We also present an extensive survey of all lightweight symmetric primitives we are aware of. It covers designs from the academic community and from government agencies, as well as proprietary algorithms which have been reverse-engineered or leaked. Relevant national (NIST...) and international (ISO/IEC...) standards are listed. We then discuss some trends we identified in the design of lightweight algorithms, namely the designers' preference for ARX-based and bitsliced-S-Box-based designs and for simple key schedules.
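
To make the ARX trend concrete, here is a minimal Python sketch of an add-rotate-XOR round on 32-bit words, written in the style of Speck-like ciphers; the rotation constants and key handling are illustrative placeholders, not a specific standardized design.

```python
# Illustrative only: the shape of an ARX (add-rotate-XOR) round on 32-bit words.
MASK = 0xFFFFFFFF

def rol(x, r): return ((x << r) | (x >> (32 - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (32 - r))) & MASK

def arx_round(x, y, k):
    x = (ror(x, 8) + y) & MASK   # modular Addition (after a Rotation)
    x ^= k                       # Xor with the round key
    y = rol(y, 3) ^ x            # Rotate the other word and Xor
    return x, y

print(arx_round(0x01234567, 0x89ABCDEF, 0xDEADBEEF))
```
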

Finally, we argue that lightweight cryptography is too large a field and that it should be split into two related but distinct areas: ultra-lightweight and IoT cryptography. The former deals only with the smallest of devices, for which a lower security level may be justified by the very harsh design constraints. The latter corresponds to low-power embedded processors, for which the AES and modern hash functions are costly but which have to provide a high level of security due to their greater connectivity.
Alexandra Boldyreva, Christopher Patton, Thomas Shrimpton
ePrint Report
Hedged PKE schemes are designed to provide useful security when the permessage randomness fails to be uniform, say, due to faulty implementations or adversarial actions. A simple and elegant theoretical approach to building such schemes works like this: Synthesize fresh random bits by hashing all of the encryption inputs, and use the resulting hash output as randomness for an underlying PKE scheme. The idea actually goes back to the Fujisaki-Okamoto transform for turning CPA-secure encryption into CCA-secure encryption, and is also used to build deterministic PKE schemes.
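
A minimal sketch of this hash-then-encrypt idea in Python; `pke_encrypt` is a hypothetical stand-in for any PKE encryption routine that accepts its random coins explicitly (exactly the interface the paper notes is missing from common high-level library APIs).

```python
# Minimal sketch of the hash-then-encrypt construction described above.
import hashlib

def hedged_encrypt(pke_encrypt, pk: bytes, msg: bytes, rand: bytes):
    # Synthesize the coins as a hash of *all* encryption inputs; even if `rand`
    # is biased or repeated, the coins are as unpredictable as the inputs allow.
    coins = hashlib.sha256(b"hedge" + pk + msg + rand).digest()
    return pke_encrypt(pk, msg, coins)

# Call shape only (the lambda is NOT a real encryption scheme):
ct = hedged_encrypt(lambda pk, m, coins: coins + m, b"pk-bytes", b"hello", b"weak-rand")
```
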

In practice, implementing this simple construction is surprisingly difficult, as the high- and mid-level APIs presented by the most commonly used crypto libraries (e.g. OpenSSL and forks thereof) do not permit one to specify the per-encryption randomness. Thus application developers are forced to piece together low-level functionalities and attend to any associated, security-critical algorithmic choices. Other approaches to hedged PKE present similar problems in practice.

We reconsider the matter of building hedged PKE schemes, and the security notions they aim to achieve. We lift the current best-possible security notion for hedged PKE (IND-CDA) from the CPA setting to the CCA setting, and then show how to achieve it using primitives that are readily available from high-level APIs. We also propose a new security notion, MM-CCA, which generalizes traditional IND-CCA to admit imperfect randomness. Like IND-CCA, and unlike IND-CDA, our notion gives the adversary the public key. We show that MM-CCA is achieved by RSA-OAEP in the random-oracle model; this is significant in practice because RSA-OAEP is directly available from high-level APIs across all libraries we surveyed. We sort out relationships among the various notions, and also develop new results for existing hedged PKE constructions.
Fang Song, Aaram Yun
ePrint Report
We prove the security of NMAC, HMAC, AMAC, and the cascade construction with fixed input-length as quantum-secure pseudo-random functions (PRFs). Namely, they are indistinguishable from a random oracle against any polynomial-time quantum adversary that can make quantum superposition queries. In contrast, many blockcipher-based PRFs including CBC-MAC were recently broken by quantum superposition attacks.

Classical proof strategies for these constructions do not generalize to the quantum setting, and we observe that they sometimes even fail completely (e.g., the universal-hash then PRF paradigm for proving security of NMAC). Instead, we propose a direct hybrid argument as a new proof strategy (both classically and quantumly). We first show that a quantum-secure PRF is secure against key-recovery attacks, and remains secure under random leakage of the key. Next, as a key technical tool, we extend the oracle indistinguishability framework of Zhandry in two directions: we consider distributions on functions rather than strings, and we also consider a relative setting, where an additional oracle, possibly correlated with the distributions, is given to the adversary as well. This enables a hybrid argument to prove the security of NMAC. Security proofs for other constructions follow similarly.
Victor Cauchois, Clément Gomez, Reynald Lercier
ePrint Report
We consider highly structured truncated differential paths to mount rebound attacks on hash functions based on AES-like permutations. We explain how such differential paths can be computed using a Mixed-Integer Linear Programming approach. Together with the SuperSBox description, this allows us to build a rebound attack with a $6$-round inbound phase whereas classical rebound attacks have $4$-round inbound phases. Non-square AES-like permutations seem to be more vulnerable than square ones. We illustrate this new technique by mounting the first distinguishing attack on an $11$-round version of the Grøstl-512 internal permutation $P_{1024}$ with $\mathit{O}(2^{72})$ computational complexity and $\mathit{O}(2^{56})$ memory complexity, to be compared with the $\mathit{O}(2^{96})$ required computations of the corresponding generic attack. Previous best results on this permutation reached $10$ rounds with a computational complexity of $\mathit{O}(2^{392})$, to be compared with $\mathit{O}(2^{448})$ required by the corresponding generic attack.
Maciej Obremski, Maciej Skorski
ePrint Report
Universal hashing has found many applications in computer science. In cryptography the most important fact about universal families is the so-called Leftover Hash Lemma, proved by Impagliazzo, Levin and Luby. In the language of modern cryptography it states that almost universal families are good extractors.
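
For reference, one common form of the statement (standard, not specific to this paper): a family $\mathcal{H}$ of functions $h:\{0,1\}^n\to\{0,1\}^m$ is universal if $\Pr_{h\leftarrow\mathcal{H}}[h(x)=h(y)]\le 2^{-m}$ for all $x\neq y$, and the Leftover Hash Lemma says that if $H_\infty(X)\ge k$ then

$$\mathrm{SD}\bigl((h,h(X)),\,(h,U_m)\bigr)\;\le\;\tfrac{1}{2}\sqrt{2^{\,m-k}},$$

i.e. universal hashing extracts nearly uniform bits from any source with enough min-entropy.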

In this work we provide a somewhat surprising characterization in the opposite direction. Namely, every extractor with sufficiently good parameters yields a universal family on a noticeable fraction of its inputs.

Our proof technique is based on tools from extremal graph theory applied to the "collision graph" induced by the extractor, and may be of independent interest. We discuss implications for randomness extractors and non-malleable codes.
Maciej Skorski
ePrint Report
Barak et al. (CRYPTO'11) initiated the study of so-called square-friendly applications, which offer good security for keys with an entropy deficiency (weak keys) and are for this reason important for key derivation. The state of the art of security bounds was established by Dodis and Yu (TCC'13) by modeling "weak" keys as distributions of high collision entropy.

In this paper we answer the question of what the minimum requirement is for weak keys to be "good" for these applications. The answer gives an elegant operational meaning to the notion of smooth collision entropy. Namely, smooth collision entropy is both sufficient and necessary (with essentially the same entropy parameters) to guarantee the security of square-friendly applications under weak keys. This characterization is a consequence of constrained optimization techniques.
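
For reference (standard definitions, not the paper's result): the collision entropy of a distribution $X$ and its $\varepsilon$-smooth variant are

$$H_2(X)=-\log_2\sum_{x}\Pr[X=x]^2,\qquad H_2^{\varepsilon}(X)=\max_{Y:\ \mathrm{SD}(X,Y)\le\varepsilon} H_2(Y),$$

so the result says that a weak key is "good" for square-friendly applications exactly when its smooth collision entropy is high enough.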
Thomas Espitau, Pierre-Alain Fouque, Benoit Gerard, Mehdi Tibouchi
ePrint Report
In this paper, we investigate the security of the BLISS lattice-based signature scheme, one of the most promising candidates for post-quantum-secure signatures, against side-channel attacks. Several works have been devoted to its efficient implementation on various platforms, from desktop CPUs to micro-controllers and FPGAs, and more recent papers have also considered its security against certain types of physical attacks, notably fault injection and cache attacks. We turn to more traditional side-channel analysis, and describe several attacks that can yield a full key recovery.

We first identify a serious source of leakage in the rejection sampling algorithm used during signature generation. Existing implementations of that rejection sampling step, which is essential for security, actually leak the "relative norm" of the secret key. We show how an extension of an algorithm due to Howgrave-Graham and Szydlo can be used to recover the key from that relative norm, at least when the absolute norm is easy to factor (which happens for a significant fraction of secret keys). We describe how this leakage can be exploited in practice both on an embedded device (an 8-bit AVR microcontroller) using electromagnetic analysis (EMA), and on a desktop computer (a recent Intel CPU running Linux) using branch tracing. The latter attack has been mounted against the open source VPN software strongSwan.

We also show that other parts of the BLISS signing algorithm can leak secrets not just for a subset of secret keys, but for 100% of them. The BLISS Gaussian sampling algorithm in strongSwan is intrinsically variable-time. This would be hard to exploit using a noisy source of leakage like EMA, but branch tracing allows the entire randomness, and hence the key, to be recovered: we show that a single execution of the strongSwan signature algorithm is actually sufficient for full key recovery. We also describe a more traditional side-channel attack on the sparse polynomial multiplications carried out in BLISS: classically, such multiplications can be attacked using DPA; however, our 8-bit AVR target implementation uses repeated shifted additions instead. Surprisingly, we manage to obtain a full key recovery in that setting using integer linear programming from a single EMA trace.
Craig Costello, Huseyin Hisil
ePrint Report
We derive a new formula for computing arbitrary odd-degree isogenies between elliptic curves in Montgomery form. The formula lends itself to a simple and compact algorithm that can efficiently compute any low odd-degree isogenies inside the supersingular isogeny Diffie-Hellman (SIDH) key exchange protocol. Our implementation of this algorithm shows that, beyond the commonly used 3-isogenies, there is a moderate degradation in relative performance of (2d+1)-isogenies as d grows, but that larger values of d can now be used in practical SIDH implementations.
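
For readers who want to see the shape of such a formula, the affine form of the odd-degree evaluation map (as it is usually stated in the follow-up literature; see the paper for the derivation, the projective version, and the image-curve coefficient) sends a point with $x$-coordinate $x$ through an $\ell=2d+1$ isogeny with kernel $x$-coordinates $x_1,\dots,x_d$ to

$$x' \;=\; x\cdot\prod_{i=1}^{d}\left(\frac{x\,x_i-1}{x-x_i}\right)^{2}.$$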

We further show that the proposed algorithm can be used to compute both isogenies of curves and evaluate isogenies at points, unifying the two main types of functions needed for isogeny-based public-key cryptography. Together, these results open the door for practical SIDH on a much wider class of curves, and allow for simplified SIDH implementations that only need to call one general-purpose function inside the fundamental computation of the large degree secret isogenies.

As an auxiliary contribution, we also give new explicit formulas for 3- and 4-isogenies, and show that these give immediate speedups when substituted into pre-existing SIDH libraries.
Guilhem Castagnos, Laurent Imbert, Fabien Laguillaumie
ePrint Report
At CRYPTO 2016, Couteau, Peters and Pointcheval introduced a new primitive called Encryption Switching Protocols (ESP), allowing ciphertexts to be switched between two encryption schemes. If such an ESP is built with two schemes that are respectively additively and multiplicatively homomorphic, it naturally gives rise to a secure 2-party computation protocol. It is thus perfectly suited for evaluating functions, such as multivariate polynomials, given as arithmetic circuits. Couteau et al. built an ESP to switch between Elgamal and Paillier encryptions, which do not naturally fit well together. Consequently, they had to design a clever variant of Elgamal over $\mathbf{Z}/n\mathbf{Z}$ with a costly shared decryption.
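
As a toy illustration of the multiplicative homomorphism that one side of such a protocol provides (textbook Elgamal over a small prime field, insecure parameters, not the paper's variant):

```python
# Toy illustration of Elgamal's multiplicative homomorphism (insecure parameters).
import random

p = 2**127 - 1                      # a Mersenne prime, for illustration only
g = 3
sk = random.randrange(2, p - 1)     # secret key
pk = pow(g, sk, p)                  # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), m * pow(pk, r, p) % p

def dec(c1, c2):
    return c2 * pow(c1, p - 1 - sk, p) % p

a, b = 1234, 5678
ca, cb = enc(a), enc(b)
# Component-wise multiplication of ciphertexts encrypts the product a*b.
prod = (ca[0] * cb[0] % p, ca[1] * cb[1] % p)
assert dec(*prod) == a * b % p
```
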

In this paper, we first present a conceptually simple generic construction for encryption switching protocols. We then give an efficient instantiation of our generic approach that uses two well-suited protocols, namely a variant of Elgamal in $\mathbf{Z}/p\mathbf{Z}$ and the Castagnos-Laguillaumie encryption scheme, which is additively homomorphic over $\mathbf{Z}/p\mathbf{Z}$. Among other advantages, this allows all computations to be performed modulo a prime $p$ instead of an RSA modulus. Overall, our solution leads to significant reductions in the number of rounds as well as the number of bits exchanged by the parties during the interactive protocols. We also show how to extend its security to the malicious setting.
Bita Darvish Rouhani, M. Sadegh Riazi, Farinaz Koushanfar
ePrint Report
This paper proposes DeepSecure, a novel framework that enables scalable execution of state-of-the-art deep learning (DL) models in a privacy-preserving setting. DeepSecure targets scenarios in which neither of the involved parties, the cloud servers that hold the DL model parameters nor the delegating clients who own the data, is willing to reveal its information. Our framework is the first to enable accurate and scalable DL analysis of data generated by distributed clients without sacrificing security to maintain efficiency. The secure DL computation in DeepSecure is performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized realizations of various components used in DL. Our optimized implementation achieves more than 58-fold higher throughput per sample compared with the best prior solution. In addition to our optimized GC realization, we introduce a set of novel low-overhead pre-processing techniques which further reduce the overall GC runtime in the context of deep learning. Extensive evaluations of various DL applications demonstrate up to two orders of magnitude of additional runtime improvement as a result of our pre-processing methodology. We also provide mechanisms to securely delegate GC computations to a third party in constrained embedded settings.

01 June 2017

Felix Günther, Sogol Mazaheri
ePrint Report
Secure channel protocols protect data transmission over a network from being overheard or tampered with. In the common abstraction, cryptographic models for channels involve a single key for ensuring the central security notions of confidentiality and integrity. The next version of the Transport Layer Security protocol, TLS 1.3, currently under development, however introduces a key updating mechanism in order to deploy a sequence of multiple, possibly independent encryption keys in its channel sub-protocol. This design aims at achieving forward security, protecting prior communication after long-term key corruption, as well as security of individual channel phases even if the key in other phases is leaked (a property we denote as phase-key insulation). Neither of these security aspects has been treated formally in the context of cryptographic channels so far, leading to a current lack of techniques for evaluating such channel designs cryptographically.
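
A minimal Python sketch of the kind of key updating such a multi-key channel models (a generic one-way PRF chain, not TLS 1.3's actual key schedule): each phase key is derived one-way from its predecessor, so exposing a later key does not reveal earlier phases.

```python
# Generic one-way key-update chain (not TLS 1.3's actual key schedule).
import hmac, hashlib

def next_key(key: bytes) -> bytes:
    # One-way update via a PRF; forward security of past phases rests on the
    # adversary's inability to invert this step.
    return hmac.new(key, b"key update", hashlib.sha256).digest()

k0 = bytes(32)            # initial channel key (placeholder value)
k1 = next_key(k0)         # phase 1 key
k2 = next_key(k1)         # phase 2 key
```
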

We approach this gap by introducing the first formal model of multi-key channels, where sender and receiver can update their shared secret key during the lifetime of the channel without interrupting the communication. We present modular, game-based notions for confidentiality and integrity, integrating forward security and phase-key insulation as two advanced security aspects. As we show, our framework of notions on the lower end of its hierarchy naturally connects to the existing notions of stateful encryption established for single-key channels. Like for classical channels, it further allows for generically composing chosen-ciphertext confidentiality from chosen-plaintext confidentiality and ciphertext integrity. We instantiate the strongest security notions in our model with a construction based on authenticated encryption with associated data and a pseudorandom function. Being comparatively close, our construction additionally enables us to discuss the TLS 1.3 record protocol design.
Michel Abdalla, Fabrice Benhamouda, Alain Passelègue
ePrint Report
Due to the vast number of successful related-key attacks against existing block ciphers, related-key security has become a common design goal for such primitives. In these attacks, the adversary is not only capable of seeing the output of a function on inputs of its choice, but also on related keys. At Crypto 2010, Bellare and Cash proposed the first construction of a pseudorandom function that could provably withstand such attacks based on standard assumptions. Their construction, as well as several others that appeared more recently, have in common the fact that they only consider linear or polynomial functions of the secret key over complex groups. In reality, however, most related-key attacks have a simpler form, such as the XOR of the key with a known value. To address this problem, we propose the first construction of an RKA-secure pseudorandom function for XOR relations. Our construction relies on multilinear maps and, hence, can only be seen as a feasibility result. Nevertheless, we remark that it can be instantiated under two of the existing multilinear-map candidates since it does not reveal any encodings of zero. To achieve this goal, we rely on several techniques that were used in the context of program obfuscation, but we also introduce new ones to address challenges that are specific to the related-key-security setting.
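
For context, one standard formulation of RKA-PRF security for the XOR relation class (a textbook definition, not this paper's construction): a keyed function $F$ is RKA-secure if no efficient adversary can distinguish the oracles

$$(\Delta, x)\;\mapsto\;F(k\oplus\Delta,\,x)\qquad\text{and}\qquad(\Delta, x)\;\mapsto\;G(k\oplus\Delta,\,x),$$

where $k$ is a random secret key and $G$ is a truly random function of its two arguments.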
Fuchun Guo, Rongmao Chen, Willy Susilo, Jianchang Lai, Guomin Yang, Yi Mu
ePrint Report
Optimal security reductions for unique signatures (Coron, Eurocrypt 2002) and their generalization, i.e., efficiently re-randomizable signatures (Hofheinz et al., PKC 2012 and Bader et al., Eurocrypt 2016), have been well studied in the literature. In particular, it has been shown that under a non-interactive hard assumption, any security reduction (with or without random oracles) for a unique signature scheme or an efficiently re-randomizable signature scheme must lose a factor of at least $q_s$ in the security model of existential unforgeability against chosen-message attacks (EU-CMA), where $q_s$ denotes the number of signature queries. Note that the number $q_s$ can be as large as $2^{30}$ in practice. From these impossibility results, all unique signature schemes and efficiently re-randomizable signature schemes were concluded to be accompanied by loose reductions.

Somewhat surprisingly, and in contrast to the previous impossibility results (Coron, Eurocrypt 2002; Hofheinz et al., PKC 2012; Bader et al., Eurocrypt 2016), in this work we show that without changing the assumption type and security model, it is not always the case that a security reduction must lose a factor of at least $q_s$. As a counterexample, we propose a unique signature scheme with a tight reduction in the EU-CMA security model under the Computational Diffie-Hellman (CDH) assumption. Precisely, in the random oracle model, we can program a security reduction with a loss factor of at most $nq^{1/n}$, where $n$ can be any integer independent of the security parameter for the scheme construction and $q$ is the number of hash queries to random oracles. The loss factor in our reduction can be very small. Considering $n=25$ and $q=2^{50}$ as an example, the loss factor is at most $nq^{1/n}=100$, and therefore our security reduction is tight.

Notice that the previous impossibility results are derived from proofs via a so-called meta-reduction technique. We stress that, rather than indicating any flaw in their meta-reduction proofs, our counterexample merely demonstrates that the given meta-reduction proofs fail to capture all security reductions. More precisely, we adopt a reduction called a query-based reduction, where the reduction uses a hash query from the adversary to solve an underlying hard problem. We show that the meta-reduction proofs break down for our query-based reduction. The query-based reduction is not a new notion and it has been adopted in encryption proofs, but this work is the first to apply query-based reductions to digital signatures.

The counterexample given in this work is of independent interest, as it implies a generic way of constructing a digital signature scheme (including unique signatures) with a tight reduction in the random oracle model from a digital signature scheme with a loose reduction. Although our proposed methodology is somewhat impractical due to its inefficient signature length, it introduces a completely new approach to tight proofs that differs from traditional approaches using a random salt.