International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

26 September 2016

Wesleyan University, Middletown, Connecticut, USA
Job Posting

The Department of Mathematics and Computer Science at Wesleyan University invites applications for a tenure-track assistant professorship in mathematics to begin in fall 2017. Candidates must possess, or be close to completing, a Ph.D. or equivalent degree, and must have strong records in both research and teaching.

We seek strong candidates whose research interests are in applied mathematics, broadly construed, and compatible with the existing research interests of the department. Relevant areas include, but are by no means limited to, algebraic statistics, applied topology, coding theory, discrete and computational geometry, cryptography, optimization, and stochastic processes. Exceptional candidates in any field compatible with the research interests of the faculty are also encouraged to apply.

Teaching duties will be two courses in each semester, typically including a graduate-level course. Additional duties include advising and mentoring students, and participating in faculty governance at the departmental and university level.

For full consideration applications must be received by November 15, 2016, and must include a cover letter, curriculum vitae, research statement, teaching statement, and at least four letters of recommendation, one of which evaluates teaching. As part of the teaching statement, we invite you to describe your cultural competencies and experiences engaging a diverse student body. Applications must be submitted online at mathjobs.org.

For a full version of our job description, please visit

https://www.mathjobs.org/jobs/jobs/8903

Closing date for applications: 15 November 2016

Contact: Adam Fieldsteel, Chair, Department of Mathematics and Computer Science, Wesleyan University, Middletown, CT 06459,

email: afieldsteel (at) wesleyan.edu

More information: https://www.mathjobs.org/jobs/jobs/8903

Security Lab, SICS Swedish ICT, Sweden
Job Posting
The Security Lab at SICS Swedish ICT, Sweden, invites applications for a 1-year postdoctoral researcher position in the area of cloud infrastructure, software-defined networking (SDN) or the Internet of Things (IoT), to begin in November 2016 or later (latest starting date January 1, 2017). The position can be based in Stockholm or Lund, depending on the successful applicant's preference.

The successful applicant will have a solid background in one or more of the following topics:

• Network virtualization security

• Secure cloud orchestration

• Platform virtualization security

• Secure and robust communication in the IoT

• Secure and efficient key management for the IoT

• Security and privacy for authorization in the IoT

Applicants with either theoretical skills or practical implementation skills are welcome. The successful applicant is expected to take an active role in the scientific research projects conducted in the lab and will have the opportunity to participate in the co-supervision of Master's or PhD students. There is also the potential to participate in industry collaborations. In order to increase gender equality, we especially welcome female applicants.

Requirements:

• Master’s degree (or equivalent) in Computer Science, Mathematics, or a similar discipline

• Extensive knowledge of IT security, demonstrated by publications in the area

• Excellent English language skills

• Applicants must be cleared to graduate from their PhD program by the commencement of the position.

Monthly salary: 35,000 SEK

Closing date for applications: 15 October 2016

Contact: Christian Gehrmann

Lab Manager, SICS Security

More information: https://www.sics.se/groups/security-lab-sec#node-12248


24 September 2016

Subhadeep Banik, Andrey Bogdanov, Francesco Regazzoni
ePrint Report
The implementation of the AES encryption core by Moradi et al. at Eurocrypt 2011 is one of the smallest in terms of gate area. The circuit takes around 2400 gates and operates on an 8-bit datapath. However, it is an encryption-only core and cannot cater to block cipher modes like CBC and ELmD that require access to both the AES encryption and decryption modules. In this paper we investigate whether the basic circuit of Moradi et al. can be tweaked to provide the dual functionality of encryption and decryption (ENC/DEC) while keeping the hardware overhead as low as possible. As a result, we report an 8-bit serialized AES circuit that provides the functionality of both encryption and decryption and occupies around 2645 GE with a latency of 226 cycles. This is a substantial improvement over the next smallest AES ENC/DEC circuit (Grain of Sand) by Feldhofer et al., which takes around 3400 gates but has a latency of over 1000 cycles for both encryption and decryption.
Matthias Hamann, Matthias Krause, Willi Meier
ePrint Report
Time-memory-data (TMD) tradeoff attacks limit the security level of many classical stream ciphers (like E0, A5/1, Trivium, Grain) to $n/2$, where $n$ denotes the inner state length of the underlying keystream generator. In this paper, we present LIZARD, a lightweight stream cipher for power-constrained devices like passive RFID tags. Its hardware efficiency results from combining a Grain-like design with the FP(1)-mode, a recently suggested construction principle for the state initialization of stream ciphers, which offers provable ($2n/3$)-security against TMD tradeoff attacks aiming at key recovery. LIZARD uses 120-bit keys, 64-bit IVs and has an inner state length of 121 bits. It is supposed to provide 80-bit security against key recovery attacks. LIZARD allows the generation of up to $2^{18}$ keystream bits per key/IV pair, which would be sufficient for many existing communication scenarios like Bluetooth, WLAN or HTTPS.
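
As a rough illustration of the security levels involved (using only the bounds quoted above, not further details from the paper), the following sketch evaluates the generic $n/2$ TMD tradeoff bound and the $2n/3$ FP(1)-mode bound for a few state sizes; the Trivium and Grain v1 state sizes are their standard published values, and the LIZARD value is the one stated in the abstract.

```python
# Illustrative arithmetic only: generic time-memory-data tradeoffs balance
# T = M = D at roughly 2^(n/2), while the FP(1)-mode bound quoted in the
# abstract is 2^(2n/3). State sizes below are standard published values.
def tmd_security_levels(n_state_bits):
    generic = n_state_bits / 2       # classical TMD tradeoff bound
    fp1_mode = 2 * n_state_bits / 3  # bound claimed for the FP(1)-mode
    return generic, fp1_mode

for name, n in [("Trivium", 288), ("Grain v1", 160), ("LIZARD", 121)]:
    generic, fp1_mode = tmd_security_levels(n)
    print(f"{name:8s} state = {n:3d} bits: "
          f"generic bound ~ 2^{generic:.1f}, FP(1)-mode bound ~ 2^{fp1_mode:.1f}")

# LIZARD: 2 * 121 / 3 ~ 80.7, consistent with the claimed 80-bit
# security against key recovery.
```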
Liang Wang, Rafael Pass, abhi shelat, Thomas Ristenpart
ePrint Report
We introduce secure channel injection (SCI) protocols, which allow one party to insert a private message into another party's encrypted communications. We construct an efficient SCI protocol for communications delivered over TLS, and use it to realize anonymous proofs of account ownership for SMTP servers. This allows alice@mail.com to prove ownership of some email address @mail.com, without revealing ``alice'' to the verifier. We show experimentally that our system works with standard email server implementations as well as Gmail. We go on to extend our basic SCI protocol to realize a ``blind'' certificate authority: the account holder can obtain a valid X.509 certificate binding alice@mail.com to her public key, if it can prove ownership of some email address @mail.com. The authority never learns which email account is used.
Koh-ichi Nagao
ePrint Report
In 2012, Petit et al. showed that under the algebraic-geometric assumption named the "First Fall Degree Assumption", the complexity of the ECDLP over a binary extension field ${\bf F}_{2^n}$ is in $O(\exp(n^{2/3+o(1)}))$, where $\lim_{n \to \infty} o(1)=0$, and there have been many generalizations and improvements of the complexity of the ECDLP under this assumption. In 2015, the author proposed the bit coincidence mining algorithm, which states that under a heuristic assumption on the complexity of the XL algorithm, the complexity of the ECDLP on $E/{\bf F}_q$ over an arbitrary finite field, including prime fields, is in $O(\exp(n^{1/2+o(1)}))$, where $n \sim \log_2 \#E({\bf F}_q) \sim \log_2 q$. It is the first (heuristic) algorithm for solving the ECDLP over prime fields in subexponential complexity. In both works, the ECDLP is reduced to solving a large equation system and, under the respective assumption, the complexity of solving the reduced equation system is subexponential (or polynomial). However, the obtained equation systems are too large to be solved in practical time and space, so these are only complexity results.

The XL algorithm solves a quadratic equation system consisting of $n$ variables and $m$ equations; here, $n$ and $m$ are considered as parameters. Let $D=D(n,m)$ denote the maximal degree of the polynomials that appear when solving the equation system by XL. Courtois et al. observe and assume the following:

1) There is a small integer $C_0$ such that $D(n,n+C_0)$ is usually in $O(\sqrt{n})$, and the cost of solving the equation system is in $O(\exp(n^{1/2+o(1)}))$. However, this observation is optimistic and requires the following additional assumption:

2) The equation system has a small number of solutions over the algebraic closure. (In this draft we assume the number of solutions is 0 or 1.)

In the previous version of the bit coincidence mining algorithm (2015), the number of solutions of the resulting equation system over the algebraic closure is small and can be probabilistically controlled to be 1, so assumption 2) holds indirectly. In the author's view, the reason the XL algorithm, beautiful heuristic though it is, is not widely used is that general equation systems over finite fields do not satisfy assumption 2) (they have many solutions over the algebraic closure), so their complexity is much larger.

In the previous draft, I showed that the ECDLP on $E({\bf F}_q)$ reduces to solving an equation system consisting of $d-1$ variables and $d+C_0-1$ equations, where $C_0$ is an arbitrary positive integer and $d \sim C_0 \times \log_2 q$. So the complexity of solving the ECDLP is subexponential under the following assumption:

a) There is some positive integer $C_0$, independent of $n$, such that when a quadratic equation system consisting of $n$ variables and $m=n+C_0$ equations (and satisfying assumption 2)) is solved by the XL algorithm, the maximal degree $D=D(n,m)$ of the polynomials appearing in this routine is in $O(\sqrt{n})$ with high probability.

Here, we propose a new algorithm by which the ECDLP on $E({\bf F}_q)$ essentially reduces to solving an equation system consisting of $d-1$ variables and $\frac{b_0}{2}d$ equations, where $b_0(\ge 2)$ is an arbitrary positive integer called the block size and $d \sim (b_0-1)\log_{b_0} q$. Here, we mainly treat the case of block size $b_0=3$. In this case, the ECDLP essentially reduces to solving an equation system consisting of about $2 \log_3 q$ variables and $3 \log_3 q$ equations, so that the desired assumption 1) is always true. Moreover, the number of solutions (over the algebraic closure) of this equation system can be probabilistically controlled to be 1, so the desired assumption 2) is also true.

In the former part of this manuscript, the author describes the algorithm constructing the equation system to which the ECDLP is reduced, and in the latter part, the author presents the ideas and devices for increasing the number of equations, so that the obtained equation system can easily be solved by the XL algorithm.
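
As a back-of-the-envelope illustration of the parameter formulas quoted above (and only of those formulas; the reduction from the ECDLP to such a system is the content of the paper), the following sketch computes the approximate number of variables and equations for block size $b_0 = 3$ and a couple of example group sizes.

```python
from math import log

# Sketch based only on the parameter formulas quoted in the abstract
# (d ~ (b0 - 1) * log_{b0} q, d - 1 variables, (b0 / 2) * d equations);
# the reduction of the ECDLP to such a system is what the paper constructs.
def system_size(log2_q, b0=3):
    log_b0_q = log2_q / log(b0, 2)   # log_{b0} q
    d = (b0 - 1) * log_b0_q
    return d, d - 1, (b0 / 2) * d    # d, #variables, #equations

for bits in (160, 256):
    d, n_vars, n_eqs = system_size(bits, b0=3)
    print(f"log2 q = {bits}: d ~ {d:.0f}, "
          f"~{n_vars:.0f} variables, ~{n_eqs:.0f} equations")

# For b0 = 3 this matches the abstract's "about 2*log_3 q variables
# and 3*log_3 q equations".
```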
Erick Nascimento, Lukasz Chmielewski, David Oswald, Peter Schwabe
ePrint Report
Side-channel attacks against implementations of elliptic-curve cryptography have been extensively studied in the literature and a large tool-set of countermeasures is available to thwart different attacks in different contexts. The current state of the art in attacks and countermeasures is nicely summarized in multiple survey papers, the most recent one by Danger et al. However, any combination of those countermeasures is ineffective against attacks that require only _a single trace_ and directly target a conditional move (cmov) -- an operation that is at the very foundation of all scalar-multiplication algorithms. This operation can either be implemented through arithmetic operations on registers or through various different approaches that all boil down to loading from or storing to a secret address. In this paper we demonstrate that such an attack is indeed possible for ECC software running on AVR ATmega microcontrollers, using a protected version of the popular $\mu$NaCl library as an example. For the targeted implementations, we are able to recover 99.6% of the key bits for the arithmetic approach and 95.3% of the key bits for the approach based on secret addresses, with confidence levels 76.1% and 78.8%, respectively. All publicly available ECC software for the AVR that we are aware of uses one of the two approaches and is thus in principle vulnerable to our attacks.
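
For readers unfamiliar with the two cmov styles the abstract distinguishes, here is a generic illustrative sketch (in Python, for concreteness; it is not the protected $\mu$NaCl code that is actually attacked) contrasting the arithmetic, mask-based approach with selection through a secret index, the analogue of a load from a secret address.

```python
# Generic illustration of the two branch-free cmov styles distinguished in
# the abstract; this is not the protected muNaCl code targeted by the attack.

MASK8 = 0xFF  # pretend we operate on 8-bit registers, as on AVR

def cmov_arithmetic(bit, a, b):
    """Arithmetic approach: expand the secret bit into an all-0/all-1 mask."""
    mask = (-bit) & MASK8                      # 0x00 if bit == 0, 0xFF if bit == 1
    return (a & (~mask & MASK8)) | (b & mask)  # select a or b without branching

def cmov_secret_index(bit, a, b):
    """Address-based approach: load from a position derived from the secret bit."""
    table = [a, b]   # the index (i.e. the load address offset) is secret
    return table[bit]

for bit in (0, 1):
    assert cmov_arithmetic(bit, 0x12, 0x34) == cmov_secret_index(bit, 0x12, 0x34)
    print(f"bit={bit}: cmov -> {hex(cmov_arithmetic(bit, 0x12, 0x34))}")
```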
Wei Yang, Yuchen Cao, Ke Ma, Hailong Zhang, Yongbin Zhou, Baofeng Li
ePrint Report
Evaluating the side-channel attack (SCA) resilience of a crypto device is important and necessary. SCA security evaluation criteria include an information theoretic metric and a security metric. The former measures the amount of leakage of a crypto device and should be independent of the evaluator. However, the current metrics, e.g. mutual information (MI), conditional entropy and perceived information, depend on the leakage model selected by the evaluator; they reflect only the utilization of the leakage, rather than the real leakage level of a crypto device. In light of this, we analyze the side channel as a communication channel and develop two objective metrics, the average MI and the capacity of the channel, to characterize the real amount of leakage of a crypto device and its upper bound through communication theory. Although the channel capacity is a rough estimate of the leakage of the device, it captures the amount the device may leak in the worst-case scenario. We investigate estimation methods for the two metrics in different noise scenarios. In addition, a leakage detection method based on a consistency check is developed. The proposed method is capable of finding the points of interest (POIs) in leakage traces while introducing few leakage points that cannot be used to mount SCAs. Experiments show the effectiveness of the proposed method.
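
As a minimal illustration of viewing the side channel as a communication channel, the sketch below applies the textbook capacity formula for an additive white Gaussian noise channel, $C = \frac{1}{2}\log_2(1+\mathrm{SNR})$ bits per use; this is only a stand-in for intuition and is not claimed to be the estimator developed in the paper. The variance values are arbitrary examples.

```python
from math import log2

# Stand-in for intuition only: if leakage = signal + Gaussian noise, the
# capacity of the corresponding channel is 0.5 * log2(1 + SNR) bits per
# trace. This is the textbook AWGN formula, not necessarily the exact
# estimator developed in the paper.
def gaussian_channel_capacity(signal_var, noise_var):
    snr = signal_var / noise_var
    return 0.5 * log2(1.0 + snr)

for noise_var in (0.5, 2.0, 8.0):
    cap = gaussian_channel_capacity(signal_var=2.0, noise_var=noise_var)
    print(f"signal_var = 2.0, noise_var = {noise_var:4.1f} "
          f"-> capacity ~ {cap:.3f} bits/trace")
```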
Houssem Maghrebi, Thibault Portigliatti, Emmanuel Prouff
ePrint Report
The template attack is the most common and powerful profiled side-channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches, mainly by applying machine learning techniques. The results obtained are comparable to, and in some particular cases better than, those of the template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.
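
To make the profiling-as-classification viewpoint concrete, here is a toy sketch on simulated Hamming-weight leakage using a small multilayer perceptron from scikit-learn; the S-box, leakage model, network architecture and trace counts are all illustrative assumptions and bear no relation to the paper's actual deep-learning setups or datasets.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
SBOX_DUMMY = rng.permutation(256)  # stand-in S-box, not AES's

def hw(x):
    return bin(int(x)).count("1")

def simulate_traces(key_byte, n, noise=1.0):
    """Toy leakage: Hamming weight of S(p XOR k) plus Gaussian noise."""
    plaintexts = rng.integers(0, 256, size=n)
    values = SBOX_DUMMY[plaintexts ^ key_byte]
    leakage = np.array([hw(v) for v in values]) + rng.normal(0.0, noise, n)
    return plaintexts, leakage.reshape(-1, 1), values

# Profiling phase: learn to predict the Hamming-weight class of the intermediate.
_, X_prof, vals_prof = simulate_traces(key_byte=0x2B, n=20000)
y_prof = np.array([hw(v) for v in vals_prof])
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X_prof, y_prof)
class_index = {int(c): i for i, c in enumerate(model.classes_)}

# Attack phase: score every key hypothesis by the log-likelihood of the traces.
pts_atk, X_atk, _ = simulate_traces(key_byte=0x2B, n=200)
log_probs = np.log(model.predict_proba(X_atk) + 1e-12)
scores = np.zeros(256)
for k in range(256):
    classes = [hw(SBOX_DUMMY[p ^ k]) for p in pts_atk]
    scores[k] = sum(log_probs[i, class_index[c]] for i, c in enumerate(classes))
print("best key guess:", hex(int(np.argmax(scores))))
```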

21 September 2016

Paul Grubbs, Richard McPherson, Muhammad Naveed, Thomas Ristenpart, Vitaly Shmatikov
ePrint Report
We develop a systematic approach for analyzing client-server applications that aim to hide sensitive user data from untrusted servers. We then apply it to Mylar, a framework that uses multi-key searchable encryption (MKSE) to build Web applications on top of encrypted data. We demonstrate that (1) the Popa-Zeldovich model for MKSE does not imply security against either passive or active attacks; (2) Mylar-based Web applications reveal users’ data and queries to passive and active adversarial servers; and (3) Mylar is generically insecure against active attacks due to system design flaws. Our results show that the problem of securing client-server applications against actively malicious servers is challenging and still unsolved. We conclude with general lessons for the designers of systems that rely on property-preserving or searchable encryption to protect data from untrusted servers.
Iddo Bentov, Rafael Pass, Elaine Shi
ePrint Report
Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work, which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced that moving to alternative paradigms is a top priority. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where, roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community has rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensable ingredient of a distributed consensus protocol, particularly one that is to underlie a high-value cryptocurrency system.

In this work, we seek to address the following basic questions:

• What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application?

• Can we design a provably secure protocol that satisfies these requirements?

To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.
Iddo Bentov, Rafael Pass, Elaine Shi
ePrint Report
The distributed systems literature adopts two primary network models, the synchronous model where honest messages are delivered in the next round, and the partially synchronous (or asynchronous) model where honest messages are subject to unpredictable adversarial delays. In this paper, we show that more nuanced formal models exist beyond the traditional synchrony and asynchrony stratification -- and interestingly, such new models allow us to articulate new robustness properties that traditional models would have failed to capture. More specifically, we articulate a new formal model called “the sleepy model of consensus”, where we classify honest nodes as being either alert or sleepy. Alertness implies that the node is online and has good network connections, whereas sleepiness captures any type of failure or network jitter. We then describe the Sleepy consensus protocol, which achieves security as long as, at any time, the alert nodes outnumber the corrupt ones. No classical synchronous or asynchronous protocols attain such robustness guarantees, and yet we show how to leverage Nakamoto’s blockchain protocol, but without proofs-of-work, to achieve these properties, assuming collision-resistant hash functions, the existence of a public-key infrastructure and a common reference string.
Rafael Pass, Elaine Shi
ePrint Report
Consensus, or state machine replication, is a foundational building block of distributed systems and modern cryptography. Consensus in the classical, permissioned setting has been extensively studied in the 30 years of distributed systems literature. Recent developments in Bitcoin and other decentralized cryptocurrencies popularized a new form of consensus in a “permissionless” setting, where anyone can join and leave dynamically, and there is no a-priori knowledge of the consensus nodes. Despite this exciting breakthrough, today’s permissionless consensus protocols, often referred to as “blockchains”, are known to have terrible performance, which has resulted in heated, and at times acrimonious, debates in the community. First, we show that unfortunately a performance loss is inherent for any protocol that is secure against at least 1/3 corruptions in hashpower. Specifically, we formally define a new performance measure called responsiveness, and show that any responsive permissionless consensus protocol cannot tolerate 1/3 or more corruptions in hashpower. Next, we show a tightly matching upper bound. Specifically, we propose a new permissionless consensus protocol called hybrid consensus, which is responsive and secure against up to 1/3 corruptions in hashpower. The idea of hybrid consensus is to bootstrap fast permissionless consensus by combining an inefficient blockchain protocol with a fast permissioned consensus protocol. Hybrid consensus uses the blockchain not to agree on transactions, but to agree on rotating committees, which in turn execute permissioned consensus protocols to agree on transactions. While the high-level idea is intuitive, formally instantiating and reasoning about the protocol exposed a multitude of non-trivial technical subtleties and challenges.
Rafael Pass, Elaine Shi
ePrint Report
Nakamoto's famous blockchain protocol enables achieving consensus in a so-called permissionless setting---anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents ``sybil attacks'' (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. ``moderately hard functions'') introduced by Dwork and Naor (Crypto'92). Recent works by Garay et al. (EuroCrypt'15) and Pass et al. (manuscript, 2016) demonstrate that this protocol provably achieves consistency and liveness assuming a) honest players control a majority of the computational power in the network, b) the puzzle hardness is appropriately set as a function of the maximum network delay and the total computational power of the network, and c) the computational puzzle is modeled as a random oracle.

Assuming honest participation, however, is a strong assumption, especially in a setting where honest players are expected to perform a lot of work (to solve the computational puzzles). In Nakamoto's Bitcoin application of the blockchain protocol, players are incentivized to solve these puzzles by receiving rewards for every ``block'' (of transactions) they contribute to the blockchain. An elegant work by Eyal and Sirer (FinancialCrypt'14), strengthening and formalizing an earlier attack discussed on the Bitcoin forum, demonstrates that a coalition controlling even a minority fraction of the computational power in the network can gain (close to) 2 times its ``fair share'' of the rewards (and transaction fees) by deviating from the protocol instructions. In contrast, in a fair protocol, one would expect players controlling a $\phi$ fraction of the computational resources to reap a $\phi$ fraction of the rewards.

In this work, we present a new blockchain protocol---the FruitChain protocol---which satisfies the same consistency and liveness properties as Nakamoto's protocol (assuming an honest majority of the computing power), and additionally is $\delta$-approximately fair: with overwhelming probability, any honest set of players controlling a $\phi$ fraction of computational power is guaranteed to get at least a fraction $(1 - \delta) \phi$ of the blocks (and thus rewards) in any $\Omega( \kappa/\delta )$ length segment of the chain (where $\kappa$ is the security parameter).

As a consequence, if this blockchain protocol is used as the ledger underlying a cryptocurrency system, where rewards and transaction fees are evenly distributed among the miners of blocks in a length $\kappa$ segment of the chain, no coalition controlling less than a majority of the computing power can gain more than a factor $(1 + 3\delta)$ by deviating from the protocol (i.e., honest participation is an $n/2$-coalition-safe $3\delta$-Nash equilibrium).

Finally, the FruitChain protocol enables decreasing the variance of mining rewards and as such significantly lessens (or even obliterates) the need for mining pools.
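
Plugging illustrative numbers into the bounds quoted above gives a feel for the fairness guarantee; the values of $\phi$ and $\delta$ below are arbitrary examples, not parameters taken from the paper.

```python
# Worked numbers for the fairness bounds quoted above; phi and delta are
# arbitrary illustrative choices, not parameters taken from the paper.
def fruitchain_fairness(phi, delta):
    guaranteed_share = (1 - delta) * phi   # lower bound on the honest block share
    max_deviation_gain = 1 + 3 * delta     # upper bound on a deviating coalition's gain
    return guaranteed_share, max_deviation_gain

for phi, delta in [(0.10, 0.01), (0.25, 0.05), (0.40, 0.10)]:
    share, gain = fruitchain_fairness(phi, delta)
    print(f"phi = {phi:.2f}, delta = {delta:.2f}: "
          f"honest share >= {share:.3f}, deviation gain <= x{gain:.2f}")

# Compare with the (close to) factor-2 gain of selfish mining against
# Nakamoto's protocol mentioned above.
```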
Melissa Chase, Sarah Meiklejohn
ePrint Report
In this paper, we initiate a formal study of transparency, which in recent years has become an increasingly critical requirement for the systems in which people place trust. We present the abstract concept of a transparency overlay, which can be used in conjunction with any system to give it provable transparency guarantees, and then apply the overlay to two settings: Certificate Transparency and Bitcoin. In the latter setting, we show that the usage of our transparency overlay eliminates the need to engage in mining and allows users to store a single small value rather than the entire blockchain. Our transparency overlay is generically constructed from a signature scheme and a new primitive we call a dynamic list commitment, which in practice can be instantiated using a collision-resistant hash function.
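
To convey what kind of object a dynamic list commitment is, here is a deliberately naive, hash-chain-based sketch of an append-only list with a short commitment and a verifiable extension check; the paper's actual construction and its security analysis are more involved, and all names below are made up.

```python
import hashlib

# Naive, hash-chain-style list commitment: an append-only list with a short
# commitment and a verifiable extension check. Only an illustration of the
# concept; the paper's dynamic list commitment is more refined.
class NaiveListCommitment:
    def __init__(self):
        self.digest = b"\x00" * 32   # commitment to the empty list
        self.items = []              # kept by the maintainer, not by verifiers

    def append(self, item: bytes) -> bytes:
        self.items.append(item)
        self.digest = hashlib.sha256(self.digest + item).digest()
        return self.digest

    @staticmethod
    def verify_extension(old_digest: bytes, new_items, new_digest: bytes) -> bool:
        """Check that new_digest commits to the old list extended by new_items."""
        d = old_digest
        for item in new_items:
            d = hashlib.sha256(d + item).digest()
        return d == new_digest

log = NaiveListCommitment()
c_old = log.digest
log.append(b"cert-1")
c_new = log.append(b"cert-2")
assert NaiveListCommitment.verify_extension(c_old, [b"cert-1", b"cert-2"], c_new)
```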
Gora Adj, Isaac Canales-Martínez, Nareli Cruz-Cortés, Alfred Menezes, Thomaz Oliveira, Luis Rivera-Zamarripa, Francisco Rodríguez-Henríquez
ePrint Report
Since 2013 there have been several developments in algorithms for computing discrete logarithms in small-characteristic finite fields, culminating in a quasi-polynomial algorithm. In this paper, we report on our successful computation of discrete logarithms in the cryptographically-interesting characteristic-three finite field ${\mathbb F}_{3^{6 \cdot 509}}$ using these new algorithms; prior to 2013, it was believed that this field enjoyed a security level of 128 bits. We also show that a recent idea of Guillevic can be used to compute discrete logarithms in the cryptographically-interesting finite field ${\mathbb F}_{3^{6 \cdot 709}}$ using essentially the same resources as we expended on the ${\mathbb F}_{3^{6 \cdot 509}}$ computation. Finally, we argue that discrete logarithms in the finite field ${\mathbb F}_{3^{6 \cdot 1429}}$ can feasibly be computed today; this is significant because this cryptographically-interesting field was previously believed to enjoy a security level of 192 bits.
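
For context, the following sketch computes the approximate bit sizes of the three fields mentioned above; the previously assumed security levels (128 and 192 bits) are the ones quoted in the abstract.

```python
from math import log2

# Back-of-the-envelope field sizes for the fields discussed in the abstract.
for label, extension in [("3^(6*509)", 6 * 509),
                         ("3^(6*709)", 6 * 709),
                         ("3^(6*1429)", 6 * 1429)]:
    bits = extension * log2(3)
    print(f"GF({label}): extension degree {extension}, field size ~ 2^{bits:.0f}")
```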
Busan, Korea, 13 February - 15 February 2017
Event Calendar
Event date: 13 February to 15 February 2017
Submission deadline: 5 October 2016
Notification: 15 November 2016

19 September 2016

Boru Gong, Yunlei Zhao
ePrint Report
Authenticated key-exchange (AKE) plays a fundamental role in modern cryptography. Up to now, the HMQV protocol family is among the most efficient provably secure AKE protocols, and it has been widely standardized and is in use. Given recent advances in quantum computing, it is highly desirable to develop lattice-based HMQV-analogue protocols for the upcoming post-quantum era. Towards this goal, an important step was recently made by Zhang et al. at Eurocrypt'15. Similar to HMQV, the HMQV-analogue protocols proposed there consist of two variants: a two-pass protocol $\Pi_2$, as well as a one-pass protocol $\Pi_1$ that implies, in turn, a signcryption scheme (named ``deniable encryption"). All these protocols are claimed to be provably secure under the ring-LWE (RLWE) assumption.

In this work, we propose a new type of attack, referred to as the small field attack (SFA), against the one-pass protocol $\Pi_1$ and its resultant deniable encryption scheme. With SFA, a malicious user can efficiently recover the static private key of the honest victim user in $\Pi_1$ with overwhelming probability. Moreover, the SFA attack is realistic and powerful in practice, in the sense that it is almost impossible for the honest user to prevent, or even detect, the attack. In addition, a new property regarding the CRT basis of $R_q$ is developed in this work, which is essential for our small field attack and may be of independent interest.

The security proof of the two-pass protocol $\Pi_2$ is then revisited. We are stuck at Claim 16 in [ZZDS14], where a gap in the security proof is identified and discussed. We do not know how to fix this gap, which traces back to some critical differences between the security proof of HMQV and that of its RLWE-based analogue.
Gilles Barthe, François Dupressoir, Sebastian Faust, Benjamin Grégoire, François-Xavier Standaert, Pierre-Yves Strub
ePrint Report
In this paper, we provide a necessary clarification of the good security properties that can be obtained from parallel implementations of masking schemes. For this purpose, we first argue that (i) the probing model is not straightforward to interpret, since it more naturally captures the intuitions of serial implementations, and (ii) the noisy leakage model is not always convenient, e.g. when combined with formal methods for the verification of cryptographic implementations. Therefore we introduce a new model, the bounded moment model, that formalizes a weaker notion of security order frequently used in the side-channel literature. Interestingly, we prove that probing security for a serial implementation implies bounded moment security for its parallel counterpart. This result therefore enables an accurate understanding of the links between formal security analyses of masking schemes and experimental security evaluations based on the estimation of statistical moments. Besides its consolidating nature, our work also brings useful technical contributions. First, we describe and analyze refreshing and multiplication algorithms that are well suited for parallel implementations and improve security against multivariate side-channel attacks. Second, we show that simple refreshing algorithms (with linear complexity) that are not secure in the continuous probing model are secure in the continuous bounded moment model. Finally, we discuss the independent leakage assumption required for masking to deliver its security promises, and its specificities related to the serial or parallel nature of an implementation.
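
As a toy illustration of security expressed in terms of statistical moments, the sketch below simulates a first-order Boolean-masked bit whose two shares leak in the same sample (the parallel setting) and compares the first and second moments of the leakage for both secret values; the leakage model, noise level and sample counts are arbitrary assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def parallel_leakage(secret_bit, n_samples, noise=0.5):
    """Toy parallel first-order masking: both shares leak in the same sample."""
    mask = rng.integers(0, 2, n_samples)
    share0 = mask
    share1 = mask ^ secret_bit
    # a single observed sample aggregates the (noisy) leakage of both shares
    return share0 + share1 + rng.normal(0.0, noise, n_samples)

n = 200_000
for bit in (0, 1):
    leak = parallel_leakage(bit, n)
    print(f"secret = {bit}: mean = {leak.mean():.3f}, "
          f"second raw moment = {np.mean(leak ** 2):.3f}")

# With a single mask, the first moments coincide for both secrets while the
# second moments differ: security holds up to the first statistical moment,
# which is the kind of order the bounded moment model formalizes.
```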
Mohamed Saied Emam Mohamed, Albrecht Petzoldt
ePrint Report
Multivariate cryptography is one of the main candidates for creating post-quantum public key cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. In this paper we present a general technique to reduce the signature size of multivariate schemes. Our technique enables us to reduce the signature size of nearly all multivariate signature schemes by 10 to 15% without slowing down the scheme significantly. We can prove that the security of the underlying scheme is not weakened by this modification. Furthermore, the technique enables a further reduction of the signature size when accepting a slightly more costly verification process. This trade-off between signature size and complexity of the verification process cannot be observed for any other class of digital signature schemes. By applying our technique to the Gui signature scheme, we obtain the shortest signatures of all existing digital signature schemes.