International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

17 March 2020

Gil Segev, Ido Shahaf
ePrint Report
The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17).

Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has $\mathsf{NP}$-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system -- corresponding to hardness in the structured classes $\mathsf{NP} \cap \mathsf{coNP}$ or $\mathsf{SZK}$, respectively, from a black-box perspective.

In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement -- corresponding to hardness in the class $\mathsf{MIP} \cap \mathsf{coMIP}$ from a black-box perspective. Conceptually, given that $\mathsf{MIP} = \mathsf{NEXP}$, our result rules out languages with very little structure.

Already the cases of languages that have $\mathsf{IP}$ or $\mathsf{AM}$ proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of $\mathsf{IP}$, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of $\mathsf{AM}$, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have $\mathsf{NP}$-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.
Gabrielle De Micheli, Pierrick Gaudry, Cécile Pierrot
ePrint Report
We study the discrete logarithm problem at the boundary case between small and medium characteristic finite fields, which is precisely the area where the finite fields used in pairing-based cryptosystems live. In order to evaluate the security of pairing-based protocols, we thoroughly analyze the complexity of all the algorithms that coexist at this boundary case: the quasi-polynomial algorithms, the Number Field Sieve with its many variants, and the Function Field Sieve. We adapt the latter to the particular case where the extension degree is composite, and show how to lower the complexity by working in a shifted function field. This study finally allows us to give precise values for the characteristic asymptotically achieving the highest security level for pairings. Surprisingly, there exist special characteristics that are as secure as general ones.
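The complexities being compared all live in the subexponential $L$-notation common to index-calculus algorithms. As a rough illustration (not the paper's analysis), one can evaluate $L_Q(\alpha, c) = \exp\bigl(c (\ln Q)^{\alpha} (\ln\ln Q)^{1-\alpha}\bigr)$ for the classical NFS constant $c = (64/9)^{1/3}$ at a few hypothetical field sizes:

```python
# Illustrative only: evaluate the L-notation governing index-calculus
# algorithms such as the NFS and FFS. The constant c = (64/9)^(1/3) is the
# standard one for the classical NFS; the field sizes below are hypothetical
# examples, not values taken from the paper.
import math

def L(Q_bits, alpha, c):
    """Return log2 of L_Q(alpha, c) = exp(c * (ln Q)^alpha * (ln ln Q)^(1-alpha))."""
    lnQ = Q_bits * math.log(2)
    return c * lnQ ** alpha * math.log(lnQ) ** (1 - alpha) / math.log(2)

c_nfs = (64 / 9) ** (1 / 3)  # ~1.923, classical NFS constant
for bits in (1024, 3072):
    print(f"|F| ~ 2^{bits}: NFS cost ~ 2^{L(bits, 1/3, c_nfs):.0f}")
```

Comparing such curves for the different algorithms, as the characteristic varies, is what pins down the crossover points at this boundary case.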
Simon Holmgaard Kamp, Bernardo Magri, Christian Matt, Jesper Buus Nielsen, Søren Eller Thomsen, Daniel Tschudi
ePrint Report
Existing Nakamoto-style blockchains (NSBs) rely on some sort of synchrony assumption to offer any type of safety guarantees. A basic requirement is that when a party produces a new block, all previously produced blocks should be known to that party, as otherwise the new block might not extend the current head of the chain, creating a fork. In practice, however, the network delay for parties to receive messages is not a known constant, but rather varies over time. The consequence is that the parameters of the blockchain need to be set such that the time between the generation of two blocks is typically larger than the network delay (e.g., $10$ minutes in Bitcoin) to guarantee security even under bad network conditions. This results in lost efficiency for two reasons: (1) since blocks are produced less often, throughput is low; furthermore, (2) blocks can only be considered final, and thus the transactions inside them confirmed, once they are extended by sufficiently many other blocks, which incurs a waiting time that is a multiple of 10 minutes. This is true even if the actual network delay is only $1$ second, meaning that NSBs are slow even under good network conditions.

We show how the Bitcoin protocol can be adjusted such that we preserve Bitcoin's security guarantees in the worst case, and in addition, our protocol can produce blocks arbitrarily fast and achieve optimistic responsiveness. The latter means that in periods without corruption, the confirmation time only depends on the (unknown) actual network delay instead of the known upper bound. Technically, we propose an approach where blocks are treated differently in the ``longest chain rule''. The crucial parameter of our protocol is a weight function assigning different weight to blocks according to their hash value. We present a framework for analyzing different weight functions, in which we prove all statements at the appropriate level of abstraction. This allows us to quickly derive protocol guarantees for different weight functions. We exemplify the usefulness of our framework by capturing the classical Bitcoin protocol as well as exponentially growing functions as special cases, where the latter provide the above mentioned guarantees, including optimistic responsiveness.
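To make the "weight by hash value" idea concrete, here is a hypothetical sketch of an exponentially growing weight function of the kind the framework can analyze; the specific form (weight doubling per leading zero bit of the hash) is our illustrative choice, not necessarily the paper's exact function:

```python
# Hypothetical sketch: a block whose hash meets a harder target (more leading
# zero bits) receives exponentially more weight in chain selection. Bitcoin's
# classical longest-chain rule corresponds to the constant weight w(b) = 1.
import hashlib

def block_weight(block_bytes: bytes, base: float = 2.0) -> float:
    h = int.from_bytes(hashlib.sha256(block_bytes).digest(), "big")
    leading_zeros = 256 - h.bit_length()  # leading zero bits of the 256-bit hash
    return base ** leading_zeros          # "luckier" blocks weigh more

def chain_weight(blocks) -> float:
    # the "heaviest chain" generalizes the longest-chain rule
    return sum(block_weight(b) for b in blocks)

print(chain_weight([b"block-%d" % i for i in range(5)]))
```

Under such a rule, rare heavy blocks can settle finality quickly when the network is fast, while the aggregate weight still behaves like Bitcoin's chain length in the worst case.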
Anita John, Rohit Lakra, Jimmy Jose
ePrint Report
Cellular Automata (CA) have recently evolved as a good cryptographic primitive. They play an important role in the construction of new fast, efficient and secure stream ciphers. Several studies have been made on CA-based stream ciphers, and we observe that the cryptographic strength of a CA-based stream cipher increases with the neighbourhood radius if appropriate CA rules are employed. The current work explores the cryptographic feasibility of 5-neighbourhood CA rules, also referred to as pentavalent rules. We propose a new CA-based stream cipher, CARPenter, which uses pentavalent rules. The cipher incorporates maximum-length null-boundary linear CA and a non-linear CA along with a good non-linear mixing function. It is implemented in both hardware and software and exhibits good cryptographic properties that make the cipher resistant to almost all attacks on stream ciphers, at the cost of additional computing requirements. The cipher uses 16 cycles for initialization, which is the smallest number of cycles among existing stream ciphers.
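For readers unfamiliar with pentavalent rules, one step of a 5-neighbourhood (radius-2) null-boundary linear CA can be sketched as follows; the rule mask here (XOR of all five neighbours) is a hypothetical example for illustration, not one of CARPenter's actual rules:

```python
# Toy sketch of one step of a radius-2, null-boundary linear CA. A linear
# pentavalent rule XORs a chosen subset of the five neighbours; the default
# mask below selects all five cells (hypothetical choice).
def ca_step(cells, mask=(1, 1, 1, 1, 1)):
    n = len(cells)
    def cell(i):  # null boundary: out-of-range neighbours read as 0
        return cells[i] if 0 <= i < n else 0
    return [
        (mask[0] & cell(i - 2)) ^ (mask[1] & cell(i - 1)) ^ (mask[2] & cell(i))
        ^ (mask[3] & cell(i + 1)) ^ (mask[4] & cell(i + 2))
        for i in range(n)
    ]

state = [0, 1, 0, 0, 1, 1, 0, 1]
for _ in range(16):  # CARPenter uses 16 initialization cycles
    state = ca_step(state)
print(state)
```

A non-linear pentavalent rule would replace some XORs with ANDs/ORs of neighbours; the cipher combines both kinds with a mixing function.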
John M. Schanck
ePrint Report
We give a new proof that the decryption failure rate of NewHope512 is at most $2^{-398.8}$. As in previous work, this failure rate is with respect to random, honestly generated, secret key and ciphertext pairs. However, our technique can also be applied to a fixed secret key. We demonstrate our technique on some subsets of the NewHope1024 key space, and we identify a large subset of NewHope1024 keys with failure rates of no more than $2^{-439.5}$.
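A toy version of the standard technique behind such bounds: the decryption error is a sum of many small independent terms, so its exact distribution can be computed by convolving the per-term distributions and summing the tail mass beyond the decoding threshold. The tiny error distribution, 16 terms, and threshold below are made up for illustration; NewHope's actual analysis uses the scheme's real distributions and dimensions.

```python
# Exact tail bound for a sum of i.i.d. small errors via convolution.
# All parameters are hypothetical toy values.
import math
from collections import defaultdict
from fractions import Fraction

def convolve(p, q):
    out = defaultdict(Fraction)
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] += pa * qb
    return dict(out)

term = {-1: Fraction(1, 4), 0: Fraction(1, 2), 1: Fraction(1, 4)}  # hypothetical
total = {0: Fraction(1)}
for _ in range(16):            # sum of 16 independent error terms
    total = convolve(total, term)

threshold = 8                  # decryption fails if the accumulated error exceeds this
fail = sum(pr for v, pr in total.items() if abs(v) > threshold)
print(f"failure probability ~ 2^{math.log2(fail):.1f}")
```

Working over exact rationals (rather than floats) is what lets such computations certify failure rates as small as $2^{-398.8}$ without rounding error.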
Robert Muth, Florian Tschorsch
ePrint Report
Blockchain technologies enable decentralized applications (DApps) that run on distributed infrastructures without any central authority. All transactions for communication and data storage are public and can be verified by all participants. DApps interacting with a smart contract typically require client-side code, which is not part of the smart contract and therefore does not hold the same verifiability properties. Following the vision of a verifiable DApp, we propose SmartDHX, a Diffie-Hellman key exchange (DHKE) scheme fully implemented as a smart contract. That is, SmartDHX communicates only via the Ethereum blockchain and provides both backend and client-side code with the smart contract. The application code can therefore be verified and deployed without external trust requirements. By executing DHKE on-chain, we gain a number of properties, including asynchronicity as well as message integrity and authenticity. We generalize the two-party SmartDHX to emphasize that our approach is able to handle complex cryptographic protocols. In our analysis, we expose an efficiency tradeoff of on-chain execution. In particular, we provide a proof-of-concept implementation and analyze the runtime and transaction fees. Since DHKE is used by many cryptographic algorithms, SmartDHX contributes a fundamental building block in the domain of DApps.
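The cryptographic core that SmartDHX moves on-chain is plain Diffie-Hellman. A standalone sketch (with a toy 61-bit prime for readability; a real deployment needs a standardized large group and the Ethereum plumbing the paper describes):

```python
# Classic Diffie-Hellman over a prime field. The modulus is 2^61 - 1, a
# Mersenne prime -- far too small to be secure, used here only to keep the
# example readable.
import secrets

p = 2305843009213693951        # 2^61 - 1 (toy size, insecure)
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)     # (secret exponent, public value g^x mod p)

a_secret, a_public = keygen()  # Alice's contribution (posted on-chain in SmartDHX)
b_secret, b_public = keygen()  # Bob's contribution (posted on-chain in SmartDHX)

# each side combines its own secret with the other's public value
k_alice = pow(b_public, a_secret, p)
k_bob = pow(a_public, b_secret, p)
assert k_alice == k_bob        # both equal g^(a_secret * b_secret) mod p
print("shared secret established:", k_alice)
```

In SmartDHX, the two public values travel as blockchain transactions, which is what yields the asynchronicity, integrity, and authenticity properties at the cost of transaction fees.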
Bicky Shakya, Xiaolin Xu, Mark Tehranipoor, Domenic Forte
ePrint Report
Recently, a logic locking approach termed `CAS-Lock' was proposed to simultaneously counter Boolean satisfiability (SAT) and bypass attacks. The technique modifies the AND/OR tree structure in Anti-SAT to achieve non-trivial output corruptibility while maintaining resistance to both SAT and bypass attacks. An attack against CAS-Lock (dubbed `CAS-Unlock') was also recently proposed on a naive implementation of CAS-Lock. It relies on setting key values to all 1's or 0's to break CAS-Lock. In this short paper, we show why this attack is ineffective and describe the misinterpretation of CAS-Lock's implementation on which it relies.
Yibin Xu, Yangyu Huang, Jianhua Shao, George Theodorakopoulos
ePrint Report
Blockchain sharding is a promising approach to resolving the dilemma between decentralisation and high performance (transaction throughput) for blockchains. The main challenge for blockchain sharding systems is how to reach a decision on a statement within a sub-group (shard) of participants while ensuring that the whole population recognises this statement. Namely, the challenge is to defend against an adversary who does not control the majority of nodes globally but does control the majority of nodes inside a shard. Most blockchain sharding approaches can only reach a correct consensus inside a shard with at most $n/3$ evil nodes in an $n$-node system. There is a blockchain sharding approach that can prevent an incorrect decision from being reached when the adversary does not control $n/2$ nodes globally. However, that system can be stopped from reaching consensus (become deadlocked) if the adversary controls a smaller number of nodes.

In this paper, we present an improved blockchain sharding approach that can withstand $n/2$ adversarial nodes and recover from deadlocks. Recovery is achieved by dynamically adjusting the number of shards and the shard size. A performance analysis suggests our approach achieves high performance (transaction throughput) while requiring little bandwidth for synchronisation.
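Why the shard size matters can be seen with a back-of-the-envelope calculation: if an adversary controls $t$ of $n$ nodes globally and a shard of size $s$ is sampled uniformly without replacement, the chance the adversary reaches a majority inside the shard follows a hypergeometric tail. The parameters below are illustrative, not taken from the paper.

```python
# Hypergeometric tail: probability that a uniformly sampled shard of size s
# contains an adversarial majority, given t adversarial nodes out of n.
from math import comb

def p_shard_majority(n, t, s):
    """P(adversary gets more than s/2 seats in a random shard of size s)."""
    return sum(
        comb(t, k) * comb(n - t, s - k) for k in range(s // 2 + 1, s + 1)
    ) / comb(n, s)

n, t = 2000, 600               # adversary holds 30% of all nodes (illustrative)
for s in (20, 60, 120):
    print(f"shard size {s}: takeover probability ~ {p_shard_majority(n, t, s):.2e}")
```

Shrinking shards boosts throughput but inflates this takeover probability, which is exactly the tension that dynamic adjustment of shard number and size has to manage.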
Andrew Loveless, Ronald Dreslinski, Baris Kasikci
ePrint Report
Multi-valued Byzantine Consensus (BC), in which $n$ processes must reach agreement on a single $L$-bit value, is an essential primitive in the design of distributed cryptographic protocols and fault-tolerant distributed systems. One of the most desirable traits for a multi-valued BC protocol is to be error-free, i.e., to have zero probability of producing incorrect results.

The most efficient error-free multi-valued BC protocols are built as extension protocols, which reduce agreement on large values to agreement on small sequences of bits whose lengths are independent of $L$. The best extension protocols achieve $\mathcal{O}(Ln)$ communication complexity, which is optimal, when $L$ is large relative to $n$. Unfortunately, all known error-free and communication-optimal BC extension protocols require each process to broadcast at least $n$ bits with a binary Byzantine Broadcast (BB) protocol. This design limits the scalability of these protocols to many processes, since when $n$ is large, the binary broadcasts significantly inflate the overall number of bits communicated by the extension protocol.

In this paper, we present Byzantine Consensus with Parallel Execution (BCPE), the first error-free and communication-optimal BC extension protocol in which each process only broadcasts a single bit with a binary BB protocol. BCPE is a synchronous and deterministic protocol, and tolerates $f < n/3$ faulty processes (the best resilience possible). Our evaluation shows that BCPE's design makes it significantly more scalable than the best existing protocol by Ganesh and Patra. For 1,000 processes to agree on 2 MB of data, BCPE communicates $10.92\times$ fewer bits. For agreement on 10 MB of data, BCPE communicates $6.97\times$ fewer bits. BCPE also matches the best existing protocol in all other standard efficiency metrics.
Jose Maria Bermudo Mera, Furkan Turan, Angshuman Karmakar, Sujoy Sinha Roy, Ingrid Verbauwhede
ePrint Report
We present a domain-specific co-processor to speed up Saber, a post-quantum key encapsulation mechanism competing in the NIST Post-Quantum Cryptography standardization process. Contrary to most lattice-based schemes, Saber does not use NTT-based polynomial multiplication. We follow a hardware-software co-design approach: execution is performed on an ARM core, and only the most computationally expensive operation, i.e., polynomial multiplication, is offloaded to the co-processor to obtain a compact design. We exploit the idea of distributed computing at the micro-architectural level together with novel algorithmic optimizations to achieve approximately a 6 times speedup with respect to optimized software at a small area cost.
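The operation being offloaded is multiplication in Saber's ring $\mathbb{Z}_q[x]/(x^{256}+1)$ with $q = 2^{13}$. A straightforward schoolbook version (the software baseline such a co-processor accelerates; the random inputs are just examples) looks like this:

```python
# Schoolbook multiplication in Z_q[x] / (x^N + 1) with Saber's ring
# parameters N = 256, q = 2^13. Random operands for demonstration.
import random

N, Q = 256, 2 ** 13

def polymul(a, b):
    """Schoolbook product of a, b in Z_Q[x] / (x^N + 1)."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:                 # x^N = -1: wrap around with a sign flip
                res[k - N] = (res[k - N] - ai * bj) % Q
    return res

a = [random.randrange(Q) for _ in range(N)]
b = [random.randrange(Q) for _ in range(N)]
print(polymul(a, b)[:4])
```

Because $q$ is a power of two, the NTT does not apply directly, which is why Saber implementations rely on schoolbook, Karatsuba, or Toom-Cook variants instead.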

15 March 2020

Michel Abdalla, Manuel Barbosa, Tatiana Bradley, Stanislaw Jarecki, Jonathan Katz, Jiayu Xu
ePrint Report
Protocols for password authenticated key exchange (PAKE) allow two parties who share only a weak password to agree on a cryptographically strong key. We revisit the notion of PAKE in the framework of universal composability, and propose a relaxation of the PAKE functionality of Canetti et al. that we call lazy-extraction PAKE (lePAKE). Roughly, our relaxation allows the ideal-world adversary to postpone its password guess even until after a session is complete. We argue that this relaxed notion still provides meaningful security in the password-only setting. As our main result, we show that several PAKE protocols that were previously only proven secure with respect to a "game-based" definition can in fact be shown to realize the lePAKE functionality in the random-oracle model. These include SPEKE, SPAKE2, and TBPEKE, the most efficient PAKE schemes currently known.
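For orientation, the SPAKE2 message flow can be sketched over a toy prime field (insecure parameters, illustration only; real instantiations use a proper group, and the fixed elements $M, N$ must have unknown discrete logs, whereas here they are arbitrary values):

```python
# Toy SPAKE2 flow: each party masks its Diffie-Hellman share with a
# password-dependent factor; only someone knowing the password can strip
# the mask and land on the same shared value.
import hashlib
import secrets

p = 2305843009213693951      # 2^61 - 1 (toy modulus, insecure)
g, M, N = 3, 5, 7            # M, N stand in for nothing-up-my-sleeve elements
pw = 123456                  # shared low-entropy password, mapped to an integer

x = secrets.randbelow(p - 1) + 1
X = (pow(g, x, p) * pow(M, pw, p)) % p   # Alice -> Bob

y = secrets.randbelow(p - 1) + 1
Y = (pow(g, y, p) * pow(N, pw, p)) % p   # Bob -> Alice

# each side strips the password mask from the other's message
Z_alice = pow(Y * pow(N, -pw, p) % p, x, p)
Z_bob = pow(X * pow(M, -pw, p) % p, y, p)
assert Z_alice == Z_bob                  # both equal g^(x*y) mod p

key = hashlib.sha256(f"{X}|{Y}|{Z_alice}".encode()).hexdigest()
print("session key:", key[:16], "...")
```

A real SPAKE2 key derivation also hashes the party identities and the password into the session key; the point of the lePAKE result is that the random-oracle queries needed to attack such flows can be "extracted" lazily, even after the session completes.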
Hayim Shaul, Dan Feldman, Daniela Rus
ePrint Report
The $k$-nearest neighbors ($k$NN) classifier predicts the class of a query, $q$, by taking the majority class of its $k$ neighbors in an existing (already classified) database, $S$. In secure $k$NN, $q$ and $S$ are owned by two different parties and $q$ is classified without sharing data. In this work we present a classifier based on $k$NN that is more efficient to implement with homomorphic encryption (HE). The efficiency of our classifier comes from a relaxation: we consider the $\kappa$ nearest neighbors for $\kappa\approx k$, with probability that increases as the statistical distance between a Gaussian and the distribution of the distances from $q$ to $S$ decreases. We call our classifier $k$-ish Nearest Neighbors ($k$-ish NN). For the implementation we introduce a {\em double-blinded coin-toss}, where both the bias and the output of the toss are encrypted. We use it to approximate the average and variance of the distances from $q$ to $S$ in a scalable circuit whose depth is independent of $|S|$. We believe these to be of independent interest. We implemented our classifier in an open source library based on HElib and tested it on a breast tumor database. Our classifier has accuracy and running time comparable to the current state-of-the-art (non-HE) MPC solutions, which have better running time but worse communication complexity. It also has communication complexity similar to a naive HE implementation, which has worse running time.
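A plaintext sketch of the "k-ish" relaxation may help: instead of selecting exactly the $k$ nearest neighbours (expensive under HE), model the distances from the query as a Gaussian and take every point below a threshold chosen so that roughly $k$ points are expected under it. The data handling and the nearest-point fallback below are our own illustrative choices, not the paper's HE circuit.

```python
# k-ish NN in the clear: threshold = mu + z * sigma, with z the Gaussian
# quantile at k/n, so about k points are expected below the threshold.
from statistics import NormalDist, mean, pstdev

def k_ish_nn_predict(query, data, labels, k):
    dists = [sum((q - x) ** 2 for q, x in zip(query, p)) for p in data]
    mu, sigma = mean(dists), pstdev(dists)
    # under the Gaussian model, a fraction k/n of points lies below this threshold
    threshold = mu + NormalDist().inv_cdf(k / len(data)) * sigma
    votes = [l for d, l in zip(dists, labels) if d <= threshold]
    if not votes:  # Gaussian fit was poor: fall back to the single nearest point
        votes = [labels[dists.index(min(dists))]]
    return max(set(votes), key=votes.count)

data = [(i, j) for i in range(3) for j in range(3)] \
     + [(10 + i, 10 + j) for i in range(3) for j in range(3)]
labels = [0] * 9 + [1] * 9
print(k_ish_nn_predict((1, 1), data, labels, k=3))
```

Only the mean and variance of the distances are needed to set the threshold, which is precisely what the double-blinded coin-toss lets the HE circuit approximate at depth independent of $|S|$.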
Huijia Lin, Ji Luo
ePrint Report
We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from $k$-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt '19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs.

Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from $k$-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes $\mathsf{L}$ and $\mathsf{NL}$) based on $k$-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine $M$. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.
Archisman Ghosh, Debayan Das, Shreyas Sen
ePrint Report
Mathematically-secure cryptographic algorithms leak significant side-channel information through their power supplies when implemented on a physical platform. These side-channel leakages can be exploited by an attacker to extract the secret key of an embedded device. The existing state-of-the-art countermeasures mainly focus on power balancing, gate-level masking, or signal-to-noise ratio (SNR) reduction using noise injection and signature attenuation, all of which either suffer from high power/area overheads or performance degradation, or are not synthesizable. In this article, we propose a generic, low-overhead, digital-friendly power SCA countermeasure utilizing physical Time-Varying Transfer Functions (TVTF), which randomly shuffles distributed switched capacitors to significantly obfuscate the traces in the time domain. System-level simulation results of the TVTF-AES implemented in TSMC 65nm CMOS technology show a > 4000x MTD improvement over the unprotected implementation with ~1.25x power and ~1.2x area overheads, and without any performance degradation.
Rishab Goyal, Sam Kim, Brent Waters, David J. Wu
ePrint Report
A software watermarking scheme provides a method to prevent unauthorized distribution of software. Specifically, watermarking schemes allow a user to embed an identifier into a piece of code such that the resulting program is nearly functionally-equivalent to the original program, and yet, it is difficult to remove the identifier without destroying the functionality of the program. Existing constructions of watermarking have focused primarily on watermarking pseudorandom functions (PRFs).

In this work, we revisit the definitional foundations of watermarking, and begin by highlighting a major limitation in the existing security notion. Currently, the security requirement in watermarking schemes only requires that the identifier be successfully extracted from programs that preserve the exact input/output behavior of the original program. In the context of PRFs, this means that an adversary that constructs a program which computes a quarter of the output bits of the PRF, or that is able to distinguish the outputs of the PRF from random, is considered to be outside the threat model. However, in any application (e.g., watermarking a decryption device or an authentication token) that relies on PRF security, an adversary that manages to predict a quarter of the bits or to distinguish the PRF outputs from random would be considered to have defeated the scheme. Thus, existing watermarking schemes provide very little, if any, security guarantee against realistic adversaries. We emphasize that none of the existing constructions of watermarkable PRFs would be able to extract the identifier from a program that only outputs a quarter of the bits of the PRF or one that perfectly distinguishes.

To address the shortcomings in existing watermarkable PRF definitions, we introduce a new primitive called a traceable PRF. Our definitions are inspired by similar definitions from public-key traitor tracing, and aim to capture a very robust set of adversaries: namely, any adversary that produces a useful distinguisher (i.e., a program that can break PRF security), can be traced to a specific identifier. We provide a general framework for constructing traceable PRFs via an intermediate primitive called private linear constrained PRFs. Finally, we show how to construct traceable PRFs from a similar set of assumptions previously used to realize software watermarking. Namely, we obtain a single-key traceable PRF from standard lattice assumptions and a fully collusion-resistant traceable PRF from indistinguishability obfuscation (together with injective one-way functions).
Ariel Gabizon, Zachary J. Williamson
ePrint Report
We present a protocol for checking that the values of a committed polynomial $f\in \mathbb{F}_{<n}[X]$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $n$ are contained in the values of a table $t\in \mathbb{F}^d$. Our protocol can be viewed as a simplification of one from Bootle et al. [BCGJM, ASIACRYPT 2018] for a similar problem, with potential efficiency improvements when $d\leq n$. In particular, [BCGJM]'s protocol requires committing to several auxiliary polynomials of degree $d\cdot \log n$, whereas ours requires three commitments to auxiliary polynomials of degree $n$, which can be much smaller in the case $d\sim n$. One common use case of this primitive in the zk-SNARK setting is a ``batched range proof'', where one wishes to check that all of $f$'s values on $H$ are in a range $[0,\ldots,M]$. We present a slightly optimized protocol for this special case, and pose improving it as an open problem.
Shigeo Tsujii, Ryo Fujita, Masahito Gotaishi
ePrint Report
We previously proposed a multivariate public key cryptosystem (MPKC) that does not rely on the difficulty of prime factorization, and whose modulus is the product of many small prime numbers. In this system, prime factorization by an attacker is trivial, and the structure of the secret key is based on the Chinese Remainder Theorem (CRT). In this paper we propose an MPKC with IND-CPA security, obtained by adding random numbers to the central transformation vectors of the previously proposed system.
Victor Shoup
ePrint Report
We show that a slight variant of Protocol $\mathit{SPAKE2}+$, which was presented but not analyzed in Cash, Kiltz, and Shoup (2008), is a secure asymmetric password-authenticated key exchange (PAKE) protocol, meaning that the protocol still provides good security guarantees even if a server is compromised and the password file stored on the server is leaked to an adversary. The analysis is done in the UC framework (i.e., a simulation-based security model), under the computational Diffie-Hellman (CDH) assumption, and modeling certain hash functions as random oracles. The main difference between our variant and the original Protocol $\mathit{SPAKE2}+$ is that our variant includes standard key confirmation flows; moreover, adding these flows allows some slight simplification to the remainder of the protocol.

Along the way, we also: provide the first proof (under the same assumptions) that a slight variant of Protocol $\mathit{SPAKE2}$ from Abdalla and Pointcheval (2005) is a secure symmetric PAKE in the UC framework (previous security proofs were all in the weaker BPR framework of Bellare, Pointcheval, and Rogaway (2000)); provide a proof (under very similar assumptions) that a variant of Protocol $\mathit{SPAKE2}+$ that is currently being standardized is also a secure asymmetric PAKE; and repair several problems in earlier UC formulations of secure symmetric and asymmetric PAKE.
Sarang Noether
ePrint Report
Confidential transactions are used in distributed digital assets to demonstrate the balance of values hidden in commitments, while retaining signer ambiguity. Previous work describes a signer-ambiguous proof of knowledge of the opening of commitments to zero at the same index across multiple public commitment sets and the evaluation of a verifiable random function used as a linking tag, and uses this to build a linkable ring signature called Triptych that can be used as a building block for a confidential transaction model. In this work, we extend Triptych to build Triptych-2, a proving system that proves knowledge of openings of multiple commitments to zero within a single set, correct construction of a verifiable random function evaluated at each opening, and value balance across a separate list of commitments within a single proof. While soundness depends on a novel dual discrete-logarithm hardness assumption, we use data from the Monero blockchain to show that Triptych-2 can be used in a confidential transaction model to provide faster total batch verification time than other state-of-the-art constructions without a trusted setup.
Candelaria, Colombia, 23 September - 25 September 2020
Event Calendar
Event date: 23 September to 25 September 2020
Submission deadline: 11 May 2020
Notification: 3 July 2020