International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

18 March 2020

Yibin Xu, Yangyu Huang
ePrint Report
Traditional blockchain sharding approaches can only tolerate up to $n/3$ of the nodes being adversarial, because they rely on the hypergeometric distribution to make a failure (an adversary who does not control $n/3$ of the nodes globally but can nevertheless manipulate the consensus of a shard) unlikely. The system must maintain a large shard size (the number of nodes inside a shard) to sustain the low failure probability, so only a small number of shards can exist. In this paper, we present a new approach to blockchain sharding that can withstand up to $n/2$ of the nodes being bad. We categorise the nodes into different classes, and every shard has a fixed number of nodes from each class. We prove that this design is much more secure than the traditional models (which have only one class) and that the shard size can be reduced significantly. As a result, many more shards can exist, and the transaction throughput can be increased substantially. The improved blockchain sharding approach is a promising foundation for decentralised autonomous organisations and decentralised databases.
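The failure event described above can be made concrete with a short calculation. A minimal sketch (our illustration, not the paper's code), assuming shards of $m$ nodes are sampled uniformly without replacement from $n$ nodes of which $t$ are adversarial:

```python
from scipy.stats import hypergeom

def shard_failure_prob(n, t, m, threshold=1/3):
    """Probability that a shard of m nodes, drawn uniformly without
    replacement from n nodes of which t are adversarial, contains more
    than a `threshold` fraction of adversarial nodes. Parameter names
    are ours, for illustration only."""
    limit = int(m * threshold)              # largest tolerable bad count
    return hypergeom.sf(limit, n, t, m)     # P[X > limit], X hypergeometric

# Example: 2000 nodes, 25% adversarial, shards of 200 nodes.
print(shard_failure_prob(2000, 500, 200))
```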
Christof Beierle, Gregor Leander
ePrint Report
We consider $n$-bit permutations with differential uniformity of 4 and null nonlinearity. We first show that the inverses of Gold functions have the interesting property that one component can be replaced by a linear function such that it still remains a permutation. This directly yields a construction of 4-uniform permutations with trivial nonlinearity in odd dimension. We further show their existence for all $n = 3$ and $n \geq 5$ based on a construction in [1]. In this context, we also show that 4-uniform 2-1 functions obtained from admissible sequences, as defined by Idrisova in [8], exist in every dimension $n = 3$ and $n \geq 5$. Such functions fulfill some necessary properties for being subfunctions of APN permutations. Finally, we use the 4-uniform permutations with null nonlinearity to construct some 4-uniform 2-1 functions from $\mathbb{F}_2^n$ to $\mathbb{F}_2^{n-1}$ which are not obtained from admissible sequences. This disproves a conjecture raised by Idrisova.
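As a concrete companion to these notions, here is a small sketch (our own, using a Gold power map over $\mathbb{F}_{2^5}$ as the test subject; it is not the paper's construction) that computes the differential uniformity of a function on $\mathbb{F}_2^n$:

```python
# Differential uniformity of F: F_2^n -> F_2^n, tested on the Gold power
# map x -> x^(2^k + 1) over GF(2^5). Illustrative sketch, not the paper's
# construction.

def gf_mul(a, b, n, poly):
    """Carry-less multiplication modulo an irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:
            a ^= poly
    return r

def gf_pow(a, e, n, poly):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, n, poly)
        a = gf_mul(a, a, n, poly)
        e >>= 1
    return r

def differential_uniformity(F, n):
    """max over nonzero a and all b of #{x : F(x ^ a) ^ F(x) = b}."""
    best = 0
    for a in range(1, 1 << n):
        counts = {}
        for x in range(1 << n):
            b = F(x ^ a) ^ F(x)
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best

n, poly, k = 5, 0b100101, 1                # GF(2^5) via x^5 + x^2 + 1
gold = lambda x: gf_pow(x, (1 << k) + 1, n, poly)
print(differential_uniformity(gold, n))    # 2: Gold maps are APN when gcd(k, n) = 1
```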
Wulu Li, Yongcan Wang, Lei Chen, Xin Lai, Xiao Zhang, Jiajun Xin
ePrint Report
Linkable ring signatures (LRS) play a major role in Monero-type cryptocurrencies, as they provide anonymity for the initiator and prevent double spending in transactions. In this paper, we propose SLRS: a simpler and modular construction of a linkable ring signature scheme, which uses only a standard ring signature as a component, without any additional one-time signatures or zero-knowledge proofs. SLRS is more efficient than existing schemes in both generation and verification. Moreover, we use SLRS to construct an efficient and compact position-preserving linkable multi-ring signature to support applications in Monero-type cryptocurrencies. We also give security proofs and an implementation, as well as comparisons of size and efficiency between SLRS, Ring-CT and Ring-CT 3.0.
Vidal Attias, Luigi Vigneri, Vassil Dimitrov
ePrint Report
Proof of Work is a prevalent mechanism for proving an investment of time in blockchain projects. However, the use of massive parallelism and specialized hardware gives an unfair advantage to a small portion of nodes and raises environmental and economic concerns. In this paper we provide an implementation study of two Verifiable Delay Functions, a new cryptographic primitive achieving Proof of Work goals in an unparallelizable way. We provide simulation results and an optimization based on a multiexponentiation algorithm.
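For orientation, the sequential core of such VDFs is typically iterated squaring in a group of unknown order; a minimal sketch with toy parameters (ours, not the paper's benchmarked implementation):

```python
# Minimal repeated-squaring VDF evaluation: y = x^(2^T) mod N. The T
# squarings are believed to be inherently sequential, which is what makes
# the delay unparallelizable. Toy parameters, not the paper's setup.

def vdf_eval(x: int, T: int, N: int) -> int:
    y = x % N
    for _ in range(T):          # T sequential modular squarings
        y = y * y % N
    return y

p, q = 1000003, 1000033         # toy factors; real deployments use ~2048-bit N
N = p * q
print(vdf_eval(5, 1_000_000, N))
```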

17 March 2020

Sergey Agievich
ePrint Report
In the modified CTR (CounTeR) mode known as CTR2, nonces are encrypted before constructing sequences of counters from them. This way we have only probabilistic guarantees for non-overlapping of the sequences. We show that these guarantees, and therefore the security guarantees of CTR2, are strong enough in two standard scenarios: random nonces and non-repeating nonces. We also show how to extend CTR2 to an authenticated encryption mode which we call CHE (Counter-Hash-Encrypt). To extend, we use one invocation of polynomial hashing and one additional block encryption.
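A minimal sketch of the CTR2 idea as described above (encrypt the nonce, then derive the counter sequence from the result); the byte-packing details and the use of AES via the pyca/cryptography library are our illustrative choices, not the paper's specification:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr2_encrypt(key: bytes, nonce16: bytes, plaintext: bytes) -> bytes:
    """CTR2 sketch: the 16-byte nonce is encrypted first, and the counter
    sequence starts from the encrypted nonce. Byte packing is our
    illustrative choice, not the paper's specification."""
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    start = int.from_bytes(ecb.update(nonce16), "big")   # encrypted nonce
    out = bytearray()
    for i, off in enumerate(range(0, len(plaintext), 16)):
        ctr = ((start + i) % 2**128).to_bytes(16, "big")
        keystream = ecb.update(ctr)                      # E_K(counter block)
        out += bytes(a ^ b for a, b in zip(plaintext[off:off + 16], keystream))
    return bytes(out)
```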
Gil Segev, Ido Shahaf
ePrint Report
The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17).

Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has $\mathsf{NP}$-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system -- corresponding to hardness in the structured classes $\mathsf{NP} \cap \mathsf{coNP}$ or $\mathsf{SZK}$, respectively, from a black-box perspective.

In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement -- corresponding to hardness in the class $\mathsf{MIP} \cap \mathsf{coMIP}$ from a black-box perspective. Conceptually, given that $\mathsf{MIP} = \mathsf{NEXP}$, our result rules out languages with very little structure.

Already the cases of languages that have $\mathsf{IP}$ or $\mathsf{AM}$ proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of $\mathsf{IP}$, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of $\mathsf{AM}$, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have $\mathsf{NP}$-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.
Gabrielle De Micheli, Pierrick Gaudry, Cécile Pierrot
ePrint Report
We study the discrete logarithm problem at the boundary case between small and medium characteristic finite fields, which is precisely the area where finite fields used in pairing-based cryptosystems live. In order to evaluate the security of pairing-based protocols, we thoroughly analyze the complexity of all the algorithms that coexist at this boundary case: the Quasi-Polynomial algorithms, the Number Field Sieve and its many variants, and the Function Field Sieve. We adapt the latter to the particular case where the extension degree is composite, and show how to lower the complexity by working in a shifted function field. All this study finally allows us to give precise values for the characteristic asymptotically achieving the highest security level for pairings. Surprisingly enough, there exist special characteristics that are as secure as general ones.
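For orientation, the asymptotic complexities of these algorithms are customarily expressed in the standard $L$-notation (a well-known convention, not specific to this paper): $$L_Q(\alpha, c) = \exp\left((c + o(1))(\log Q)^{\alpha}(\log\log Q)^{1-\alpha}\right),$$ where $Q$ is the cardinality of the finite field. The boundary case studied here is where the characteristic $p$ grows as $L_Q(1/3)$, the regime in which the Quasi-Polynomial algorithms, the Number Field Sieve variants, and the Function Field Sieve all compete.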
Simon Holmgaard Kamp, Bernardo Magri, Christian Matt, Jesper Buus Nielsen, Søren Eller Thomsen, Daniel Tschudi
ePrint Report
Existing Nakamoto-style blockchains (NSBs) rely on some sort of synchrony assumption to offer any type of safety guarantees. A basic requirement is that when a party produces a new block, all previously produced blocks should be known to that party, as otherwise the new block might not extend the current head of the chain, creating a fork. In practice, however, the network delay for parties to receive messages is not a known constant but varies over time. The consequence is that the parameters of the blockchain need to be set such that the time between the generation of two blocks is typically larger than the network delay (e.g., $10$ minutes in Bitcoin) to guarantee security even under bad network conditions. This results in lost efficiency for two reasons: (1) since blocks are produced less often, throughput is low, and (2) blocks can only be considered final, and the transactions inside them confirmed, once they are extended by sufficiently many other blocks, which incurs a waiting time that is a multiple of 10 minutes. This is true even if the actual network delay is only $1$ second, meaning that NSBs are slow even under good network conditions.

We show how the Bitcoin protocol can be adjusted such that we preserve Bitcoin's security guarantees in the worst case, and in addition, our protocol can produce blocks arbitrarily fast and achieve optimistic responsiveness. The latter means that in periods without corruption, the confirmation time only depends on the (unknown) actual network delay instead of the known upper bound. Technically, we propose an approach where blocks are treated differently in the ``longest chain rule''. The crucial parameter of our protocol is a weight function assigning different weight to blocks according to their hash value. We present a framework for analyzing different weight functions, in which we prove all statements at the appropriate level of abstraction. This allows us to quickly derive protocol guarantees for different weight functions. We exemplify the usefulness of our framework by capturing the classical Bitcoin protocol as well as exponentially growing functions as special cases, where the latter provide the above mentioned guarantees, including optimistic responsiveness.
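To illustrate the kind of weight function the framework admits (a toy example of ours, not the paper's exact parameterization): rarer hash values receive exponentially higher weight, so light blocks can be produced arbitrarily fast while occasional heavy blocks anchor security.

```python
import hashlib

def block_weight(block: bytes, base_target: int = 1 << 255, gamma: float = 2.0) -> float:
    """Toy weight function: each halving of the hash below base_target is
    roughly twice as rare and receives gamma times the weight. Parameters
    are illustrative, not the paper's."""
    h = int.from_bytes(hashlib.sha256(block).digest(), "big")
    level = 0
    while h < base_target >> level:
        level += 1
    return gamma ** level

def chain_weight(blocks) -> float:
    """Weighted longest-chain rule: prefer the chain of maximal total weight."""
    return sum(block_weight(b) for b in blocks)
```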
Anita John, Rohit Lakra, Jimmy Jose
ePrint Report
Cellular Automata (CA) have recently evolved into a good cryptographic primitive and play an important role in the construction of new fast, efficient and secure stream ciphers. Several studies have been made of CA-based stream ciphers, and we observe that the cryptographic strength of a CA-based stream cipher increases with the neighbourhood radius if appropriate CA rules are employed. The current work explores the cryptographic feasibility of 5-neighbourhood CA rules, also referred to as pentavalent rules. A new CA-based stream cipher, CARPenter, which uses pentavalent rules, is proposed. The cipher incorporates a maximum-length null-boundary linear CA and a non-linear CA, along with a good non-linear mixing function. It is implemented in hardware as well as software and exhibits good cryptographic properties, which make the cipher resistant to almost all attacks on stream ciphers, at the cost of additional computing requirements. The cipher uses 16 cycles for initialization, the fewest among existing stream ciphers.
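To make the 5-neighbourhood update concrete, here is a small sketch of one synchronous step of a null-boundary CA with radius-2 (pentavalent) rules; the rule number shown is hypothetical, not one of CARPenter's actual rules:

```python
def ca_step_5n(cells, rule):
    """One synchronous update of a 5-neighbourhood (radius-2) cellular
    automaton with null boundaries: each cell's next state is looked up in
    the 32-entry truth table `rule`, indexed by (x-2, x-1, x, x+1, x+2)."""
    padded = [0, 0] + list(cells) + [0, 0]     # null boundary
    out = []
    for i in range(len(cells)):
        idx = 0
        for j in range(5):                     # pack 5 neighbours into an index
            idx = (idx << 1) | padded[i + j]
        out.append((rule >> idx) & 1)
    return out

state = [1, 0, 1, 1, 0, 0, 1, 0]
rule = 0x9AE375C1      # hypothetical 32-bit rule number, for illustration only
print(ca_step_5n(state, rule))
```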
John M. Schanck
ePrint Report
We give a new proof that the decryption failure rate of NewHope512 is at most $2^{-398.8}$. As in previous work, this failure rate is with respect to random, honestly generated, secret key and ciphertext pairs. However, our technique can also be applied to a fixed secret key. We demonstrate our technique on some subsets of the NewHope1024 key space, and we identify a large subset of NewHope1024 keys with failure rates of no more than $2^{-439.5}$.
Robert Muth, Florian Tschorsch
ePrint Report
Blockchain technologies enable decentralized applications (DApps) that run on distributed infrastructures without any central authority. All transactions for communication and data storage are public and can be verified by all participants. DApps interacting with a smart contract typically require client-side code that is not part of the smart contract and therefore does not enjoy the same verifiability properties. Following the vision of a fully verifiable DApp, we propose SmartDHX, a Diffie-Hellman key exchange (DHKE) scheme implemented entirely as a smart contract. That is, SmartDHX communicates only via the Ethereum blockchain and provides both backend and client-side code with the smart contract. The application code can therefore be verified and deployed without external trust requirements. By executing DHKE on-chain, we gain a number of properties, including asynchronicity as well as message integrity and authenticity. We generalize the two-party SmartDHX to emphasize that our approach can handle complex cryptographic protocols. In our analysis, we expose an efficiency tradeoff of on-chain execution. In particular, we provide a proof-of-concept implementation and analyze the runtime and transaction fees. Since DHKE underlies many cryptographic algorithms, SmartDHX contributes a fundamental building block in the domain of DApps.
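For orientation, the underlying exchange is textbook Diffie-Hellman; in SmartDHX each exchanged value travels as an Ethereum transaction to the contract. This plain off-chain sketch (with an unsafe toy group) is our own, not the paper's code:

```python
import secrets

# Textbook two-party Diffie-Hellman over a toy group, for orientation only.

p = 2**127 - 1            # toy prime modulus; never use such a group in practice
g = 3

a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent

A = pow(g, a, p)          # Alice publishes g^a (on-chain in SmartDHX)
B = pow(g, b, p)          # Bob publishes g^b

assert pow(B, a, p) == pow(A, b, p)   # both sides derive the same shared secret
```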
Bicky Shakya, Xiaolin Xu, Mark Tehranipoor, Domenic Forte
ePrint Report
Recently, a logic locking approach termed `CAS-Lock' was proposed to simultaneously counter Boolean satisfiability (SAT) and bypass attacks. The technique modifies the AND/OR tree structure in Anti-SAT to achieve non-trivial output corruptibility while maintaining resistance to both SAT and bypass attacks. An attack against CAS-Lock (dubbed `CAS-Unlock') was also recently proposed on a naive implementation of CAS-Lock. It relies on setting key values to all 1's or 0's to break CAS-Lock. In this short paper, we show why this attack is ineffective and describe the misinterpretation of CAS-Lock's implementation on which it relies.
Yibin Xu, Yangyu Huang, Jianhua Shao, George Theodorakopoulos
ePrint Report
Blockchain sharding is a promising approach to resolving the dilemma between decentralisation and high performance (transaction throughput) in blockchains. The main challenge for blockchain sharding systems is how to reach a decision on a statement within a sub-group (shard) of people while ensuring the whole population recognises this statement. Namely, the challenge is to withstand an adversary who does not have the majority of nodes globally but has the majority of nodes inside a shard. Most blockchain sharding approaches can only reach a correct consensus inside a shard with at most $n/3$ evil nodes in an $n$-node system. One blockchain sharding approach can prevent an incorrect decision from being reached as long as the adversary does not have $n/2$ of the nodes globally. However, that system can be stopped from reaching consensus (become deadlocked) even if the adversary controls fewer nodes than that.

In this paper, we present an improved Blockchain sharding approach that can withstand $n/2$ adversarial nodes and recover from deadlocks. The recovery is made by dynamically adjusting the number of shards and the shard size. A performance analysis suggests our approach has a high performance (transaction throughput) while requiring little bandwidth for synchronisation.
Andrew Loveless, Ronald Dreslinski, Baris Kasikci
ePrint Report
Multi-valued Byzantine Consensus (BC), in which $n$ processes must reach agreement on a single $L$-bit value, is an essential primitive in the design of distributed cryptographic protocols and fault-tolerant distributed systems. One of the most desirable traits for a multi-valued BC protocol is to be error-free, that is, to have zero probability of producing incorrect results.

The most efficient error-free multi-valued BC protocols are built as extension protocols, which reduce agreement on large values to agreement on small sequences of bits whose lengths are independent of $L$. The best extension protocols achieve $\mathcal{O}(Ln)$ communication complexity, which is optimal, when $L$ is large relative to $n$. Unfortunately, all known error-free and communication-optimal BC extension protocols require each process to broadcast at least $n$ bits with a binary Byzantine Broadcast (BB) protocol. This design limits the scalability of these protocols to many processes, since when $n$ is large, the binary broadcasts significantly inflate the overall number of bits communicated by the extension protocol.

In this paper, we present Byzantine Consensus with Parallel Execution (BCPE), the first error-free and communication-optimal BC extension protocol in which each process only broadcasts a single bit with a binary BB protocol. BCPE is a synchronous and deterministic protocol, and tolerates $f < n/3$ faulty processes (the best resilience possible). Our evaluation shows that BCPE's design makes it significantly more scalable than the best existing protocol by Ganesh and Patra. For 1,000 processes to agree on 2 MB of data, BCPE communicates $10.92\times$ fewer bits. For agreement on 10 MB of data, BCPE communicates $6.97\times$ fewer bits. BCPE also matches the best existing protocol in all other standard efficiency metrics.
Jose Maria Bermudo Mera, Furkan Turan, Angshuman Karmakar, Sujoy Sinha Roy, Ingrid Verbauwhede
ePrint Report
We present a domain-specific co-processor to speed up Saber, a post-quantum key encapsulation mechanism competing in the NIST Post-Quantum Cryptography standardization process. Contrary to most lattice-based schemes, Saber does not use NTT-based polynomial multiplication. We follow a hardware-software co-design approach: the execution is performed on an ARM core, and only the most computationally expensive operation, i.e., polynomial multiplication, is offloaded to the co-processor to obtain a compact design. We exploit the idea of distributed computing at the micro-architectural level, together with novel algorithmic optimizations, to achieve approximately a 6x speedup with respect to optimized software at a small area cost.
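Saber's ring is $\mathbb{Z}_q[x]/(x^{256}+1)$ with the power-of-two modulus $q = 2^{13}$, which is why NTT-based multiplication does not apply directly. A schoolbook reference for the offloaded operation (our sketch; it only pins down the arithmetic, while the co-processor realizes it with distributed, optimized multipliers):

```python
def poly_mul_negacyclic(a, b, q=2**13, n=256):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1): coefficients that
    wrap past degree n pick up a minus sign, since x^n = -1. Reference
    arithmetic only, not the paper's hardware algorithm."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c
```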

15 March 2020

Michel Abdalla, Manuel Barbosa, Tatiana Bradley, Stanislaw Jarecki, Jonathan Katz, Jiayu Xu
ePrint Report
Protocols for password authenticated key exchange (PAKE) allow two parties who share only a weak password to agree on a cryptographically strong key. We revisit the notion of PAKE in the framework of universal composability, and propose a relaxation of the PAKE functionality of Canetti et al. that we call lazy-extraction PAKE (lePAKE). Roughly, our relaxation allows the ideal-world adversary to postpone its password guess even until after a session is complete. We argue that this relaxed notion still provides meaningful security in the password-only setting. As our main result, we show that several PAKE protocols that were previously only proven secure with respect to a "game-based" definition can in fact be shown to realize the lePAKE functionality in the random-oracle model. These include SPEKE, SPAKE2, and TBPEKE, the most efficient PAKE schemes currently known.
Hayim Shaul, Dan Feldman, Daniela Rus
ePrint Report
The $k$-nearest neighbors ($k$NN) classifier predicts the class of a query $q$ by taking the majority class of its $k$ neighbors in an existing (already classified) database $S$. In secure $k$NN, $q$ and $S$ are owned by two different parties, and $q$ is classified without sharing data. In this work we present a classifier based on $k$NN that is more efficient to implement with homomorphic encryption (HE). The efficiency of our classifier comes from a relaxation: we consider the $\kappa$ nearest neighbors for $\kappa \approx k$, with a probability that increases as the statistical distance between a Gaussian and the distribution of the distances from $q$ to $S$ decreases. We call our classifier $k$-ish Nearest Neighbors ($k$-ish NN). For the implementation we introduce a {\em double-blinded coin-toss} in which both the bias and the output of the toss are encrypted. We use it to approximate the average and variance of the distances from $q$ to $S$ in a scalable circuit whose depth is independent of $|S|$. We believe these to be of independent interest. We implemented our classifier in an open-source library based on HElib and tested it on a breast-tumor database. Our classifier has accuracy and running time comparable to the current state-of-the-art (non-HE) MPC solutions, which have better running time but worse communication complexity. It also has communication complexity similar to a naive HE implementation, which has worse running time.
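In the clear, the relaxation can be pictured as follows (our plaintext rendering of the idea; the paper performs the mean/variance estimation under HE with double-blinded coin tosses):

```python
import numpy as np
from scipy.stats import norm

def k_ish_nn_predict(q, X, y, k):
    """k-ish NN sketch in the clear: rather than selecting exactly the k
    smallest distances, fit a Gaussian to the distances from q and keep
    the points below the quantile where ~k points are expected. This
    plaintext version is ours, not the paper's HE circuit."""
    d = np.linalg.norm(X - q, axis=1)
    t = norm.ppf(k / len(X), loc=d.mean(), scale=d.std())
    neighbours = y[d <= t]
    if neighbours.size == 0:               # fall back if the fit was poor
        neighbours = y[np.argsort(d)[:k]]
    vals, counts = np.unique(neighbours, return_counts=True)
    return vals[np.argmax(counts)]         # majority class among ~k neighbours
```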
Huijia Lin, Ji Luo
ePrint Report
We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from $k$-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt '19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs.

Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from $k$-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes $\mathsf{L}$ and $\mathsf{NL}$) based on $k$-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine $M$. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.
Archisman Ghosh, Debayan Das, Shreyas Sen
ePrint Report
Mathematically secure cryptographic algorithms leak significant side-channel information through their power supplies when implemented on a physical platform. These side-channel leakages can be exploited by an attacker to extract the secret key of an embedded device. Existing state-of-the-art countermeasures mainly focus on power balancing, gate-level masking, or signal-to-noise ratio (SNR) reduction using noise injection and signature attenuation; they suffer from high power/area overheads or performance degradation, or are not synthesizable. In this article, we propose a generic, low-overhead, digital-friendly power SCA countermeasure utilizing physical Time-Varying Transfer Functions (TVTF) that randomly shuffles distributed switched capacitors to significantly obfuscate the traces in the time domain. System-level simulation results of the TVTF-AES implemented in TSMC 65nm CMOS technology show more than 4000x MTD improvement over the unprotected implementation with ~1.25x power and ~1.2x area overheads, and without any performance degradation.
Rishab Goyal, Sam Kim, Brent Waters, David J. Wu
ePrint Report
A software watermarking scheme provides a method to prevent unauthorized distribution of software. Specifically, watermarking schemes allow a user to embed an identifier into a piece of code such that the resulting program is nearly functionally-equivalent to the original program, and yet, it is difficult to remove the identifier without destroying the functionality of the program. Existing constructions of watermarking have focused primarily on watermarking pseudorandom functions (PRFs).

In this work, we revisit the definitional foundations of watermarking and begin by highlighting a major limitation of the existing security notion. Currently, the security requirement in watermarking schemes only demands that the identifier be successfully extracted from programs that preserve the exact input/output behavior of the original program. In the context of PRFs, this means that an adversary that constructs a program which computes a quarter of the output bits of the PRF, or that is able to distinguish the outputs of the PRF from random, is considered to be outside the threat model. However, in any application (e.g., watermarking a decryption device or an authentication token) that relies on PRF security, an adversary that manages to predict a quarter of the bits or to distinguish the PRF outputs from random would be considered to have defeated the scheme. Thus, existing watermarking schemes provide very little, if any, security guarantee against realistic adversaries. We emphasize that none of the existing constructions of watermarkable PRFs would be able to extract the identifier from a program that only outputs a quarter of the bits of the PRF or one that perfectly distinguishes.

To address the shortcomings in existing watermarkable PRF definitions, we introduce a new primitive called a traceable PRF. Our definitions are inspired by similar definitions from public-key traitor tracing, and aim to capture a very robust set of adversaries: namely, any adversary that produces a useful distinguisher (i.e., a program that can break PRF security), can be traced to a specific identifier. We provide a general framework for constructing traceable PRFs via an intermediate primitive called private linear constrained PRFs. Finally, we show how to construct traceable PRFs from a similar set of assumptions previously used to realize software watermarking. Namely, we obtain a single-key traceable PRF from standard lattice assumptions and a fully collusion-resistant traceable PRF from indistinguishability obfuscation (together with injective one-way functions).