International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 May 2021

Benny Applebaum, Eyal Golombek
ePrint Report
We study the randomness complexity of interactive proofs and zero-knowledge proofs. In particular, we ask whether it is possible to reduce the randomness complexity, $R$, of the verifier to be comparable with the number of bits, $C_V$, that the verifier sends during the interaction. We show that such \emph{randomness sparsification} is possible in several settings. Specifically, unconditional sparsification can be obtained in the non-uniform setting (where the verifier is modelled as a circuit), and in the uniform setting where the parties have access to a (reusable) common random string (CRS). We further show that constant-round uniform protocols can be sparsified without a CRS under a plausible worst-case complexity-theoretic assumption that was used previously in the context of derandomization.

All the above sparsification results preserve statistical-zero knowledge provided that this property holds against a \emph{cheating verifier}. We further show that randomness sparsification can be applied to honest-verifier statistical zero-knowledge (HVSZK) proofs at the expense of increasing the communication from the prover by $R-F$ bits, or, in the case of honest-verifier perfect zero-knowledge (HVPZK) by slowing down the simulation by a factor of $2^{R-F}$. Here $F$ is a new measure of \emph{accessible bit complexity} of an HVZK proof system that ranges from 0 to $R$, where a maximal grade of $R$ is achieved when zero-knowledge holds against a ``semi-malicious'' verifier that maliciously selects its random tape and then plays honestly. Consequently, we show that some classical HVSZK proof systems, like the one for the complete Statistical-Distance problem (Sahai and Vadhan, JACM 2003) admit randomness sparsification with no penalty.

Along the way we introduce new notions of pseudorandomness against interactive proof systems, and study their relations to existing notions of pseudorandomness.

David Heath, Vladimir Kolesnikov, Stanislav Peceny
ePrint Report
A classic approach to MPC uses preprocessed multiplication triples to evaluate arbitrary Boolean circuits. If the target circuit features conditional branching, e.g., as the result of an IF program statement, then triples are wasted: one triple is consumed per AND gate, even if the output of the gate is entirely discarded by the circuit's conditional behavior.

In this work, we show that multiplication triples can be re-used across conditional branches. For a circuit with $b$ branches, each having $n$ AND gates, we need only a total of $n$ triples, rather than the typically required $b\cdot n$. Because preprocessing triples is often the most expensive step in protocols that use them, this significantly improves performance.
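For readers unfamiliar with the primitive being amortized, the following is a minimal single-gate sketch of Beaver-triple evaluation (textbook background, not this paper's protocol): one preprocessed triple is consumed per AND gate, which is exactly the per-gate cost the authors reuse across branches. Both parties are simulated in one process and all names are illustrative.

```python
# Toy Beaver-triple evaluation of one AND gate over GF(2) (textbook
# background, not this paper's protocol). Shares are additive (XOR);
# both "parties" live in one process for illustration.
import secrets

def share(bit):
    """Split a bit into two XOR shares."""
    r = secrets.randbits(1)
    return r, bit ^ r

def and_gate(x_sh, y_sh, triple_sh):
    """Consume one triple (a, b, c), c = a & b, to get shares of x & y."""
    (a0, b0, c0), (a1, b1, c1) = triple_sh
    # Each party opens its masked input shares; d = x ^ a, e = y ^ b are public.
    d = (x_sh[0] ^ a0) ^ (x_sh[1] ^ a1)
    e = (y_sh[0] ^ b0) ^ (y_sh[1] ^ b1)
    # Standard reconstruction: z = c ^ (d & b) ^ (e & a) ^ (d & e).
    z0 = c0 ^ (d & b0) ^ (e & a0) ^ (d & e)  # party 0 adds the public d & e
    z1 = c1 ^ (d & b1) ^ (e & a1)
    return z0, z1

# Preprocessing: one random triple, secret-shared between the parties.
a, b = secrets.randbits(1), secrets.randbits(1)
triple_sh = tuple(zip(share(a), share(b), share(a & b)))  # per-party views

z0, z1 = and_gate(share(1), share(1), triple_sh)
assert z0 ^ z1 == 1  # 1 AND 1 == 1; this triple is now spent
```

In the setting above, a circuit with $b$ branches of $n$ AND gates each burns $b \cdot n$ triples even though only one branch's outputs survive; the paper's contribution is to bring this down to $n$.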

Prior work similarly amortized oblivious transfers across branches in the classic GMW protocol (Heath et al., Asiacrypt 2020, [HKP20]). In addition to demonstrating that conditional improvements are possible for a different class of protocols, we also concretely improve over [HKP20]: their maximum improvement is bounded by the topology of the circuit. Our protocol yields improvement independent of topology: we need triples proportional to the size of the program's longest execution path, regardless of the structure of the program branches.

We implemented our approach in C++. Our experiments show that we significantly improve over a naive protocol and over prior work: for a circuit with $16$ branches and in terms of total communication, we improved over naive by $12\times$ and over [HKP20] by an average of $2.6\times$.

Our protocol is secure against the semi-honest corruption of $p-1$ parties.

Justin Kim, Vandan Mehta, Kartik Nayak, Nibesh Shrestha
ePrint Report
BFT protocols in the synchronous setting rely on a strong assumption: every message sent by a party will arrive at its destination within a known bounded time. To allow some degree of asynchrony while still tolerating a minority corruption, a weaker synchrony assumption called mobile sluggish faults was recently introduced at Crypto'19. In this work, we investigate the support for mobile sluggish faults in existing synchronous protocols such as Dfinity, Streamlet, Sync HotStuff, OptSync and the optimal latency BFT protocol. We identify key principles that can be used to ``compile'' these synchronous protocols to tolerate mobile sluggish faults.

Marten van Dijk, Deniz Gurevin, Chenglu Jin, Omer Khan, Phuong Ha Nguyen
ePrint Report
We provide a new remote attestation scheme for secure processor technology, which is secure in the presence of an All Digital State Observing (ADSO) adversary. To accomplish this, we obfuscate session signing keys using a silicon Physical Unclonable Function (PUF) with an extended interface that combines the LPN-PUF concept with a repetition code for small failure probabilities, and we introduce a new signature scheme that only needs a message-dependent subset of a session signing key for computing a signature and whose signatures cannot be successfully forged even if one subset per session signing key leaks. Our solution for remote attestation shows that results computed by enclaves can be properly verified even when an ADSO adversary is present. For $N=2^l$ sessions, implementation results show that signing takes $934.9+0.6\cdot l$ ms and produces a signature of $8.2+0.03\cdot l$ KB, and verification by a remote user takes $118.2+0.4\cdot l$ ms. During initialization, generation of all session keys takes $819.3 \cdot N$ ms and corresponding storage is $3 \cdot 10^{-5} + 0.12 \cdot N$ MB.
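As a quick sanity check of the scaling, the cost formulas quoted above can be evaluated for concrete session counts; the snippet below simply plugs numbers into those formulas (it is not a re-measurement, and the helper name is ours):

```python
# Plugging concrete values into the cost formulas reported above,
# for N = 2^l sessions. Figures are the paper's; the helper is ours.
def attestation_costs(l):
    n = 2 ** l
    return {
        "sign_ms":     934.9 + 0.6 * l,
        "sig_size_kb": 8.2 + 0.03 * l,
        "verify_ms":   118.2 + 0.4 * l,
        "keygen_ms":   819.3 * n,        # one-time initialization
        "storage_mb":  3e-5 + 0.12 * n,
    }

# l = 10, i.e. N = 1024 sessions: signing stays near 941 ms while the
# one-time key generation grows linearly to roughly 14 minutes.
print(attestation_costs(10))
```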

Hanshen Xiao, Srinivas Devadas
ePrint Report
We tackle the problems of private learning where an owner wishes to outsource a training task to an honest-but-curious server while keeping its data private, and private collaborative learning where two (or more) mutually distrusting owners outsource respective training data sets to an honest-but-curious server while keeping their data sets private from the server and each other.

The privacy property we provide is information-theoretic in nature: Probably Approximately Correct (PAC) approximation resistance (abbreviated to PAC security). Each owner transforms its data and labels using a private transform. The server combines samples from each data set into expanded samples with corresponding expanded labels -- we refer to this step as Task Augmentation. The server can be used for inference by any owner by sending it transformed samples. Unlike most prior approaches, our transformed-data approach maintains privacy for each entity, even in the case where the server colludes with all other entities. Importantly, we show that the utility of collaborative learning typically exceeds the utility that can be achieved by any entity restricted to its own data set.

Another important application we show is that the Task Augmentation approach can also be used in the single owner case by adding labeled, learnable noise to amplify privacy. This can be straightforwardly used to produce (Local) Differential Privacy ((L)DP) guarantees. We show that adding labeled noise as opposed to a conventional (L)DP additive noise mechanism significantly improves the privacy-utility tradeoff in private learning under the same setup.

Christian Porter, Andrew Mendelsohn, Cong Ling
ePrint Report
Whilst lattice-based cryptosystems are believed to be resistant to quantum attack, they are often forced to pay for that security with inefficiencies in implementation. This problem is overcome by ring- and module-based schemes such as Ring-LWE or Module-LWE, whose key size can be reduced by exploiting their algebraic structure, allowing for neater and faster computations. Many rings may be chosen to define such cryptosystems, but cyclotomic rings, whose cyclic nature allows for easy multiplication, are the community standard. However, there is still much uncertainty as to whether this structure may be exploited to an adversary's benefit. In this paper, we show that the decomposition group of a cyclotomic ring of arbitrary conductor may be utilised to significantly decrease the dimension of the ideal (or module) lattice required to solve a given instance of SVP. Moreover, we show that there exist a large number of rational primes such that ideals whose prime ideal factors lie over primes of this form give rise to ``easy'' instances of SVP. However, it is important to note that this work does not break Ring-LWE or Module-LWE, since the security reduction is from worst-case ideal or module SVP to average-case Ring-LWE or Module-LWE respectively, and is one way.

Shravan Srinivasan, Alex Chepurnoy, Charalampos Papamanthou, Alin Tomescu, Yupeng Zhang
ePrint Report
We present Hyperproofs, the first vector commitment (VC) scheme that is efficiently maintainable and aggregatable. Similar to Merkle proofs, our proofs form a tree that can be efficiently maintained: updating all $n$ proofs in the tree after a single leaf change only requires $O(\log{n})$ time. Importantly, unlike Merkle proofs, Hyperproofs are efficiently aggregatable, anywhere from 10$\times$ to 100$\times$ faster than SNARK-based aggregation of Merkle proofs. At the same time, an individual Hyperproof consists of only $\log{n}$ algebraic hashes (e.g., 32-byte elliptic curve points) and an aggregation of $b$ such proofs is only $O(\log{(b\log{n})})$-sized. Hyperproofs are also reasonably fast to update when compared to Merkle trees with SNARK-friendly hash functions.
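For context, the maintainability baseline that Hyperproofs matches is the Merkle-tree property that one leaf change only touches the $O(\log{n})$ nodes on its root path, so all $n$ proofs (which are paths in the tree) stay current with $O(\log{n})$ work. A toy sketch of that baseline follows (an ordinary SHA-256 Merkle tree, not Hyperproofs, which replaces the hashes with algebraic ones to gain aggregation and homomorphism):

```python
# Toy Merkle tree (background only, not Hyperproofs): after one leaf
# update, only the O(log n) nodes on the leaf-to-root path change, so
# every stored proof can be kept current with O(log n) work.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MerkleTree:
    def __init__(self, leaves):
        n = len(leaves)  # assume n is a power of two
        self.nodes = [b""] * (2 * n)
        self.n = n
        for i, leaf in enumerate(leaves):
            self.nodes[n + i] = h(leaf)
        for i in range(n - 1, 0, -1):
            self.nodes[i] = h(self.nodes[2 * i] + self.nodes[2 * i + 1])

    def update(self, index, leaf):
        """O(log n): rehash only the path from the leaf to the root."""
        i = self.n + index
        self.nodes[i] = h(leaf)
        i //= 2
        while i >= 1:
            self.nodes[i] = h(self.nodes[2 * i] + self.nodes[2 * i + 1])
            i //= 2

    def proof(self, index):
        """A proof is the log n siblings on the path; reading it is cheap
        because update() keeps every internal node fresh."""
        i, path = self.n + index, []
        while i > 1:
            path.append(self.nodes[i ^ 1])  # sibling node
            i //= 2
        return path

t = MerkleTree([bytes([i]) for i in range(8)])
t.update(3, b"new")          # touches only 3 internal nodes in total
assert len(t.proof(3)) == 3  # log2(8) siblings
```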

As another added benefit over Merkle trees, Hyperproofs are homomorphic: digests (and proofs) for two vectors can be homomorphically combined into a digest (and proofs) for their sum. Homomorphism is very useful in emerging applications such as stateless cryptocurrencies. First, it enables unstealability, a novel property that incentivizes proof computation. Second, it makes digests and proofs much more convenient to update.

Finally, Hyperproofs have certain limitations: they are not transparent, have linear-sized public parameters, are slower to verify, and have larger aggregated proofs than SNARK-based approaches. Nonetheless, end-to-end, aggregation and verification in Hyperproofs is 10$\times$ to 100$\times$ faster than SNARK-based Merkle trees.

Panagiotis Chatzigiannis, Konstantinos Chalkias
ePrint Report
A great challenge for distributed payment systems is their compliance with regulations, such as anti-money laundering, insolvency legislation, countering the financing of terrorism and sanctions laws. After Bitcoin's MtGox scandal, one of the most needed auditing functionalities for financial solvency and tax reporting purposes is to prove ownership of blockchain reserves, a process known as Proof of Assets (PoA). This work formalizes the PoA requirements in account-based blockchains, focusing on the unique hierarchical account structure of the Diem blockchain, formerly known as Libra. In particular, we take into account some unique features of the Diem infrastructure to consider different PoA modes by exploring time-stamping edge cases, cold wallets, locked assets, spending ability delegation and account pruning, among others. We also propose practical optimizations to the byte-size of PoA in the presence of light clients who cannot run a full node, including skipping Validator updates, while still maintaining the 66.7% Byzantine fault tolerance (BFT) guarantee.

Rami Elkhatib, Reza Azarderakhsh, Mehran Mozaffari-Kermani
ePrint Report
Software implementations of cryptographic algorithms are slow but highly flexible and relatively easy to implement. On the other hand, hardware implementations are usually faster but provide little flexibility and require a lot of time to implement efficiently. In this paper, we develop a hybrid software-hardware implementation of the third round of Supersingular Isogeny Key Encapsulation (SIKE), a post-quantum cryptography candidate in NIST's standardization process. We implement an isogeny field accelerator in hardware and integrate it with a RISC-V processor, which also acts as the main control unit for the field accelerator. The main advantage of this design is the high performance gained from the hardware implementation combined with the flexibility and fast development that the software implementation provides. This is the first hybrid RISC-V/accelerator implementation of SIKE. Furthermore, we provide one implementation for all NIST security levels of SIKE. Our design has the best area-time product at NIST security levels 3 and 5 out of all hardware and hybrid designs in the literature.

Vanesa Daza, Abida Haque, Alessandra Scafuro, Alexandros Zacharakis, Arantxa Zapico
ePrint Report
Anonymous cryptographic primitives reduce the traces left by users when they interact over a digital platform. However, they also prevent a platform owner from holding users accountable in case of malicious behaviour. Revocable anonymity offers a compromise by allowing only the manager (and not the other users) of the digital platform to de-anonymize a user's activities when necessary. However, such de-anonymization power can be abused too, as a misbehaving manager can de-anonymize all activities without the users' awareness. Previous works propose to mitigate this issue by distributing the de-anonymization power across several entities. However, there is no comprehensive and formal treatment where both accountability and non-frameability (i.e., the inability to falsely accuse a party of misbehavior) for both the user and the manager are explicitly defined and provably achieved.

In this paper we formally define mutual accountability: a user can be held accountable for her otherwise anonymous digital actions and a manager is held accountable for every de-anonymization attempt; plus, no honest party can be framed -- regardless of what malicious parties do.

Instead of distributing the de-anonymization power across entities, we decouple the power of de-anonymization from the power of monitoring de-anonymization attempts. This allows for greater flexibility, particularly in the choice of the monitoring entities.

We show that our framework can be instantiated generically from threshold encryption schemes and succinct non-interactive zero-knowledge. We also show that the highly efficient threshold group signature scheme by Camenisch et al. (SCN'20) can be modified and extended to instantiate our framework.

Xuechao Wang, Viswa Virinchi Muppirala, Lei Yang, Sreeram Kannan, Pramod Viswanath
ePrint Report
Several emerging proof-of-work (PoW) blockchain protocols rely on a “parallel-chain” architecture for scaling, where instead of a single chain, multiple chains are run in parallel and aggregated. A key requirement of practical PoW blockchains is to adapt to mining power variations over time (Bitcoin’s total mining power has increased by a factor of $10^{14}$ over the decade). In this paper, we consider the design of provably secure parallel-chain protocols which can adapt to such mining power variations.

The Bitcoin difficulty adjustment rule adjusts the difficulty target of block mining periodically to maintain a constant mean inter-block time. While superficially simple, the rule has proved itself to be sophisticated and successfully secure, both in practice and in theory [11, 13]. We show that natural adaptations of the Bitcoin adjustment rule to the parallel-chain case open the door to subtle, but catastrophic, safety and liveness breaches. We uncover a meta-design principle that allows us to design variable mining difficulty protocols for three popular PoW blockchain proposals (Prism [3], OHIE [26], Fruitchains [21]) inside a common rubric.
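For reference, below is a sketch of the single-chain Bitcoin-style retargeting rule that the paper adapts to parallel chains (the constants mirror Bitcoin's; this is background, not the paper's new rule):

```python
# Bitcoin-style difficulty retargeting (single-chain background, not the
# paper's parallel-chain rule). Targets are integers; lower target = harder.
EPOCH_BLOCKS = 2016
TARGET_SPACING_S = 600  # ten-minute mean inter-block time

def retarget(old_target: int, epoch_elapsed_s: int) -> int:
    """Scale the target by actual/expected epoch duration, clamped to 4x."""
    expected = EPOCH_BLOCKS * TARGET_SPACING_S
    clamped = min(max(epoch_elapsed_s, expected // 4), expected * 4)
    return old_target * clamped // expected

# If an epoch took half the expected time (mining power roughly doubled),
# the target halves, i.e. the difficulty doubles.
assert retarget(1 << 220, EPOCH_BLOCKS * TARGET_SPACING_S // 2) == 1 << 219
```

The paper's point is that running such a rule naively on each chain breaks safety and liveness; hence the pivot-chain principles (M1)-(M3) described next.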

The principle has three components: (M1) a pivot chain, based on which blocks in all chains choose difficulty, (M2) a monotonicity condition for referencing pivot chain blocks, and (M3) translating additional protocol aspects from using levels (depth) to using “difficulty levels”. We show that protocols employing only a subset of these principles may have catastrophic failures. The security of the designs is also proved using a common rubric – the key technical challenge involves analyzing the interaction between the pivot chain and the other chains, as well as bounding the sudden changes in difficulty target experienced in non-pivot chains. We empirically investigate the responsiveness of the new mining difficulty rule via simulations based on historical Bitcoin data, and find that the protocol very effectively controls the forking rate across all the chains.

Zhelei Zhou, Xinlei Cao, Jian Liu, Bingsheng Zhang, Kui Ren
ePrint Report
Nowadays, neural networks are widely used in many machine learning tasks. In practice, one might not have enough expertise to fine-tune a neural network model; therefore, it is increasingly popular to outsource the model training process to a machine learning expert. This brings out the need for fair model exchange: if the seller sends the model first, the buyer might refuse to pay; if the buyer pays first, the seller might refuse to send the model or send an inferior model. In this work, we aim to address this problem so that neither the buyer nor the seller can deceive the other. We start from Zero Knowledge Contingent Payment (ZKCP), which is used for fair exchange of digital goods and payment over blockchain, and extend it to Zero Knowledge Contingent Model Payment (ZKCMP). We then instantiate our ZKCMP with two state-of-the-art NIZK proofs: zk-SNARKs and Libra. We also propose a random sampling technique to improve the efficiency of zk-SNARKs. We conduct extensive experiments to demonstrate the practicality of our proposal.

Shumo Chu, Danyang Zhuo, Elaine Shi, T-H. Hubert Chan (randomized author ordering)
ePrint Report
Numerous high-profile works have shown that access patterns to even encrypted databases can leak secret information and sometimes even lead to reconstruction of the entire database. To thwart access pattern leakage, the literature has focused on {\it oblivious} algorithms, where obliviousness requires that the access patterns leak nothing about the input data.

In this paper, we consider the {\tt Join} operator, an important database primitive that has been extensively studied and optimized. Unfortunately, any {\it fully oblivious} {\tt Join} algorithm would require {\it always} padding the result to the {\it worst-case} length which is {\it quadratic} in the data size $N$. In comparison, an insecure baseline incurs only $O(R + N)$ cost where $R$ is the true result length, and in the common case in practice, $R$ is relatively short. As a typical example, when $R = O(N)$, any fully oblivious algorithm must inherently incur a prohibitive, $N$-fold slowdown relative to the insecure baseline. Indeed, the (non-private) database and algorithms literature invariably focuses on studying the {\it instance-specific} rather than {\it worst-case} performance of database algorithms. Unfortunately, the stringent notion of full obliviousness precludes the design of efficient algorithms with non-trivial instance-specific performance.
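To make the gap concrete, here is a toy comparison (illustrative only, not the paper's mechanism): an insecure hash join does work proportional to $N$ plus the true result length $R$, while a fully oblivious join must always produce the worst-case $N^2$-sized padded output.

```python
# Toy illustration of the instance-specific vs. worst-case cost gap
# (illustrative only, not the paper's DO mechanism).
from collections import defaultdict

def hash_join(left, right):
    """Insecure baseline: O(N + R) work, R = true result length."""
    index = defaultdict(list)
    for key, val in left:
        index[key].append(val)
    return [(key, lv, rv) for key, rv in right for lv in index[key]]

left = [(1, "a"), (2, "b"), (3, "c")]
right = [(1, "x"), (1, "y"), (4, "z")]
result = hash_join(left, right)   # R = 2 matching tuples
n = max(len(left), len(right))
print(len(result), "vs. fully oblivious padded size", n * n)  # 2 vs. 9
```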

To overcome this worst-case performance barrier of full obliviousness and enable algorithms with good instance-specific performance, we consider a relaxed notion of access pattern privacy called $(\epsilon, \delta)$-differential obliviousness (DO), originally proposed in the seminal work of Chan et al. (SODA'19). Rather than insisting that the access patterns leak no information whatsoever, the relaxed DO notion requires that the access patterns satisfy $(\epsilon, \delta)$-differential privacy. We show that by adopting the relaxed DO notion, we can obtain efficient database {\tt Join} mechanisms whose instance-specific performance {\it approximately matches} the insecure baseline, while still offering a meaningful notion of privacy to individual users.

Complementing our upper bound results, we also prove new lower bounds regarding the performance of any DO {\tt Join} algorithm.

Differential obliviousness (DO) is a new notion and remains relatively unexplored territory. Following the pioneering investigations by Chan et al. and others, our work is among the very first to formally explore how DO can help overcome the worst-case performance curse of full obliviousness; moreover, we motivate our work with database applications.

Our work shows new evidence why DO might be a promising notion, and opens up several exciting future directions.

Loïc Masure, Rémi Strullu
ePrint Report
In 2019, the ANSSI released a protected software implementation of AES running on an STM32 platform with ARM Cortex-M architecture, publicly available on GitHub. The release of the code was shortly followed by a first paper written by Bronchain et al. at CHES 2020, analyzing the security of the implementation and proposing some attacks. In order to enable fair comparisons for future attacks on this target device, this paper presents a new publicly available dataset, called ASCADv2, based on this implementation. Along with the dataset, we also provide a benchmark of deep learning based side-channel attacks, thereby extending the work of Bronchain et al. Our attacks revisit and leverage the multi-task learning approach, introduced by Maghrebi in 2020, in order to efficiently target several intermediate computations at the same time. We hope that this work will draw the community’s interest towards the evaluation of highly protected software AES implementations, as some of the current public SCA datasets are nowadays considered less and less challenging.

Jan Peter Drees, Pritha Gupta, Eyke Hüllermeier, Tibor Jager, Alexander Konze, Claudia Priesterjahn, Arunselvan Ramaswamy, Juraj Somorovsky
ePrint Report
Currently most practical attacks on cryptographic protocols like TLS are based on side channels, such as padding oracles. Some well-known recent examples are DROWN, ROBOT and Raccoon (USENIX Security 2016, 2018, 2021). Such attacks are usually found by careful and time-consuming manual analysis by specialists. In this paper, we consider the question of how such attacks can be systematically detected and prevented before (large-scale) deployment. We propose a new, fully automated approach, which uses supervised learning to identify arbitrary patterns in network protocol traffic. In contrast to classical scanners, which search for known side channels, the detection of general patterns might detect new side channels, even “unexpected” ones, such as those from the ROBOT attack. To analyze this approach, we develop a tool to detect Bleichenbacher-like padding oracles in TLS server implementations, based on an ensemble of machine learning algorithms. We verify that the approach indeed detects known vulnerabilities successfully and reliably. The tool also provides detailed information about detected patterns to developers, to assist in removing a potential padding oracle. Due to the automation, the approach scales much better than manual analysis and could even be integrated with a CI/CD pipeline of a development environment, for example.
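A minimal sketch of the general detection idea (not the authors' tool; every observable, distribution, and name below is a hypothetical placeholder): send probes with known padding validity, featurize the server's responses, and test whether a classifier can predict validity from them. High held-out accuracy indicates a distinguishable response pattern, i.e., a candidate padding oracle.

```python
# Hedged sketch of ML-based side-channel detection (hypothetical data;
# not the authors' tool). A vulnerable server is simulated as leaking
# padding validity through its alert code and a small timing difference.
import random
from sklearn.ensemble import RandomForestClassifier

def observe(valid_padding):
    """Hypothetical server response to one manipulated TLS record."""
    return {
        "alert": 20 if valid_padding else 51,
        "length": 7,
        "time_ms": random.gauss(12 if valid_padding else 10, 1),
    }

def featurize(resp):
    """One response -> feature vector (alert code, length, timing bucket)."""
    return [resp["alert"], resp["length"], round(resp["time_ms"])]

probes = random.choices([True, False], k=2000)   # known padding validity
X = [featurize(observe(v)) for v in probes]
y = probes

clf = RandomForestClassifier(n_estimators=50).fit(X[:1500], y[:1500])
acc = clf.score(X[1500:], y[1500:])
print(f"validity predictable with accuracy {acc:.2f}")  # ~1.0 => oracle
```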

Carla Ràfols, Arantxa Zapico
ePrint Report
We introduce Checkable Subspace Sampling Arguments, a new information theoretic interactive proof system in which the prover shows that a vector has been sampled in a subspace according to the verifier's coins. We show that this primitive provides a unifying view that explains the technical core of most of the constructions of universal and updatable pairing-based (zk)SNARKs. This characterization is extended to a fully algebraic framework for designing such SNARKs in a modular way, which leads to a new construction that is more efficient than the state-of-the-art in several dimensions.

Hidenori Kuwakado, Shoichi Hirose, Masahiro Mambo
ePrint Report
White-box cryptography is often used in embedded applications. Although white-box cryptography with provable security has been proposed recently, the circuit size is much larger than that of usual block ciphers. We address this problem in a different way from previous works. In particular, we propose a white-box symmetric cipher using quantum memory. The size of our cipher is polynomial in the input length and output length of an underlying function. The security against classical attacks is reduced to the security of the underlying classical pseudorandom function. We show that quantum attacks on our cipher using the generalized Grover algorithm are ineffective.

Thomas Haines, Johannes Mueller
ePrint Report
Shuffling is one of the most important techniques for privacy-preserving protocols. Its applications are manifold, including, for example, e-voting, anonymous broadcast, and privacy-preserving machine learning. For many applications, such as secure e-voting, it is crucial that the correctness of the shuffling operation be (publicly) verifiable. To this end, numerous proofs of shuffle have been proposed in the literature. Several of these proofs are actually employed in the real world.

In this work, we propose a generic compiler which can transform any "shuffle-compatible" Sigma-protocol (including, among others, Sigma-protocols for re-randomization, decryption, or key shifting) into a Sigma-protocol for permutations of the underlying relation. The resulting proof of shuffle is black-box, easily implementable, simple to explain, and comes with an acceptable computational overhead over the state-of-the-art. Because we machine-checked our compiler in Coq, the new proof of shuffle is particularly suitable for applications that require a superior level of security assurance (e.g., high-stake elections).

David Heath, Vladimir Kolesnikov
ePrint Report
We construct a concretely efficient Zero Knowledge (ZK) Oblivious RAM (ORAM) that consumes $2 \log n$ oblivious transfers (OTs) of length-$2\sigma$ secrets per access of an arithmetic value, for statistical security parameter $\sigma$ and array size $n$. This is an asymptotic and concrete improvement over the previous best concretely efficient ZK ORAM, BubbleRAM, of Heath and Kolesnikov ([HK20a], CCS 2020), whose access cost is $\frac{1}{2} \log^2 n$ OTs of length-$2\sigma$ secrets.
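As a back-of-the-envelope check of the stated per-access costs at the array size used in the experiments below ($n = 2^{20}$, so $\log n = 20$):

```python
# Comparing the stated per-access OT counts at n = 2^20 (log n = 20).
log_n = 20
proram_ots = 2 * log_n            # this work:  2 log n      = 40 OTs
bubbleram_ots = log_n ** 2 // 2   # [HK20a]:   (1/2) log^2 n = 200 OTs
print(proram_ots, bubbleram_ots)  # a 5x gap in OTs per access
```

The reported ${\sim}10\times$ wall-clock speedup also reflects costs other than the OT count.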

ZK ORAM is essential for proving statements that are best expressed as RAM programs, rather than Boolean or arithmetic circuits.

Our construction is private-coin ZK. We integrate it with [HK20a]’s ZK Proof (ZKP) protocol and prove the resulting ZKP system secure.

We implemented PrORAM in C++. Compared to the state-of-the-art BubbleRAM, our PrORAM is ${\sim}10\times$ faster for arrays of size $2^{20}$ of $40$-bit values.

Laila El Aimani
ePrint Report
We consider the problem of finding low-weight multiples of polynomials over binary fields, a problem which arises in stream cipher cryptanalysis and in finite field arithmetic. We first devise memory-efficient algorithms based on recent advances in techniques for solving the knapsack problem. Then, we tune our algorithms using the celebrated Parallel Collision Search (PCS) method to decrease the time cost at the expense of a slight increase in space. Both our memory-efficient and time-memory trade-off algorithms substantially improve the state of the art.
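To make the problem statement concrete, here is a toy exhaustive search for a low-weight multiple over GF(2) (a naive baseline only; the knapsack- and PCS-based algorithms above are precisely about beating this kind of search):

```python
# Naive exhaustive search for a weight-w multiple of m(x) over GF(2).
# Polynomials are int bitmasks: bit i is the coefficient of x^i.
from itertools import combinations

def gf2_mod(a: int, m: int) -> int:
    """Remainder of a modulo m over GF(2) (carry-less long division)."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def low_weight_multiple(m: int, weight: int, max_degree: int):
    """Try all weight-term candidates of degree <= max_degree."""
    for exps in combinations(range(max_degree + 1), weight):
        cand = sum(1 << e for e in exps)
        if gf2_mod(cand, m) == 0:
            return cand
    return None

m = 0b10011  # x^4 + x + 1, primitive of order 15
mult = low_weight_multiple(m, weight=2, max_degree=20)
assert mult == (1 << 15) | 1  # x^15 + 1, the minimal weight-2 multiple
```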