International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

29 August 2019

Abdelrahaman Aly, Emmanuela Orsini, Dragos Rotaru, Nigel P. Smart, Tim Wood
ePrint Report
We present modifications to the MPC system SCALE-MAMBA to enable the evaluation of garbled circuit (GC) based MPC functionalities and Linear Secret Sharing (LSSS) based MPC functionalities alongside each other. This allows the user to switch between different MPC paradigms to achieve the best performance. To do this we present modifications to the GC-based MPC protocol of Hazay et al. (Asiacrypt 2017) to enable it to support reactive computation, and combine different aspects of their pre-processing phase with those of Wang et al. (CCS 2017) in order to optimize our pre-processing protocols. We also give a more efficient method for producing daBits (double authenticated Bits) than that presented in the work of Rotaru and Wood (ePrint 2019). Finally, we examine how the functionality can be integrated within the existing MPC framework SCALE-MAMBA.
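For readers unfamiliar with daBits, the following is a minimal Python sketch of the double-sharing idea only: a single random bit additively shared both mod 2 (the GC/binary world) and mod a prime (the LSSS/arithmetic world). It omits the MACs that make the bits "authenticated" and the actual multiparty generation protocol; the modulus and the dealer-style generation are illustrative assumptions, not the paper's method.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus for the arithmetic (LSSS) domain


def share(value, modulus, n_parties):
    """Split `value` into n additive shares modulo `modulus`."""
    shares = [secrets.randbelow(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares


def dabit(n_parties):
    """A daBit, minus authentication: the same random bit b shared
    additively mod 2 (binary domain) and mod P (arithmetic domain)."""
    b = secrets.randbelow(2)
    return share(b, 2, n_parties), share(b, P, n_parties)


# Sanity check: both sharings reconstruct to the same bit.
bin_shares, arith_shares = dabit(3)
assert sum(bin_shares) % 2 == sum(arith_shares) % P
```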
Ngoc Khanh Nguyen
ePrint Report
Recently, Lyubashevsky & Seiler (Eurocrypt 2018) showed that small polynomials in the cyclotomic ring $Z_q[X]/(X^n+1)$, where $n$ is a power of two, are invertible under special congruence conditions on the prime modulus $q$. This result has been used to prove certain security properties of lattice-based constructions against unbounded adversaries. Unfortunately, due to the special conditions, working over the corresponding cyclotomic ring does not allow for efficient use of the Number Theoretic Transform (NTT) algorithm for fast multiplication of polynomials, and hence the schemes become less practical.

In this paper, we show how to overcome this limitation by analysing zeroes in the Chinese Remainder (or NTT) representation of small polynomials. As a result, we provide upper bounds on the probabilities related to the (non)-existence of a short vector in a random module lattice with no assumptions on the prime modulus. We apply our results, along with the generic framework by Kiltz et al. (Eurocrypt 2018), to a number of lattice-based Fiat-Shamir signatures, such as the Bai-Galbraith signature scheme (CT-RSA 2014), Dilithium-QROM (Kiltz et al., Eurocrypt 2018) and qTESLA (Alkim et al., PQCrypto 2017), so that they can both enjoy tight security in the quantum random oracle model and support fast multiplication algorithms (at the cost of slightly larger public keys and signatures). Our techniques can also be applied to prove that recent commitment schemes by Baum et al. (SCN 2018) are statistically binding with no additional assumptions on $q$.
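As a toy illustration of the CRT/NTT viewpoint (not taken from the paper; all parameters below are illustrative), when $q \equiv 1 \pmod{2n}$ the polynomial $X^n+1$ splits into linear factors mod $q$, and an element of $Z_q[X]/(X^n+1)$ is invertible exactly when none of its NTT (CRT) coordinates, i.e. its evaluations at the roots of $X^n+1$, is zero:

```python
n, q = 8, 17                      # toy parameters with q ≡ 1 (mod 2n), so X^n + 1 splits completely
assert q % (2 * n) == 1

# The NTT evaluation points: the n roots of X^n + 1 modulo q.
roots = [r for r in range(q) if pow(r, n, q) == q - 1]
assert len(roots) == n


def ntt(poly):
    """CRT/NTT representation: evaluate the polynomial at every root of X^n + 1."""
    return [sum(c * pow(r, i, q) for i, c in enumerate(poly)) % q for r in roots]


def invertible(poly):
    """Invertible in Z_q[X]/(X^n + 1) iff no CRT coordinate is zero."""
    return all(v != 0 for v in ntt(poly))


print(invertible([1, 1] + [0] * (n - 2)))           # 1 + X: no zero slots, so invertible
r0 = roots[0]
print(invertible([(-r0) % q, 1] + [0] * (n - 2)))   # X - r0: vanishes at r0, so not invertible
```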
Wenping MA
ePrint Report
A hash function family is called correlation intractable if, for all sparse relations, it is hard to find, given a random function from the family, an input-output pair that satisfies the relation. Correlation intractability (CI) captures a strong Random-Oracle-like property of hash functions. In particular, when security holds for all sparse relations, CI suffices for guaranteeing the soundness of the Fiat-Shamir transformation from any constant-round, statistically sound interactive proof to a non-interactive argument. In this paper, based on the method proposed by Chris Peikert and Sina Shiehian, we construct a hash family that is computationally correlation intractable for all polynomial-size circuits based on Learning with Errors over Rings (RLWE) with polynomial approximation factors and the Short Integer Solution problem over modules (MSIS), and a hash family that is somewhere statistically correlation intractable for all polynomial-size circuits based on RLWE. As in their work, our construction combines two ingredients: a correlation intractable hash family for log-depth circuits based on RLWE, and a bootstrapping transform that uses leveled fully homomorphic encryption (FHE) to promote correlation intractability for the FHE decryption circuit to correlation intractability for arbitrary circuits. Our construction can also be instantiated in two possible modes, yielding a NIZK that is either computationally sound and statistically zero knowledge in the common random string model, or vice versa in the common reference string model. The proposed scheme is significantly more efficient.
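For orientation, the definition paraphrased above can be stated as follows (a textbook formulation, not quoted from the paper): a family $\mathcal{H}$ is correlation intractable for a relation $R$ if every efficient adversary $\mathcal{A}$ satisfies

$$\Pr_{h \leftarrow \mathcal{H}}\big[\, x \leftarrow \mathcal{A}(h) \;:\; (x, h(x)) \in R \,\big] \le \mathsf{negl}(\lambda),$$

where $R$ is sparse if, for every input $x$, the fraction of outputs $y$ with $(x, y) \in R$ is negligible in $\lambda$.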
Nadim Kobeissi
ePrint Report
Contemporary research in symbolic formal verification has led to confirming security guarantees (as well as finding attacks) in secure channel protocols such as TLS and Signal. However, formal verification in general has not managed to significantly exit the academic bubble. Verifpal is new software for verifying the security of cryptographic protocols that aims to work better for real-world practitioners, students and engineers without sacrificing comprehensive formal verification features. In order to achieve this, Verifpal introduces a new, intuitive language for modeling protocols that is easier to write and understand than the languages employed by existing tools. Its formal verification paradigm is also designed explicitly to provide protocol modeling that avoids user error. By modeling principals explicitly and with discrete states, Verifpal models can be written in a way that reflects how protocols are described in the real world. At the same time, Verifpal is able to model protocols under an active attacker with unbounded sessions and fresh values, and supports queries for advanced security properties such as forward secrecy or key compromise impersonation. Verifpal has already been used to verify security properties for Signal, Scuttlebutt, TLS 1.3 and other protocols. It is a community-focused project, and is available under a GPLv3 license.
Xinyu Li, Jing Xu, Xiong Fan, Yuchen Wang, Zhenfeng Zhang
ePrint Report
Proof-of-stake (PoS) blockchain protocols are emerging as one of the most promising alternatives to energy-consuming proof-of-work protocols. However, one particularly critical threat in the PoS setting is the well-known long-range attack caused by secret key leakage (LRSL attack). Specifically, an adversary can attempt to corrupt the secret keys corresponding to accounts possessing substantial stake at some past moment in order to double-spend or erase past transactions, violating the fundamental persistence property of blockchain. Puncturable signatures, introduced by Bellare et al. (Eurocrypt 2016), provide a promising way to construct practical proof-of-stake blockchain protocols resilient to the LRSL attack, although existing constructions are not efficient enough for practical deployment.

In this paper, we provide a systematic study of puncturable signatures and explore their applications in proof-of-stake blockchain protocols. The puncturing functionality we desire applies to a particular part of the message, such as a prefix, instead of the whole message. We formalize a security model that allows the adversary to make adaptive signing and puncturing queries, and show a construction with an efficient puncturing operation based on the Bloom filter data structure and the strong Diffie-Hellman assumption. To further improve the efficiency of puncturing, we introduce another primitive, called tag-based puncturable signatures, and present a generic construction based on hierarchical identity-based signature schemes. Finally, we use puncturable signatures to construct practical proof-of-stake blockchain protocols that are resilient to the LRSL attack, whereas previous work relied on forward-secure signatures to immunize against this attack. We implement our scheme and provide experimental results showing that, in comparison with forward-secure signatures, our constructions of puncturable signatures perform substantially better in signature size and in signing and verification efficiency, and significantly better in key-update efficiency.
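The following Python sketch illustrates only the Bloom-filter bookkeeping that typically underlies such puncturing (which per-position key components survive after a tag is punctured); it is not the paper's signature scheme, and the hash choice, parameters and API are illustrative assumptions.

```python
import hashlib


class BloomPuncture:
    """Track which of m key components survive puncturing.
    Puncturing a tag deletes the components at its k hash positions;
    a tag remains usable while at least one of its positions survives."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.alive = [True] * m          # stand-ins for per-position key components

    def _positions(self, tag):
        return [int.from_bytes(hashlib.sha256(bytes([i]) + tag).digest(), "big") % self.m
                for i in range(self.k)]

    def puncture(self, tag):
        for pos in self._positions(tag):
            self.alive[pos] = False      # delete (forget) the key component at pos

    def usable(self, tag):
        return any(self.alive[pos] for pos in self._positions(tag))


bp = BloomPuncture()
bp.puncture(b"tx-0042")
print(bp.usable(b"tx-0042"))   # False: all of its positions were deleted
print(bp.usable(b"tx-0043"))   # almost certainly True
```

As with any Bloom filter, false positives are possible: an unpunctured tag can become unusable if all of its positions happen to have been deleted by other punctures.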
Russell W. F. Lai, Giulio Malavolta, Viktoria Ronge
ePrint Report
In their celebrated work, Groth and Sahai [EUROCRYPT'08, SICOMP'12] constructed non-interactive zero-knowledge (NIZK) proofs for general bilinear group arithmetic relations, which spawned the entire subfield of structure-preserving cryptography. This branch of the theory of cryptography focuses on modular design of advanced cryptographic primitives. Although the proof systems of Groth and Sahai are a powerful toolkit, their efficiency hits a barrier when the size of the witness is large, as the proof size is linear in that of the witness.

In this work, we revisit the problem of proving knowledge of general bilinear group arithmetic relations in zero-knowledge. Specifically, we construct a succinct zero-knowledge argument for such relations, where the communication complexity is logarithmic in the integer and source group components of the witness. Our argument has public-coin setup and verifier and can therefore be turned non-interactive using the Fiat-Shamir transformation in the random oracle model. For the special case of non-bilinear group arithmetic relations with only integer unknowns, our system can be instantiated in non-bilinear groups. In many applications, our argument system can serve as a drop-in replacement of Groth-Sahai proofs, turning existing advanced primitives in the vast literature of structure-preserving cryptography into practically efficient systems with short proofs.
William Black, Ryan Henry
ePrint Report
We present a new 4-move special honest-verifier zero-knowledge proof of knowledge system for proving that a vector of Pedersen commitments opens to a so-called "one-hot" vector (i.e., to a vector from the standard orthonormal basis) from $\mathbb{Z}_p^n$. The need for such proofs arises in the contexts of symmetric private information retrieval (SPIR), end-to-end verifiable voting (E2E), and privacy-preserving data aggregation and analytics, among others. The key insight underlying the new protocol is a simple observation regarding the paucity of roots of polynomials of bounded degree over a finite field. The new protocol is fast and yields succinct proofs: For vectors of length $n$, the prover evaluates $\Theta(\lg{n})$ group operations plus $\Theta(n)$ field operations and sends just $\Theta(\lg{n})$ group and field elements, while the verifier evaluates one $n$-base multiexponentiation plus $\Theta(\lg{n})$ additional group operations and sends just $2(\lambda+\lg{n})$ bits to obtain a soundness error less than $2^{-\lambda}$. (A 5-move variant of the protocol reduces prover upload to just $2\lambda+\lg{n}$ bits for the same soundness error.) We have implemented both our new protocol and its closest competitors from the literature; in accordance with our analytic results, experiments confirm that the new protocols handily outperform existing protocols for all but the shortest of vectors (roughly, for vectors with more than 16-32 elements).
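The "paucity of roots" observation underlying the protocol is, in essence, the following standard fact (stated here for orientation; it is not a quotation from the paper):

$$\#\{x \in \mathbb{Z}_p : f(x) = 0\} \;\le\; d \quad \text{for any nonzero } f \in \mathbb{Z}_p[x] \text{ of degree } d, \qquad \text{hence} \quad \Pr_{c \leftarrow \mathbb{Z}_p}\big[f(c) = 0\big] \;\le\; \frac{d}{p}.$$

So a verifier who checks a polynomial identity at a uniformly random challenge point is fooled with probability at most $d/p$.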

26 August 2019

Kota Kinabalu, Malaysia, 10 February - 14 February 2020
Event Calendar
Event date: 10 February to 14 February 2020
Submission deadline: 17 September 2019
Notification: 15 November 2019
Prasanna Raghaw Mishra, Bhartendu Nandan, Navneet Gaba
ePrint Report
In this paper we present our observations about NIST's Compression estimate test given in SP 800-90B. We observe that steps 4 and 7 of the test may be re-framed to gain efficiency. Based on our observations, we propose a modified algorithm for the test which is twice as fast as NIST's algorithm. We further claim that the values of probability and min-entropy in the example given for the test are incorrect. We also provide computational evidence in support of this claim.
Junichi Tomida, Yuto Kawahara, Ryo Nishimaki
ePrint Report
Attribute-based encryption (ABE) is an advanced cryptographic tool useful for building various types of access control systems. Toward the goal of making ABE more practical, we propose key-policy (KP) and ciphertext-policy (CP) ABE schemes that are the first to simultaneously support unbounded sizes of attribute sets and policies, negation and multi-use of attributes, fast decryption, and full security under a standard assumption. The proposed schemes are more expressive than previous schemes and sufficiently efficient. We also implement our schemes at the 128-bit security level and present benchmarks for an ordinary personal computer and smartphones. They show that, on the personal computer, all algorithms run within one second when handling any policy or attribute set with one hundred attributes.
Andrea Caforio, F Betül Durak, Serge Vaudenay
ePrint Report
Ratcheting communication strengthens privacy, specifically in the presence of internal state exposures or random coin corruptions. This is called post-compromise security. Several such secure protocols have been proposed in the last few years. The strongest level of security comes at a high cost, because of the need for HIBE or at least public-key cryptography.

In this paper, we first design a lightweight protocol called liteARCAD which is based solely on symmetric cryptography and is hence only forward secure.

We then present a generic hybrid protocol that composes any two protocols so that the sender can select which of the two to use for each message. When composing liteARCAD and a post-compromise-secure protocol, the sender can decide whether or not to ratchet. For instance, the sender can ratchet once in a while, or after leaving his device unattended. With infrequent ratcheting, we obtain the strongest security at essentially the cost of efficient symmetric cryptography.
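A minimal Python sketch of this composition idea: per message, the sender picks one of two sub-channels and labels the ciphertext so the receiver knows which one to use. The channel classes and their interfaces are hypothetical placeholders (the toy channel below does no real cryptography); this is not the paper's construction.

```python
import os


class ToyChannel:
    """Placeholder channel (no real cryptography): XOR with a fresh per-message pad
    that both ends are assumed to share. Stands in for liteARCAD or a ratcheting channel."""

    def __init__(self):
        self.pads = []

    def send(self, pt):
        pad = os.urandom(len(pt))
        self.pads.append(pad)
        return bytes(a ^ b for a, b in zip(pt, pad))

    def receive(self, ct):
        pad = self.pads.pop(0)
        return bytes(a ^ b for a, b in zip(ct, pad))


class HybridChannel:
    """Compose two channels; the sender chooses per message whether to 'ratchet'
    (use the stronger, more expensive channel) or stay on the cheap symmetric one."""

    def __init__(self, cheap, strong):
        self.channels = {"cheap": cheap, "strong": strong}

    def send(self, pt, ratchet=False):
        label = "strong" if ratchet else "cheap"
        return label, self.channels[label].send(pt)

    def receive(self, label, ct):
        return self.channels[label].receive(ct)


ch = HybridChannel(ToyChannel(), ToyChannel())
label, ct = ch.send(b"hello", ratchet=False)
print(ch.receive(label, ct))   # b'hello'
```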

We then propose the notion of security awareness. This lets a sender learn, after a while, whether his message was safely received (i.e., it was received and no adversary can decrypt it, except via trivial attacks) and that no completed active attack occurred (i.e., an active attack must either continue forever or be detected). We finally propose a generic strengthening to add security awareness to any protocol.
Georg Fuchsbauer
ePrint Report
While fair exchange of goods is known to be impossible without assuming a trusted party, smart contracts in cryptocurrencies forgo such parties by assuming trust in the currency system. They allow a seller to sell a digital good, which the buyer will obtain if and only if she pays. Zero-knowledge contingent payments (zkCP) show that, despite the limited expressiveness of Bitcoin's scripting language, this is possible even in Bitcoin by using zero-knowledge proofs.
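As background, the folklore zkCP flow is commonly described as follows (a hedged sketch for orientation, not one of the protocols analysed in this paper): the seller encrypts the good under a key k, proves in zero knowledge that the ciphertext decrypts to the promised good and that a published hash h equals H(k), and the buyer locks the payment to the revelation of a preimage of h, so that claiming the payment reveals k. The toy Python below shows only the hash-lock part; there is no zero-knowledge proof or blockchain, and all names are illustrative.

```python
import hashlib
import os

k = os.urandom(32)                       # seller's encryption key for the good
h = hashlib.sha256(k).digest()           # published alongside the (omitted) ZK proof


def redeem(lock, revealed_preimage):
    """The payment script's condition, in miniature: pay out iff SHA256(preimage) == lock."""
    return hashlib.sha256(revealed_preimage).digest() == lock


assert redeem(h, k)                      # the seller claims payment by revealing k ...
# ... and the buyer, seeing k revealed, decrypts the good.
```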

At CCS'17, Campanelli, Gennaro, Goldfeder and Nizzardo showed that the zkCP protocol was flawed, in that the buyer could obtain information about the good without paying. They proposed countermeasures to repair zkCP and moreover observed that zkCP cannot be used when a service is sold. They introduced the notion of ZK contingent payments for services and gave an instantiation based on a witness-indistinguishable (WI) proof system.

We show that the main countermeasures they proposed are not sufficient and present an attack against their fixed zkCP scheme. We also show that their realization of zkCP for services is insecure, as the buyer could learn the desired information (i.e., whether the service was provided) without paying; in particular, we show that WI of the used proof system is not enough.
Pascal Aubry, Sergiu Carpov, Renaud Sirdey
ePrint Report
In somewhat homomorphic encryption schemes (e.g. B/FV, BGV) the size of ciphertexts and the execution performance of homomorphic operations depend heavily on the multiplicative depth. The multiplicative depth is the maximal number of consecutive multiplications for which a homomorphic encryption scheme is parameterized. In this work we propose an improved multiplicative depth minimization heuristic. In particular, a new circuit rewriting operator is introduced, the so-called cone rewrite operator. The results obtained with the new method are significant in terms of both accuracy and performance. Smaller multiplicative depths are obtained on a benchmark of Boolean circuits when compared to previous work from the literature. On average, the multiplicative depth is substantially improved and the heuristic's execution time is significantly lower. The proposed rewrite operator and heuristic are not limited to Boolean circuits, but can also be used for arithmetic circuits.
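For reference, the metric being minimized can be computed as follows: the multiplicative depth of a circuit is the largest number of AND/multiplication gates on any input-to-output path. This is an illustrative Python sketch with a made-up toy circuit; it does not implement the paper's heuristic or its cone rewrite operator.

```python
from functools import lru_cache

# Toy Boolean circuit as a DAG: gate -> (operation, inputs). Inputs x1..x3 have depth 0.
circuit = {
    "g1": ("AND", ["x1", "x2"]),
    "g2": ("XOR", ["g1", "x3"]),
    "g3": ("AND", ["g2", "g1"]),
    "out": ("XOR", ["g3", "x1"]),
}


@lru_cache(maxsize=None)
def mult_depth(node):
    """Multiplicative depth: maximum number of AND gates on any path ending at `node`."""
    if node not in circuit:              # circuit input
        return 0
    op, inputs = circuit[node]
    below = max(mult_depth(i) for i in inputs)
    return below + (1 if op == "AND" else 0)


print(mult_depth("out"))   # 2: the path x1 -> g1 -> g3 -> out passes through two AND gates
```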
Rishab Goyal, Satyanarayana Vusirikala, Brent Waters
ePrint Report
Over the last few years there has been a surge of new cryptographic results, including laconic oblivious transfer, (anonymous/hierarchical) identity-based encryption, trapdoor functions, chosen-ciphertext security transformations, and designated-verifier zero knowledge proofs, due to a beautiful framework recently introduced in the works of Cho et al. [Crypto 2017], and Döttling and Garg [Crypto 2017]. The primitive of one-way function with encryption (OWFE) and its relatives (chameleon encryption, one-time signatures with encryption, hinting PRGs, trapdoor hash encryption, batch encryption) have been a centerpiece in all these results.

While there exist multiple realizations of OWFE (and its relatives) from a variety of assumptions such as CDH, Factoring, and LWE, all such constructions fall under the same general "missing block" framework. Although this framework has been instrumental in opening up a new pathway towards various cryptographic functionalities via the abstraction of OWFE (and its relatives), it has been accompanied by undesirable inefficiencies that have inhibited much wider adoption in many practical scenarios. Motivated by the surging importance of the OWFE abstraction (and its relatives), a natural question to ask is whether the existing approaches can be diversified not only to obtain more constructions from different assumptions, but also to develop newer frameworks. We believe answering this question will eventually lead to important and previously unexplored performance trade-offs in the overarching applications of this novel cryptographic paradigm.

In this work, we propose a new "accumulation-style" framework for building a new class of OWFE as well as hinting PRG constructions, with a special focus on achieving shorter ciphertext size and shorter public parameter size (respectively). Such performance improvements parlay into shorter parameters in their corresponding applications. Briefly, we explore the following performance trade-offs: (1) for OWFE, our constructions outperform in terms of ciphertext size as well as encryption time, but this comes at the cost of larger evaluation and setup times; (2) for hinting PRGs, our constructions provide a rather dramatic trade-off between evaluation time and parameter size, with our construction leading to significantly shorter public parameter size. We also provide concrete performance measurements for our constructions and compare them with existing approaches. We believe highlighting such trade-offs will lead to a wider adoption of these abstractions in a practical sense.

23 August 2019

Christian Mouchet, Juan Troncoso-Pastoriza, Jean-Pierre Hubaux
ePrint Report
In this work, we advance the conceptual and technical aspects of Secure Multiparty Computation (SMC). We approach SMC as a computational problem and propose a novel formulation of this problem in terms of trust boundaries. From this formulation, we derive a general framework that enables a more comprehensive characterization of both the SMC problem and its solutions. Existing SMC solutions are commonly seen as diametrically different and incompatible, but we show how they can be mapped to particular instances of our framework, hence enabling their analysis on a common and unified basis. In this framework, the core component of an SMC solution is a distributed homomorphic cryptosystem. We show that the features this cryptosystem provides determine the need for interaction and the overall efficiency of the corresponding SMC solutions. Based on this analysis, we introduce a practical instantiation of our framework by proposing a distributed version of the Brakerski-Fan-Vercauteren (BFV) lattice-based homomorphic cryptosystem. We analyze the security, noise overhead, and computational costs of this scheme. Due to its conceptual simplicity and efficiency, our solution has great potential for addressing highly relevant scenarios, such as secure data sharing and machine learning. Hence, this work constitutes a step forward in secure computation, by enabling computation across trust boundaries.
Subhabrata Samajder, Palash Sarkar
ePrint Report
In the context of linear cryptanalysis of block ciphers, let $p_0$ (resp. $p_1$) be the probability that a particular linear approximation holds for the right (resp. a wrong) key choice. The standard right key randomisation hypothesis states that $p_0$ is a constant $p\neq 1/2$ and the standard wrong key randomisation hypothesis states that $p_1=1/2$. Using these hypotheses, the success probability $P_S$ of the attack can be expressed in terms of the data complexity $N$. The resulting expression for $P_S$ is a monotone increasing function of $N$.
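For orientation, one standard form of this expression, due to Selçuk, for an attack achieving an $a$-bit advantage is (taken from the general literature as context, not from this paper):

$$P_S = \Phi\!\left(2\sqrt{N}\,\Big|p - \tfrac{1}{2}\Big| - \Phi^{-1}\!\left(1 - 2^{-a}\right)\right),$$

where $\Phi$ denotes the standard normal distribution function; this expression is indeed monotone increasing in the data complexity $N$.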

Building on earlier work by Daemen and Rijmen (2007), Bogdanov and Tischhauser (2014) argued that $p_1$ should be considered to be a random variable. They postulated the adjusted wrong key randomisation hypothesis which states that $p_1$ follows a normal distribution. A non-intuitive consequence was that the resulting expression for $P_S$ is no longer a monotone increasing function of $N$. A later work by Blondeau and Nyberg (2017) argued that $p_0$ should also be considered to be a random variable and they postulated the adjusted right key randomisation hypothesis which states that $p_0$ follows a normal distribution.

In this work, we revisit the key randomisation hypotheses. While the argument that $p_0$ and $p_1$ should be considered to be random variables is indeed valid, we consider the modelling of their distributions by normal to be inappropriate. Being probabilities, the support of the distributions of $p_0$ and $p_1$ should be subsets of $[0,1]$ which does not hold for normal distributions. We show that if $p_0$ and $p_1$ follow any distributions with supports which are subsets of $[0,1]$, and $E[p_0]=p$ and $E[p_1]=1/2$, then the expression for $P_S$ that is obtained is exactly the same as the one obtained using the standard key randomisation hypotheses. Consequently, $P_S$ is a monotone increasing function of $N$ even when $p_0$ and $p_1$ are considered to be random variables.
Seungkwang Lee, Myungchul Kim
ePrint Report
Differential Fault Analysis (DFA) intentionally injects a fault into the encryption process and derives the secret key from the mathematical relationship between faulty and fault-free ciphertexts. Even white-box cryptographic implementations are vulnerable to DFA. A common way to defend against DFA is to use some type of redundancy, such as time or hardware redundancy. However, previous software-based redundancy methods can be easily bypassed by white-box attackers, who can access and even modify all resources. In this paper, we propose a secure software redundancy technique, named table redundancy, that exploits the table diversity available in white-box cryptography. We show how to apply this table redundancy technique to a white-box AES implementation with a 128-bit key. To prevent significant degradation of performance, the lookup tables that are not targeted by DFA are shared, and table redundancy is applied to the inner rounds that are targeted. The outputs of the redundant computations are the SubBytes output multiplied by the MixColumns matrix in the 9th round, encoded by different transformations. The XOR operation combines these redundant intermediate values, and the combined transformation is canceled out in the following shared part of the encryption. Our security analysis shows that the success probability of DFA against our table redundancy is negligible and a brute-force attack becomes too costly. With three redundant computations, the total table size and the number of lookups are less than double those of an unprotected implementation.
Gabrielle Beck, Maximilian Zinkus, Matthew Green
ePrint Report
In this work we investigate the problem of automating the development of adaptive chosen ciphertext attacks on systems that contain vulnerable format oracles. Unlike previous attempts, which simply automate the execution of known attacks, we consider a more challenging problem: to programmatically derive a novel attack strategy, given only a machine-readable description of the plaintext verification function and the malleability characteristics of the encryption scheme. We present a new set of algorithms that use SAT and SMT solvers to reason deeply over the design of the system, producing an automated attack strategy that can entirely decrypt protected messages. Developing our algorithms required us to adapt techniques from a diverse range of research fields, as well as to explore and develop new ones. We implement our algorithms using existing theory solvers. The result is a practical tool called Delphinium that succeeds against real-world and contrived format oracles. To our knowledge, this is the first work to automatically derive such complex chosen ciphertext attacks.
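For context on what a "format oracle" is in this setting (an illustrative toy, not one of the paper's case studies): the plaintext verification function below reports only whether a decrypted message ends in valid PKCS#7 padding, and an oracle exposing just that single bit per chosen ciphertext is already enough leakage for classic adaptive chosen-ciphertext attacks.

```python
def pkcs7_ok(plaintext: bytes, block_size: int = 16) -> bool:
    """A toy plaintext-verification function: valid PKCS#7 padding or not.
    A format oracle exposes only this single bit about each chosen ciphertext."""
    if not plaintext or len(plaintext) % block_size:
        return False
    pad = plaintext[-1]
    return 1 <= pad <= block_size and plaintext[-pad:] == bytes([pad]) * pad


print(pkcs7_ok(b"A" * 12 + b"\x04" * 4))   # True: four bytes of value 4
print(pkcs7_ok(b"A" * 15 + b"\x02"))       # False: claims two padding bytes, only one matches
```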
Nigel P. Smart, Titouan Tanguy
ePrint Report
We propose a mechanism for an m-party dishonest-majority Multi-Party Computation (MPC) protocol to obtain the required pre-processing data (called Beaver triples) from a subset of a set of cloud service providers, providing a form of TaaS (Triples-as-a-Service). The service providers used by the MPC computing parties can be selected dynamically at the point the MPC computation is run, and the interaction between the MPC parties and the TaaS parties is via a single round of communication, logged on a public ledger. The TaaS is itself instantiated as an MPC protocol which produces the triples for a different access structure. Thus our protocol also acts as a translation mechanism between the secret sharing used by one MPC protocol and that used by another.
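For readers unfamiliar with the pre-processing material being served, the Python sketch below shows the standard way a Beaver triple $(a, b, c)$ with $c = ab$ enables one secure multiplication of additively shared values using a single round of openings. A trusted dealer stands in for the TaaS, there is no authentication, and the modulus is illustrative; this is the textbook technique, not the protocol in the paper.

```python
import secrets

P = 2**61 - 1                      # illustrative prime field


def share(v):
    s0 = secrets.randbelow(P)
    return s0, (v - s0) % P


def open_(s0, s1):
    return (s0 + s1) % P


# Dealer (the "TaaS" stand-in) produces a Beaver triple a, b, c = a*b and shares it.
a, b = secrets.randbelow(P), secrets.randbelow(P)
c = (a * b) % P
a_sh, b_sh, c_sh = share(a), share(b), share(c)

# Online phase: the parties hold shares of their secret inputs x and y.
x, y = 12345, 67890
x_sh, y_sh = share(x), share(y)

# Each party locally computes its share of d = x - a and e = y - b, then the values are opened.
d = open_((x_sh[0] - a_sh[0]) % P, (x_sh[1] - a_sh[1]) % P)
e = open_((y_sh[0] - b_sh[0]) % P, (y_sh[1] - b_sh[1]) % P)

# Shares of z = x*y:  z = c + d*b + e*a + d*e  (the public constant d*e is added by party 0 only).
z_sh = [(c_sh[i] + d * b_sh[i] + e * a_sh[i] + (d * e if i == 0 else 0)) % P for i in range(2)]
assert open_(*z_sh) == (x * y) % P
```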

22 August 2019

Brno, Czech Republic, 30 January - 3 April 2020
Event Calendar
Event date: 30 January to 3 April 2020
Submission deadline: 15 September 2019
Notification: 10 November 2019