International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

06 June 2018

Yackolley Amoussou-Guenou, Antonella Del Pozzo, Maria Potop-Butucaru, Sara Tucci-Piergiovanni
ePrint Report
Tendermint-core blockchains offer strong consistency (no forks) in an open system by relying on two ingredients: (i) a set of validators that generate blocks via a variant of the Practical Byzantine Fault Tolerant (PBFT) consensus protocol, and (ii) a rewarding mechanism that dynamically selects nodes to be validators for the next block via proof-of-stake, a non-energy-consuming alternative to proof-of-work. It is well known that in such open systems the main threat is the tragedy of the commons, which may cause the system to collapse if the rewarding mechanism is not adequate. At a minimum, the rewarding mechanism must be $fair$, i.e., it must distribute the rewards in proportion to the merit of participants. The contribution of this paper is two-fold. First, we provide a formal description of the Tendermint-core protocol and prove that in eventually synchronous systems it satisfies (i) a variant of one-shot consensus for the validation of a single block and (ii) a variant of the repeated consensus problem for multiple blocks. Our second contribution concerns the fairness of the Tendermint rewarding mechanism. We prove that Tendermint rewarding is not fair, but that a small twist in the protocol makes it eventually fair. Additionally, we prove that there exists an (eventually) fair rewarding mechanism in repeated-consensus-based blockchains if and only if the system is (eventually) synchronous.
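As a minimal illustration of the fairness notion invoked above (rewards distributed in proportion to participants' merit), the Python sketch below splits a block reward proportionally to hypothetical merit scores. It is not Tendermint's validator-selection or rewarding mechanism; the merit values and function names are made up for the example.

```python
from fractions import Fraction

def fair_reward_split(reward, merits):
    """Split `reward` among participants in proportion to their merit.

    `merits` maps a participant id to a nonnegative merit score (e.g. stake
    or number of blocks contributed).  This only illustrates the
    proportional-fairness notion discussed above, not Tendermint's actual
    rewarding mechanism.
    """
    total = sum(merits.values())
    if total == 0:
        raise ValueError("no merit to reward")
    return {pid: Fraction(reward) * m / total for pid, m in merits.items()}

# Example: validator 'a' holds twice the merit of 'b', so it earns twice the reward.
print(fair_reward_split(90, {"a": 2, "b": 1}))  # {'a': Fraction(60, 1), 'b': Fraction(30, 1)}
```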

05 June 2018

Farnoud Farahmand, William Diehl, Abubakr Abdulgadir, Jens-Peter Kaps, Kris Gaj
ePrint Report
Authenticated ciphers offer potential benefits to resource-constrained devices in the Internet of Things (IoT). The CAESAR competition seeks optimal authenticated ciphers based on several criteria, including performance in resource-constrained (i.e., low-area, low-power, and low-energy) hardware. Although the competition specified a "lightweight" use case for Round 3, most hardware submissions to Round 3 were not lightweight implementations, in that they employed architectures optimized for the best throughput-to-area (TP/A) ratio and used the Pre- and PostProcessor modules from the CAESAR Hardware (HW) Development Package designed for high-speed applications. In this research, we provide true lightweight implementations of selected ciphers (ACORN, NORX, CLOC-AES, SILC-AES, and SILC-LED). These implementations use an improved version of the CAESAR HW Development Package designed for lightweight applications, and are fully compliant with the CAESAR HW Application Programming Interface (API) for Authenticated Ciphers. Our lightweight implementations achieve an average of 55% reduction in area and 40% reduction in power compared to their corresponding high-speed versions. Although the average energy per bit of the lightweight ciphers increases by a factor of 3.6, the lightweight version of NORX actually uses 47% less energy per bit than its corresponding high-speed implementation.
Prabhanjan Ananth, Arka Rai Choudhuri, Aarushi Goel, Abhishek Jain
ePrint Report
We study the exact round complexity of secure multiparty computation (MPC) in the honest majority setting. We construct the following round-optimal protocols in the plain model:

- Security with abort: Assuming public-key encryption (PKE), we construct two round MPC that achieves security with abort against any $t<\frac{n}{2}$ malicious corruptions. Previously, the best known two round protocols only achieved weaker security notions against smaller corruption thresholds.

- Guaranteed output delivery: Assuming PKE, we construct two round MPC over broadcast and private channels that achieves security with guaranteed output delivery against any $t<\frac{n}{2}$ (semi-honest) fail-stop corruptions. This result overcomes the lower bounds of Gennaro et al. [CRYPTO'02] and Gordon et al. [CRYPTO'15]. With the additional assumption of Zaps, we also construct three round MPC in the broadcast model that achieves security with guaranteed output delivery against any $t<\frac{n}{2}$ malicious corruptions. Previously, such a protocol was only known in the common reference string model, based on specific learning assumptions.

All of our results are obtained via general compilers that may be of independent interest.
Elette Boyle, Yuval Ishai, Antigoni Polychroniadou
ePrint Report
Secure computations on big data call for protocols that have sublinear communication complexity in the input length. While fully homomorphic encryption (FHE) provides a general solution to the problem, employing it on a large scale is currently quite far from being practical. This is also the case for secure computation tasks that reduce to weaker forms of FHE such as ''somewhat homomorphic encryption'' or single-server private information retrieval (PIR).
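For concreteness, the sketch below implements the classical two-server information-theoretic PIR with $O(\sqrt{n})$ communication (the database laid out as a $\sqrt{n}\times\sqrt{n}$ bit matrix). It only illustrates what sublinear-communication retrieval looks like; the single-server PIR referred to above is a different, computational primitive, and the function names here are illustrative.

```python
import secrets

def pir_queries(num_cols, j):
    """Client: a random column subset S for server 1, and S xor {j} for server 2."""
    s1 = [secrets.randbelow(2) for _ in range(num_cols)]
    s2 = list(s1)
    s2[j] ^= 1
    return s1, s2

def pir_answer(db, query):
    """Server: for every row, XOR together the bits in the queried columns."""
    return [sum(row[c] for c in range(len(query)) if query[c]) % 2 for row in db]

def pir_recover(ans1, ans2, i):
    """Client: XOR the two answers; entry i of the result is the requested bit."""
    return ans1[i] ^ ans2[i]

# Toy 4x4 bit database; retrieve bit db[2][3] with two non-colluding servers.
db = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1], [0, 1, 1, 0]]
q1, q2 = pir_queries(4, j=3)
assert pir_recover(pir_answer(db, q1), pir_answer(db, q2), i=2) == db[2][3]
```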

Quite unexpectedly, Aggarwal, Mishra, and Pinkas (Eurocrypt 2004), Brickell and Shmatikov (Asiacrypt 2005), and shelat and Venkitasubramaniam (Asiacrypt 2015) have shown that in several natural instances of secure computation on big data, there are practical sublinear communication protocols that only require sublinear local computation and minimize the use of expensive public-key operations. This raises the question of whether similar protocols exist for other natural problems.

In this paper we put forward a framework for separating ''practical'' sublinear protocols from ''impractical'' ones, and establish a methodology for identifying ''provably hard'' big-data problems that do not admit practical protocols. This is akin to the use of NP-completeness to separate hard algorithmic problems from easy ones. We show that while the previous protocols of Aggarwal et al., Brickell and Shmatikov, and shelat and Venkitasubramaniam are indeed classified as being ''practical'' in this framework, slight variations of the problems they solve and other natural computational problems on big data are hard.

Our negative results are established by showing that the problem at hand is ''PIR-hard'' in the sense that any secure protocol for the problem implies PIR on a large database. This imposes a barrier on the local computational cost of secure protocols for the problem. We also identify a new natural relaxation of PIR that we call semi-PIR, which is useful for establishing ''intermediate hardness'' of several practically motivated secure computation tasks. We show that semi-PIR implies slightly sublinear PIR via an adaptive black-box reduction and that ruling out a stronger black-box reduction would imply a major breakthrough in complexity theory. We also establish information-theoretic separations between semi-PIR and PIR, showing that some problems that we prove to be semi-PIR-hard are not PIR-hard.
Koji Chida, Daniel Genkin, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Yehuda Lindell, Ariel Nof
ePrint Report
Protocols for secure multiparty computation enable a set of parties to compute a function of their inputs without revealing anything but the output. The security properties of the protocol must be preserved in the presence of adversarial behavior. The two classic adversary models considered are semi-honest (where the adversary follows the protocol specification but tries to learn more than allowed by examining the protocol transcript) and malicious (where the adversary may follow any arbitrary attack strategy). Protocols for semi-honest adversaries are often far more efficient, but in many cases the security guarantees are not strong enough.

In this paper, we present new protocols for securely computing any functionality represented by an arithmetic circuit. We utilize a new method for verifying that the adversary does not cheat, which yields a cost of just twice that of semi-honest protocols in some settings. Our protocols are information-theoretically secure in the presence of malicious adversaries, assuming an honest majority. We present protocol variants for small and large fields, and show how to efficiently instantiate them based on replicated secret sharing and Shamir sharing. As with previous works in this area aiming to achieve high efficiency, our protocol is secure with abort and does not achieve fairness, meaning that the adversary may receive output while the honest parties do not.
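As a reference point for the replicated secret sharing mentioned above, here is a minimal 2-out-of-3 replicated sharing over a prime field in Python. The field modulus and helper names are illustrative assumptions, and this is not the paper's protocol.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime, chosen here only for illustration

def share(x):
    """2-out-of-3 replicated secret sharing of x over Z_P.

    The secret is split into additive shares x1 + x2 + x3 = x (mod P), and
    party i holds the pair (x_{i+1}, x_{i+2}) of consecutive shares.  Any two
    parties can reconstruct; any single party learns nothing.
    """
    x1, x2 = secrets.randbelow(P), secrets.randbelow(P)
    x3 = (x - x1 - x2) % P
    return [(x1, x2), (x2, x3), (x3, x1)]

def reconstruct(share_i, share_j, i, j):
    """Reconstruct from the share pairs of two distinct parties i and j (0-indexed)."""
    additive = {}
    for party, (a, b) in {i: share_i, j: share_j}.items():
        additive[party % 3] = a
        additive[(party + 1) % 3] = b
    return sum(additive.values()) % P

shares = share(12345)
assert reconstruct(shares[0], shares[2], 0, 2) == 12345
```

Linear operations on such sharings (additions, multiplications by public constants) are local and share-wise; multiplication is where interaction is needed, and that is where the paper's cheating-verification method aims to keep the overhead close to the semi-honest cost.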

We implemented our protocol and ran experiments for different numbers of parties, different network configurations and different circuit depths. Our protocol significantly outperforms the previous best for this setting (Lindell and Nof, CCS 2017); for a large number of parties, our implementation runs almost an order of magnitude faster than theirs.
Andre Esser, Felix Heuer, Robert Kübler, Alexander May, and Christian Sohler
ePrint Report
The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations.

We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension $k$ in time $2^{\frac 43\frac k{\log k}}$ and memory $2^{\frac 23\frac k{\log k}}$. Using the Dissection technique due to Dinur et al. (Crypto ’12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory while the running time remains subexponential.
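The exponents above are easy to evaluate numerically. The sketch below merely computes the stated $2^{\frac{4}{3}\frac{k}{\log k}}$ (time) and $2^{\frac{2}{3}\frac{k}{\log k}}$ (memory) bit-complexities for a given LPN dimension $k$, taking the logarithm to be base 2; it is not an implementation of BKW or of the Dissection-based trade-offs.

```python
import math

def bkw_tradeoff_bits(k):
    """Return (log2 time, log2 memory) for the quoted LPN trade-off in dimension k.

    Assumes the logarithm in the exponent is base 2; this only evaluates the
    formulas stated above, it does not implement the BKW algorithm.
    """
    base = k / math.log2(k)
    return (4 / 3) * base, (2 / 3) * base

for k in (512, 1024):
    t, m = bkw_tradeoff_bits(k)
    print(f"k={k}: time ~ 2^{t:.1f}, memory ~ 2^{m:.1f}")
```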

Reducing the memory consumption of BKW below its running time also allows us to propose a first quantum version QBKW for the BKW algorithm.
Shixiong Wang, Longjiang Qu, Chao Li, Shaojing Fu
ePrint Report
In this paper, we study the conditions for finding small solutions $(x,y,z)=(x_0, y_0, z_0)$ of the equation $Bx-Ay=z$. The framework is derived from Wiener's small-private-exponent attack on RSA and from May and Ritzenhofen's investigation of the implicit factorization problem, both of which can be generalized to solve the above equation. We show that these two methods, together with Coppersmith's method, are equivalent for solving $Bx-Ay=z$ in the general case. Then, based on Coppersmith's method, we present two improvements for solving $Bx-Ay=z$ in some special cases. The first improvement addresses the case where either $\gcd(x_0,z_0,A)$ or $\gcd(y_0,z_0,B)$ is large enough. As applications of this improvement, we propose new cryptanalyses of RSA, such as new results on the generalized implicit factorization problem, attacks with known bits of a prime factor, and so on. The motivation for these applications comes from the oracle-based complexity of factorization problems. The second improvement assumes that the value of $C \equiv z_0\ (\mathrm{mod}\ x_0)$ is known. We present two attacks on RSA as its applications: one focuses on the case with known bits of the private exponent together with the prime factor, and the other considers the case of a small difference between the two prime factors. Our new attacks on RSA improve upon the corresponding previous results, and the correctness of the approach is verified by experiments.
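The continued-fraction technique underlying Wiener's attack, which the paper takes as its starting point, can be sketched directly: when $|z_0|$ is small enough (roughly $|z_0| < A/(2x_0)$, with $\gcd(x_0,y_0)=1$), the fraction $y_0/x_0$ appears among the convergents of $B/A$. The Python sketch below searches those convergents; the numbers in the example are made up, and the paper's Coppersmith-based improvements are not reflected here.

```python
def convergents(num, den):
    """Yield the continued-fraction convergents (p, q) of num/den, with p/q ~ num/den."""
    p0, p1 = 0, 1
    q0, q1 = 1, 0
    while den:
        a = num // den
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        yield p1, q1
        num, den = den, num - a * den

def small_solution(A, B, z_bound):
    """Search convergents y/x of B/A for a solution B*x - A*y = z with |z| <= z_bound.

    This is the Wiener-style continued-fraction method; it succeeds roughly when
    |z_0| < A/(2*x_0), which the paper's improvements go beyond.
    """
    for y, x in convergents(B, A):
        z = B * x - A * y
        if abs(z) <= z_bound:
            return x, y, z
    return None

# Toy instance with the planted small solution (x, y, z) = (7, 12345, 5).
print(small_solution(A=1000003, B=1763576720, z_bound=10))  # (7, 12345, 5)
```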
Aggelos Kiayias, Annabell Kuldmaa, Helger Lipmaa, Janno Siim, Thomas Zacharias
ePrint Report
In state-of-the-art e-voting systems, a bulletin board (BB) is a critical component for preserving election integrity and availability. Although it is common in the literature to assume that a BB is a centralized entity that is trusted, in the recent works of Culnane and Schneider [CSF 2014] and Chondros et al. [ICDCS 2016], the importance of removing the BB as a single point of failure has been extensively discussed. Motivated by these works, we introduce a framework for the formal security analysis of the BB functionality modeled as a distributed system that comprises (i) a subsystem of item collection (IC) peers that receive and store the submitted items, and (ii) a subsystem of audit board (AB) peers, where the IC subsystem publishes the stored items for verification. Our framework treats a secure BB as a robust public transaction ledger, defined by Garay et al. [Eurocrypt 2015], that additionally supports the generation of receipts for successful posting. Namely, in our model, a secure BB system achieves Persistence and Liveness that are confirmable, in the sense that any malicious behavior of the BB system can be detected via a verification mechanism.

As a case study for our framework, we analyze the BB system of Culnane and Schneider and point out its weaknesses. We demonstrate an attack revealing that this system does not achieve Confirmable Liveness, even in the case where the adversary is computationally bounded and covert, i.e., it may deviate from the protocol specification but does not want to be detected. In addition, we show that special care should be taken in the choice of the underlying cryptographic primitives, so that the claimed fault-tolerance threshold of N/3 out-of-N corrupted IC peers is preserved.

Next, based on our analysis, we introduce a new BB protocol that upgrades the [CSF 2014] protocol. We prove that it tolerates fewer than N/3 out-of-N corrupted IC peers, both for Persistence and Confirmable Liveness, against a computationally bounded general Byzantine adversary. Furthermore, Persistence can also be Confirmable, if we distribute the AB (originally a centralized entity in [CSF 2014]) as a replicated service with honest majority.

04 June 2018

Prabhanjan Ananth, Yuval Ishai, Amit Sahai
ePrint Report
We consider the problem of protecting general computations against constant-rate random leakage. That is, the computation is performed by a randomized boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability $p > 0$, the leakage reveals essentially nothing about the input.

In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to previous constructions of Ajtai (STOC 2011) and Andrychowicz et al. (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability $p<1$, there is a finite basis B such that leakage-resilient computation with leakage probability $p$ can be realized using circuits over the basis B.

We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random $p'$-leakage of input values alone, for any $p<p'<1$. Finally, we complement this by a negative result, showing that for every basis B there is some leakage probability $p<1$ such that for any $p'<1$, leakage tolerance as above cannot be achieved in general.

We show that our modular approach is also useful for protecting computations against worst-case leakage. In this model, we require that leakage of any $t$ (adversarially chosen) wires reveals nothing about the input. By combining our construction with a previous derandomization technique of Ishai et al. (ICALP 2013), we show that security in this setting can be achieved with $O(t^{1+\varepsilon})$ random bits, for every constant $\varepsilon > 0$. This (near-optimal) bound significantly improves upon previous constructions that required more than $t^{3}$ random bits.
Jung Hee Cheon, Andrey Kim
ePrint Report
Homomorphic Encryption for Arithmetic of Approximate Numbers (HEAAN), with its vector packing technique, has proved its potential in cryptographic applications. In this paper, we propose MHEAAN, a generalization of the HEAAN scheme to the multivariate case. Our design retains the advantage of HEAAN that precision losses during evaluation are bounded by the depth of the circuit and exceed unencrypted approximate arithmetic, such as floating-point operations, by no more than one bit. In addition, thanks to the multivariate structure of the plaintext space, we suggest a general method for packing multidimensional structures such as matrices and tensors in a single ciphertext. We provide a concrete two-dimensional construction and show the efficiency of our scheme on several matrix operations, such as matrix transposition, matrix multiplication, and matrix inversion.
Yuyu Wang, Takahiro Matsuda, Goichiro Hanaoka, Keisuke Tanaka
ePrint Report
In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption, via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS ’13), and the tampering model we treat is the plain model, where public parameters and public/secret key pairs are potentially tampered with.
Tim van de Kamp, Andreas Peter, Maarten H. Everts, Willem Jonker
ePrint Report
We propose the first multi-client predicate-only encryption scheme capable of efficiently testing the equality of two encrypted vectors. Our construction can be used for the privacy-preserving monitoring of relations among multiple clients. Since both the clients’ data and the predicates are encrypted, our system is suitable for situations in which this information is considered sensitive. We prove our construction plaintext and predicate private in the generic bilinear group model using random oracles, and secure under chosen-plaintext attack with unbounded corruptions under the symmetric external Diffie-Hellman assumption. Additionally, we provide a proof-of-concept implementation that is capable of evaluating one thousand predicates defined over the inputs of ten clients in less than a minute on commodity hardware.
Gilles Barthe, Sonia Belaïd, Pierre-Alain Fouque, Benjamin Grégoire
ePrint Report
Masking is a popular countermeasure for protecting both hardware and software implementations against differential power analysis. A main strength of software masking is that its security guarantees can be captured formally through well-established models. The existence of such models, and their relation to the probabilistic information flow studied in formal methods, has been instrumental to the emergence of fully automated methods for analyzing masked implementations. In particular, state-of-the-art tools such as maskVerif (Barthe et al., EUROCRYPT 2015) have been used successfully for analyzing masked implementations at high orders. In contrast, security models for hardware implementations have remained somewhat less developed, and no prior verification tool has accommodated hardware-specific sources of vulnerabilities such as glitches. Recently, Bloem et al. formalized the security of masked hardware implementations against glitches and gave a method based on SAT solvers for verifying security automatically. However, their method only handles small functionalities and low orders.
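For readers less familiar with the countermeasure itself, the sketch below shows first-order Boolean masking in Python, including an ISW-style AND gadget whose single fresh mask refreshes the cross terms. This is only a textbook example of the kind of gadget such verification tools reason about, not maskVerif's input language or the hardware model of Bloem et al.

```python
import secrets

def mask(x, bits=8):
    """Split x into two Boolean shares (x0, x1) with x0 ^ x1 == x."""
    r = secrets.randbits(bits)
    return r, x ^ r

def masked_xor(a, b):
    """XOR of two masked values is share-wise and needs no fresh randomness."""
    return a[0] ^ b[0], a[1] ^ b[1]

def masked_and(a, b, bits=8):
    """First-order ISW-style AND gadget.

    One fresh random value r refreshes the cross terms; the left-to-right
    evaluation order of the XORs in c1 matters for probing security.
    """
    r = secrets.randbits(bits)
    c0 = (a[0] & b[0]) ^ r
    c1 = (a[1] & b[1]) ^ (r ^ (a[0] & b[1]) ^ (a[1] & b[0]))
    return c0, c1

def unmask(c):
    return c[0] ^ c[1]

x, y = 0b10110011, 0b01011100
assert unmask(masked_and(mask(x), mask(y))) == (x & y)
assert unmask(masked_xor(mask(x), mask(y))) == (x ^ y)
```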

In this paper, we extend the maskVerif tool (Barthe et al., EUROCRYPT 2015) with a unified framework to efficiently and formally verify both software and hardware implementations. In the process, we introduce a simple but expressive intermediate language. Our representation requires that each instruction be instrumented with leakage expressions that may depend on the expressions arising in the instruction and on previous computation. Despite its simplicity, our intermediate representation covers a broad range of models from the literature; moreover, it is easily amenable to efficient formal verification.

Our results significantly improve over prior work, both in terms of coverage and efficiency. In particular, we demonstrate that our tool is able to analyze examples from Bloem et al. (EUROCRYPT 2018) much faster and at high orders.
Emmanuelle Anceaume, Antonella Del Pozzo, Romaric Ludinard, Maria Potop-Butucaru, Sara Tucci-Piergiovanni
ePrint Report
The presented work continues the recent efforts of the distributed computing community dedicated to the theoretical aspects of blockchains. This paper is the first to specify blockchains as a composition of abstract data types, together with a hierarchy of consistency criteria that formally characterizes the histories admissible for the distributed programs that use them. Our work is based on an original oracle-based construction that, along with new consistency definitions, captures the eventual convergence process in blockchain systems. The paper also presents some results on the implementability of the presented abstractions and maps representative existing blockchains, from both academia and industry, into our framework.
Carsten Baum, Jonathan Bootle, Andrea Cerulli, Rafael del Pino, Jens Groth, Vadim Lyubashevsky
ePrint Report
We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $p$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $N$ gates, the communication complexity of our protocol is $O\left(\sqrt{N\lambda\log^3{N}}\right)$, where $\lambda$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damgård et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest.
Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan
ePrint Report
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a `proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and we further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not.

This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either only based on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of Ball et al. (STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.

Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
Phillip Rogaway, Yusi Zhang
ePrint Report
Often the simplest way of specifying game-based cryptographic definitions is apparently barred because the adversary would have some trivial win. Disallowing or invalidating these wins can lead to complex or unconvincing definitions. We suggest a generic way around this difficulty. We call it indistinguishability up to correctness, or IND|C. Given games G and H and a correctness condition C, we define an advantage measure $\mathrm{Adv}^{\mathrm{indc}}_{G,H,C}$ wherein G/H distinguishing attacks are effaced to the extent that they are inevitable due to C. We formalize this in the language of oracle silencing, an alternative to exclusion-style and penalty-style definitions. We apply our ideas to a domain where game-based definitions have been cumbersome: stateful authenticated-encryption (sAE). We rework existing sAE notions and encompass new ones, like replay-free AE permitting a specified degree of out-of-order message delivery.
Shashank Agrawal, Chaya Ganesh, Payman Mohassel
ePrint Report
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma protocols and QAP-based SNARKs. The former is highly efficient for proving algebraic statements while the latter is superior for arithmetic representations.

Motivated by applications such as privacy-preserving credentials and privacy-preserving audits in cryptocurrencies, we study the design of NIZKs for composite statements that compose algebraic and arithmetic statements in arbitrary ways. Specifically, we provide a framework for proving statements that consist of ANDs, ORs, and function compositions of a mix of algebraic and arithmetic components. This allows us to explore the full spectrum of trade-offs between proof size, prover cost, and CRS size/generation cost. This leads to proofs for statements of the form "knowledge of $x$ such that $SHA(g^x)=y$ for some public $y$", where the prover's work is 500 times fewer exponentiations than for a QAP-based SNARK, at the cost of increasing the proof size to 2404 group and field elements. In application to anonymous credentials, our techniques result in 8 times fewer exponentiations for the prover at the cost of increasing the proof size to 298 elements.
Viet Tung Hoang, Stefano Tessaro, Ni Trieu
ePrint Report
Format-preserving encryption (FPE) produces ciphertexts which have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS '16; Durak and Vaudenay, CRYPTO '17) have highlighted security deficiencies in the recent NIST SP800-38G standard. This has left open the question of whether practical schemes with high security exist.
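To make the object of study concrete, the sketch below is a toy Feistel-based FPE over six-decimal-digit inputs: like FF1/FF3 it keeps the ciphertext in the same numeric format as the plaintext, but its hash-based round function, round count, and lack of tweaks are arbitrary choices for illustration and carry no security claim.

```python
import hashlib

def _round(key, rnd, half, modulus):
    """Toy round function: hash (key, round index, input half) into Z_modulus.
    A stand-in for the AES-based round functions of FF1/FF3, not the real thing."""
    digest = hashlib.sha256(key + f"{rnd}:{half}".encode()).digest()
    return int.from_bytes(digest, "big") % modulus

def fpe_encrypt(key, plaintext, digits=6, rounds=8):
    """Encrypt an integer of `digits` decimal digits (digits assumed even)
    to another integer of the same format."""
    half_mod = 10 ** (digits // 2)
    left, right = divmod(plaintext, half_mod)
    for rnd in range(rounds):
        left, right = right, (left + _round(key, rnd, right, half_mod)) % half_mod
    return left * half_mod + right

def fpe_decrypt(key, ciphertext, digits=6, rounds=8):
    half_mod = 10 ** (digits // 2)
    left, right = divmod(ciphertext, half_mod)
    for rnd in reversed(range(rounds)):
        left, right = (right - _round(key, rnd, left, half_mod)) % half_mod, left
    return left * half_mod + right

key = b"demo key"
ct = fpe_encrypt(key, 123456)
assert 0 <= ct < 10**6 and fpe_decrypt(key, ct) == 123456
```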

In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is a set of new known-plaintext message-recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP800-38G standard), which improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions on the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability specific to FF3 and the way it handles odd-length domains, which leads to a substantial speedup in our attacks.

We also show the first attacks against non-Feistel-based FPEs. Specifically, we show a strong message-recovery attack on FNR, a construction proposed by Cisco which replaces two rounds of the Feistel construction with a pairwise-independent permutation, following the paradigm of Naor and Reingold (JoC '99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity within commercial applications.

All of our attacks show that existing constructions fall short of achieving desirable security levels. For Feistel and the FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
Sanjam Garg, Mohammad Hajiabadi, Mohammad Mahmoody, Ameer Mohammed
ePrint Report
Understanding whether public-key encryption can be based on one-way functions is a fundamental open problem in cryptography. The seminal work of Impagliazzo and Rudich [STOC'89] shows that black-box constructions of public-key encryption from one-way functions are impossible. However, this impossibility result leaves open the possibility of using non-black-box techniques for achieving this goal.

One of the most powerful classes of non-black-box techniques, which can be based on one-way functions (OWFs) alone, is Yao's garbled circuit technique [FOCS'86]. As for the non-black-box power of this technique, the recent work of Dottling and Garg [CRYPTO'17] shows that the use of garbling allows us to circumvent known black-box barriers in the context of identity-based encryption.

We prove that garbling of circuits that have OWF (or even random oracle) gates in them is insufficient for obtaining public-key encryption. Additionally, we show that this model also captures (non-interactive) zero-knowledge proofs for relations with OWF gates. This indicates that currently known OWF-based non-black-box techniques are perhaps insufficient for realizing public-key encryption.