International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

25 February 2019

Dan Boneh, Elette Boyle, Henry Corrigan-Gibbs, Niv Gilboa, Yuval Ishai
ePrint Report
We introduce and study the notion of fully linear probabilistically checkable proof systems. In such a proof system, the verifier can make a small number of linear queries that apply jointly to the input and a proof vector.

Our new type of proof system is motivated by applications in which the input statement is not fully available to any single verifier, but can still be efficiently accessed via linear queries. This situation arises in scenarios where the input is partitioned or secret-shared between two or more parties, or alternatively is encoded using an additively homomorphic encryption or commitment scheme. This setting appears in the context of secure messaging platforms, verifiable outsourced computation, PIR writing, private computation of aggregate statistics, and secure multiparty computation (MPC). In all these applications, there is a need for fully linear proof systems with short proofs.
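The secret-sharing motivation above can be made concrete in a few lines of Python. This is only an illustrative sketch (the field size, sharing scheme, and vectors are arbitrary choices, not parameters from the paper): a linear query answered locally on two additive shares sums to the query on the true input, which is why a verifier restricted to linear queries never needs to see the input itself.

```python
import random

P = 2**61 - 1  # an illustrative prime field

def additive_share(x):
    """Split vector x into two additive shares with x1 + x2 = x (mod P)."""
    x1 = [random.randrange(P) for _ in x]
    x2 = [(xi - si) % P for xi, si in zip(x, x1)]
    return x1, x2

def linear_query(q, v):
    """A single linear query <q, v> over the field."""
    return sum(qi * vi for qi, vi in zip(q, v)) % P

x = [3, 1, 4, 1, 5]                    # the input vector (here in the clear)
q = [random.randrange(P) for _ in x]   # a random linear query
x1, x2 = additive_share(x)

# Each share-holder answers the same query locally; the two answers add
# up to the query on the real input.
assert (linear_query(q, x1) + linear_query(q, x2)) % P == linear_query(q, x)
```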

While several efficient constructions of fully linear proof systems are implicit in the interactive proofs literature, many questions about their complexity are open. We present several new constructions of fully linear zero-knowledge proof systems with sublinear proof size for "simple" or "structured" languages. For example, in the non-interactive setting of fully linear PCPs, we show how to prove that an input vector $x\in\mathbb{F}^n$ satisfies a single degree-2 equation with a proof of size $O(\sqrt n)$ and $O(\sqrt n)$ linear queries, which we show to be optimal. More generally, for languages that can be recognized by systems of constant-degree equations, we can reduce the proof size to $O(\log n)$ at the cost of $O(\log n)$ rounds of interaction.

We use our new proof systems to construct new short zero-knowledge proofs on distributed and secret-shared data. These proofs can be used to improve the performance of many of the example systems mentioned above.

Finally, we observe that zero-knowledge proofs on distributed data provide a general-purpose tool for protecting protocols for secure multiparty computation (MPC) against malicious parties. Applying our short fully linear PCPs to "natural" MPC protocols in the honest-majority setting, we can achieve unconditional protection against malicious parties with sublinear additive communication cost. We use this to improve the communication complexity of recent honest-majority MPC protocols. For instance, using any pseudorandom generator, we obtain a 3-party protocol for Boolean circuits in which the amortized communication cost is only one bit per AND gate per party (compared to 10 bits in the best previous protocol), matching the best known protocols for semi-honest adversaries.
Antoine Joux
ePrint Report
In this paper, we recast state-of-the-art constructions for fully homomorphic encryption in the simple language of arithmetic modulo large Fermat numbers. The techniques used to construct our scheme are quite standard in the realm of (R)LWE based cryptosystems. However, the use of arithmetic in such a simple ring greatly simplifies exposition of the scheme and makes its implementation much easier.
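To illustrate why this ring is pleasant to work in (the parameters below are toy values, not those of the scheme): reduction modulo a Fermat number $F_k = 2^{2^k}+1$ needs no division, because $2^{2^k} \equiv -1 \pmod{F_k}$, so each $2^k$-bit limb simply folds back in with alternating sign.

```python
def fermat(k):
    """The k-th Fermat number F_k = 2^(2^k) + 1."""
    return 2**(2**k) + 1

def reduce_mod_fermat(x, k):
    """Reduce a nonnegative integer modulo F_k without division, using
    2^(2^k) == -1 (mod F_k): split x into 2^k-bit limbs and alternate
    adding and subtracting them."""
    b = 2**k
    mask = (1 << b) - 1
    acc, sign = 0, 1
    while x:
        acc += sign * (x & mask)
        x >>= b
        sign = -sign
    return acc % fermat(k)

# Sanity check against plain modular reduction (F_5 = 2^32 + 1).
x, y = 123456789, 987654321
assert reduce_mod_fermat(x * y, 5) == (x * y) % fermat(5)
```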

In terms of performance, our test implementation of the proposed scheme is slower than the current speed records but remains within a comparable range. We hope that the detailed study of our simplified scheme by the community can make it competitive and provide new insights into FHE constructions at large.
Jiangshan Yu, Man Ho Allen Au, Paulo Esteves-Verissimo
ePrint Report
We develop new foundations for transaction untraceability in CryptoNote-style blockchain systems. In particular, we observe new attacks; develop theoretical foundations to model transaction untraceability; establish the least upper bound on the transaction untraceability guarantee; provide ways to efficiently and automatically verify whether a given ledger achieves optimal transaction untraceability; and provide a general solution that achieves provably optimal transaction untraceability.

Unlike previous cascade effect attacks (ESORICS ’17 and PETS ’18) on CryptoNote-style transaction untraceability, we consider not only a passive attacker but also an active adaptive attacker. Our observed attacks allow both types of attacker to trace blockchain transactions that cannot be traced by the existing attacks. We develop a series of new games, which we call “The Sun-Tzu Survival Problem”, to model CryptoNote-style blockchain transaction untraceability and our identified attacks. In addition, we obtain seven novel results, three negative and four positive. In particular, thanks to our abstract game, we are able to build bipartite graphs to model transaction untraceability, and provide reductions that formally relate the hardness of calculating untraceability to the hardness of counting the perfect matchings in all possible bipartite graphs. We prove that calculating transaction untraceability is a #P-complete problem, which is believed to be even harder than NP-complete problems. In addition, we provide the first result on the least upper bound of transaction untraceability. Moreover, through our theoretical results, we are able to provide ways to efficiently and automatically verify whether a given ledger achieves optimal transaction untraceability. Furthermore, we propose a simple strategy for CryptoNote-style blockchain systems to achieve optimal untraceability. We take Monero as a concrete example to demonstrate how to apply this strategy to optimise the untraceability guarantee provided by Monero.
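The counting problem behind the reduction can be shown concretely at toy scale. This brute-force Python sketch (graph and sizes are illustrative only) counts perfect matchings of a bipartite graph, which equals the permanent of its biadjacency matrix; computing the permanent is #P-complete in general (Valiant, 1979), which is the source of the hardness the abstract invokes.

```python
from itertools import permutations

def count_perfect_matchings(biadj):
    """Count perfect matchings of an n x n bipartite graph given by its
    0/1 biadjacency matrix, by brute force over all n! permutations.
    The result equals the permanent of the matrix."""
    n = len(biadj)
    return sum(
        all(biadj[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# Toy check: the complete bipartite graph K_{3,3} has 3! = 6 perfect matchings.
k33 = [[1, 1, 1] for _ in range(3)]
print(count_perfect_matchings(k33))  # 6
```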
Ralph Ankele, Christoph Dobraunig, Jian Guo, Eran Lambooij, Gregor Leander, Yosuke Todo
ePrint Report
The design and analysis of dedicated tweakable block ciphers is a quite recent and very active research field that provides an ongoing stream of new insights. For instance, results of Kranz, Leander, and Wiemer from FSE 2017 show that the addition of a tweak using a linear tweak schedule does not introduce new linear characteristics. In this paper, we consider --- to the best of our knowledge --- for the first time the effect of the tweak on zero-correlation linear cryptanalysis for ciphers that have a linear tweak schedule. It turns out that the tweak can often be used to get zero-correlation linear hulls covering more rounds compared to just searching zero-correlation linear hulls on the data-path of a cipher. Moreover, this also implies the existence of integral distinguishers on the same number of rounds. We have applied our technique on round reduced versions of QARMA, MANTIS, and Skinny. As a result, we can present --- to the best of our knowledge --- the best attack (with respect to number of rounds) on a round-reduced variant of QARMA.
William Diehl, Farnoud Farahmand, Abubakr Abdulgadir, Jens-Peter Kaps, Kris Gaj
ePrint Report
Authenticated ciphers potentially provide resource savings and security improvements over the joint use of secret-key ciphers and message authentication codes. The CAESAR competition has aimed to choose the most suitable authenticated ciphers for several categories of applications, including a lightweight use case, for which the primary criteria are performance in resource-constrained devices, and ease of protection against side channel attacks (SCA). In March 2018, two of the candidates from this category, ACORN and Ascon, were selected as CAESAR contest finalists. In this research, we compare two SCA-resistant FPGA implementations of ACORN and Ascon, where one set of implementations has area consumption nearly equivalent to the de facto standard AES-GCM, and the other set has throughput (TP) close to that of AES-GCM. The results show that protected implementations of ACORN and Ascon, with area consumption less than but close to AES-GCM, have 23.3 and 2.5 times, respectively, the TP of AES-GCM. Likewise, implementations of ACORN and Ascon with TP greater than but close to AES-GCM, consume 18 percent and 74 percent of the area, respectively, of AES-GCM.
Katherine E. Stange
ePrint Report
We provide several reductions of Ring-LWE problems to smaller Ring-LWE problems in the presence of samples of a restricted form (i.e. (a,b) such that a is restricted to a subring, or multiplicative coset of a subfield of one CRT factor). To create and exploit such restricted samples, we propose Ring-BKW, a version of the Blum-Kalai-Wasserman algorithm which respects the ring structure. It has several key advantages based on the ring structure, including smaller tables, reduced or eliminated back-substitution, and a new opportunity for parallelization. We focus on two-power cyclotomic Ring-LWE with parameters proposed for practical use, with the exception that many splitting types are considered. The orthogonality of the lattice for two-power cyclotomics is exploited. In general, higher residue degree is an advantage to attacks.
Prasanna Ravi, Sourav Sen Gupta, Anupam Chattopadhyay, Mustafa Khairallah, Zakaria Najm, Shivam Bhasin
ePrint Report
In state-of-the-art design paradigm, time, space and power efficiency are considered the primary design constraints. Quite often, this approach adversely impacts the security of the overall system, especially when security is adopted as a countermeasure after some vulnerability is identified. In this position paper, we motivate the idea that security should also be considered as an architectural design constraint in addition to time, space and power. We show that security and efficiency objectives along the three design axes of time, space and power are in fact tightly coupled while identifying that security stands in direct contrast with them across all layers of architectural design. We attempt to prove our case utilizing a proof-by-evidence approach wherein we refer to various works across literature that explicitly imply the eternal conflict between security and efficiency. Thus, security has to be treated as a design constraint from the very beginning. Additionally, we advocate a security-aware design flow starting from the choice of cryptographic primitives, protocols and system design.
Jesper Buus Nielsen, Mark Simkin
ePrint Report
Threshold secret sharing allows a dealer to split a secret into $n$ shares such that any authorized subset of cardinality at least $t$ of those shares efficiently reveals the secret, while at the same time any unauthorized subset of cardinality less than $t$ contains no information about the secret. Leakage-resilience requires that the secret remains hidden even if the unauthorized subset is given a bounded amount of additional leakage from every share.

In this work, we study leakage-resilient secret sharing schemes with information-theoretic security and prove a lower bound on the share size and the required amount of randomness of any such scheme. We prove that for any leakage-resilient secret sharing scheme either the amount of randomness across all shares or the share size has to be linear in $n$. More concretely, for a secret sharing scheme with $p$-bit long shares, $\ell$-bit leakage per share, where $\widehat{t}$ shares uniquely define the remaining $n - \widehat{t}$ shares, it has to hold that $$ p \ge \frac{\ell (n - t)}{\widehat{t}}\ . $$

We use our lower bound to gain further insights into a question that was recently posed by Benhamouda et al. (CRYPTO'18), who ask to what extent existing regular secret sharing schemes already provide protection against leakage. The authors proved that Shamir's secret sharing is leakage-resilient for very large thresholds $t = n - \mathcal{O}(\log n)$ and conjectured that it is also $1$-bit leakage-resilient for any threshold that is a constant fraction of the total number of shares. We do not disprove their conjecture, but show that two mild and natural strengthenings thereof are false, thus concluding that their conjecture is essentially the best one could hope for.

The results described above only apply to secret sharing schemes with information-theoretic security, since the lower bound proof relies on an adversary that can enumerate all possible secret sharings and thus runs in time exponential in at least the share size $p$ and the full reconstruction threshold $\widehat{t}$. In addition, we present a conceptually simple and highly efficient attack for the specific case of $2$-out-of-$n$ Shamir secret sharing that only requires $1$ bit of leakage, has a running time of $\mathcal{O}(n)$ field operations, and could easily be run in practice by a computationally bounded adversary.
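For readers unfamiliar with the scheme under attack, here is a minimal Python sketch of $t$-out-of-$n$ Shamir secret sharing over a toy field (illustrative only; the leakage attack itself is not reproduced here): the secret is the constant term of a random degree-$(t-1)$ polynomial, and any $t$ shares recover it by Lagrange interpolation.

```python
import random

P = 2**31 - 1  # a small Mersenne prime field, for illustration only

def share(secret, t, n):
    """Shamir t-out-of-n sharing: pick a random degree-(t-1) polynomial
    f with f(0) = secret; share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, t=2, n=5)
assert reconstruct(shares[:2]) == 12345   # any 2 shares suffice
assert reconstruct(shares[3:]) == 12345
```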
David Wong
ePrint Report
At Real World Crypto 2017, Joan Daemen won the Levchin Prize and announced that he believed permutation-based crypto was the future of symmetric cryptography. At the same conference, Mike Hamburg introduced Strobe, a symmetric protocol framework capable of protecting sessions as well as building symmetric cryptographic primitives for the single cost of Joan Daemen’s permutation Keccak. The next year, at Real World Crypto 2018, Trevor Perrin presented the Noise protocol framework, a modern TLS-like protocol with similar traits but with a focus on flexibility, offering many handshake patterns to choose from in order to authenticate peers of a connection in different ways. Disco is the natural merge of the two projects, creating a new protocol based solely on two unique primitives: Curve25519 and the Keccak permutation (or more correctly its wrapper, Strobe). Experimental results show that a library based on Disco can be implemented on top of these two cryptographic primitives with only a thousand lines of code, while offering both a flexible way to encrypt sessions and a complete cryptographic library for all of an application’s needs.
Yue Guo, Rafael Pass, Elaine Shi
ePrint Report
Murphy, Murky, Mopey, Moody, and Morose decide to write a paper together over the Internet and submit it to the prestigious CRYPTO’19 conference that has the most amazing PC. They encounter a few problems. First, not everyone is online every day: some are lazy and go skiing on Mondays; others cannot use git correctly and they are completely unaware that they are losing messages. Second, a small subset of the co-authors may be secretly plotting to disrupt the project (e.g., because they are writing a competing paper in stealth). Suppose that each day, sufficiently many honest co-authors are online (and use git correctly); moreover, suppose that messages checked into git on Monday can be correctly received by honest and online co-authors on Tuesday or any future day. Can the honest co-authors successfully finish the paper in a small number of days such that they make the CRYPTO deadline; and perhaps importantly, can all the honest co-authors, including even those who are lazy and those who sometimes use git incorrectly, agree on the final theorem?
Rohit Sinha, Sivanarayana Gaddam, Ranjit Kumaresan
ePrint Report
In light of widespread misuse of personal data, we enable users to control the sharing and use of their data, even when offline, by binding that data to policies. A policy specifies the allowed function, conditions guarding the execution (based on the history of all prior computations on that data), and identities of the input providers and output recipients. For this level of control, we aim for a compute system that ensures policy compliance to the input providers, and fairness (i.e., either all or no party gets the output) to the output recipients, without requiring these parties to trust each other or the compute host.

Recently, trusted execution environments (TEEs), such as Intel SGX and Sanctum enclaves, are finding applications in outsourced computing on sensitive data. However, since TEEs are at the mercy of an untrusted host for storage and network communication, they are incapable of enforcing history-dependent policies or fairness. For instance, against a user's wish that only an aggregate function over her entire data is revealed, an adversarial host can repeatedly evaluate that aggregate function on different subsets of her dataset, and learn the individual records. The adversary may also collude and deliver the output only to a subset of the output recipients, thus violating fairness.

This paper presents LucidiTEE, the first system to enable multiple parties to jointly compute on large-scale private data, while guaranteeing that the aforementioned policies are enforced even when the input providers are offline, and guaranteeing fairness to all output recipients. To that end, LucidiTEE develops a set of novel protocols between a network of TEEs and a distributed, shared, append-only ledger. LucidiTEE uses the ledger only to enforce policies; it does not store inputs, outputs, or state on the ledger, nor does it duplicate execution amongst the participants, which allows it to scale to large data and large number of parties.

We demonstrate several policy-based applications including personal finance, federated machine learning, fair $n$-party information exchange, and private set intersection for medical records.
E.V. Flynn, Yan Bo Ti
ePrint Report
We study $(\ell,\ell)$-isogeny graphs of principally polarised supersingular abelian surfaces (PPSSAS). The $(\ell,\ell)$-isogeny graph has cycles of small length that can be used to break the collision resistance assumption of the genus two isogeny hash function suggested by Takashima. Algorithms for computing $(2,2)$-isogenies on the level of Jacobians and $(3,3)$-isogenies on the level of Kummers are used to develop a genus two version of the supersingular isogeny Diffie--Hellman protocol of Jao and de~Feo. The genus two isogeny Diffie--Hellman protocol achieves the same level of security as SIDH but uses a prime with a third of the bit length.
Nicholas Genise, Craig Gentry, Shai Halevi, Baiyu Li, Daniele Micciancio
ePrint Report
We describe a somewhat homomorphic GSW-like encryption scheme, natively encrypting matrices rather than just single elements. This scheme offers much better performance than existing homomorphic encryption schemes for evaluating encrypted (nondeterministic) finite automata (NFAs). Differently from GSW, we do not know how to reduce the security of this scheme to LWE, instead we reduce it to a stronger assumption, that can be thought of as an inhomogeneous variant of the NTRU assumption. This assumption (that we term iNTRU) may be useful and interesting in its own right, and we examine a few of its properties. We also examine methods to encode regular expressions as NFAs, and in particular explore a new optimization problem, motivated by our application to encrypted NFA evaluation. In this problem, we seek to minimize the number of states in an NFA for a given expression, subject to the constraint on the ambiguity of the NFA.
Satrajit Ghosh, Mark Simkin
ePrint Report
Threshold private set intersection enables Alice and Bob who hold sets $A$ and $B$ of size $n$ to compute the intersection $A \cap B$ if the sets do not differ by more than some threshold parameter $t$. In this work, we investigate the communication complexity of this problem and we establish the first upper and lower bounds. We show that any protocol has to have a communication complexity of $\Omega(t)$. We show that an almost matching upper bound of $\tilde{\mathcal{O}}(t)$ can be obtained via fully homomorphic encryption. We present a computationally more efficient protocol based on weaker assumptions, namely additively homomorphic encryption, with a communication complexity of $\tilde{\mathcal{O}}(t^2)$. For applications like biometric authentication, where a given fingerprint has to have a large intersection with a fingerprint from a database, our protocols may result in significant communication savings.

Prior to this work, all previous protocols had a communication complexity of $\Omega(n)$. Our protocols are the first ones with communication complexities that mainly depend on the threshold parameter $t$ and only logarithmically on the set size $n$.
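The target functionality can be stated in a few lines of plain (non-private) Python. This is only a reference sketch under one natural reading of "do not differ by more than $t$" (symmetric difference of size at most $t$); the paper's protocols compute the same output without exposing the sets.

```python
def threshold_psi(A, B, t):
    """Ideal, non-private reference functionality for threshold PSI:
    reveal the intersection only when the symmetric difference of the
    two sets has size at most t; otherwise reveal nothing."""
    if len(A ^ B) <= t:
        return A & B
    return None

A = {1, 2, 3, 4, 5}
B = {1, 2, 3, 4, 6}
print(threshold_psi(A, B, t=2))  # {1, 2, 3, 4}
print(threshold_psi(A, B, t=1))  # None: the sets differ too much
```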
Kasper Green Larsen, Mark Simkin
ePrint Report
A secret sharing scheme allows a dealer to distribute shares of a secret among a set of $n$ parties $P=\{p_1,\dots,p_n\}$ such that any authorized subset of parties can reconstruct the secret, yet any unauthorized subset learns nothing about it. The family $\mathcal{A} \subseteq 2^P$ of all authorized subsets is called the access structure. Classic results show that if $\mathcal{A}$ contains precisely all subsets of cardinality at least $t$, then there exists a secret sharing scheme where the length of the shares is proportional to $\lg n$ bits plus the length of the secret. However, for general access structures, the best known upper bounds have shares of length exponential in $n$, whereas the strongest lower bound shows that the shares must have length at least $n/\log n$. Beimel conjectured that the exponential upper bound is tight, but proving it has so far resisted all attempts. In this paper, we almost prove Beimel's conjecture by showing that there exists an access structure $\mathcal{A}$, such that any secret sharing scheme for $\mathcal{A}$ must have either exponential share length, or the function used for reconstructing the secret by authorized parties must have an exponentially long description. As an example corollary, we conclude that if one insists that authorized parties can reconstruct the secret via a constant fan-in boolean circuit of size polynomial in the share length, then there exists an access structure that requires a share length that is exponential in $n$.
Vanesa Daza, Alonso González, Zaira Pindado, Carla Ràfols, Javier Silva
ePrint Report
Despite recent advances in the area of pairing-friendly Non-Interactive Zero-Knowledge proofs, there have not been many efficiency improvements in constructing arguments of satisfiability of quadratic (and larger degree) equations since the publication of the Groth-Sahai proof system (JoC'12). In this work, we address the problem of aggregating such proofs using techniques derived from the interactive setting and recent constructions of SNARKs. For certain types of quadratic equations, this problem was investigated before by González et al. (ASIACRYPT'15). Compared to their result, we reduce the proof size by approximately 50% and the common reference string from quadratic to linear, at the price of using less standard computational assumptions. A theoretical motivation for our work is to investigate how efficient NIZK proofs based on falsifiable assumptions can be. On the practical side, quadratic equations appear naturally in several cryptographic schemes like shuffle and range arguments.
Danping Shi, Siwei Sun, Yu Sasaki, Chaoyun Li, Lei Hu
ePrint Report
We show that the correlation of any quadratic Boolean function can be read out from its so-called disjoint quadratic form. We further propose a polynomial-time algorithm that can transform an arbitrary quadratic Boolean function into its disjoint quadratic form. With this algorithm, the exact correlation of quadratic Boolean functions can be computed efficiently.
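The quantity being computed can be illustrated by brute force at toy sizes (exponential in $n$; the point of the paper's algorithm is precisely to avoid this enumeration via the disjoint quadratic form). The example function below is an illustrative disjoint quadratic form, not one drawn from MORUS.

```python
from itertools import product

def correlation(f, n):
    """Correlation of a Boolean function f: {0,1}^n -> {0,1}, defined as
    2^-n * sum over all x of (-1)^f(x); 0 means f is balanced."""
    return sum(1 if f(x) == 0 else -1
               for x in product((0, 1), repeat=n)) / 2**n

# A disjoint quadratic form: f(x) = x0*x1 XOR x2*x3. Each product term on
# disjoint variables contributes a factor 1/2, so the correlation is 1/4.
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print(correlation(f, 4))  # 0.25
```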

We apply this method to analyze the linear trails of MORUS (one of the seven finalists of the CAESAR competition), which are found with the help of a generic model for linear trails of MORUS-like key-stream generators. In our model, any tool for finding linear trails of block ciphers can be used to search for trails of MORUS-like key-stream generators. As a result, a set of trails with correlation $2^{-38}$ is identified for all versions of full MORUS, while the correlations of previously published best trails for MORUS-640 and MORUS-1280 are $2^{-73}$ and $2^{-76}$ respectively (ASIACRYPT 2018). This significantly improves the complexity of the attack on MORUS-1280-256 from $2^{152}$ to $2^{76}$. These new trails also lead to the first distinguishing and message-recovery attacks on MORUS-640-128 and MORUS-1280-128 with surprisingly low complexities around $2^{76}$.

Moreover, we observe that the condition for exploiting these trails in an attack can be more relaxed than previously thought, which shows that the new trails are superior to previously published ones in terms of both correlation and the number of ciphertext blocks involved.

23 February 2019

Hyderabad, India, 16 December - 20 December 2019
Event Calendar
Event date: 16 December to 20 December 2019
Submission deadline: 12 July 2019
Notification: 25 September 2019
Fuzhou, China, 25 October - 27 October 2019
Event Calendar
Event date: 25 October to 27 October 2019
Submission deadline: 6 May 2019
Notification: 8 July 2019

21 February 2019

Department of Engineering at Aarhus University, Denmark
Job Posting
Applications are invited for a PhD fellowship/scholarship at the Department of Engineering, Aarhus University, Denmark. The position is available from 1 August 2019 or later.

Title: Verifiable cryptographic software

Zero-knowledge proofs are integral for deploying privacy-preserving cryptocurrencies and other blockchain applications, as they represent a fundamental building block for proving statements about confidential data. The most popular framework for such proofs is based on cryptographic pairings defined over elliptic curves, where pairing-based zero-knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) underlie private transactions.

The PhD candidate will investigate techniques to develop a formally verified, efficient software library for pairing-based cryptography, as a means to support current blockchain projects relying on zero-knowledge proofs. The PhD candidate will also be involved in other educational activities, such as serving as teaching assistant in courses related to his/her expertise.

The project is a collaboration between researchers from the Departments of Engineering and Computer Science; the DIGIT Centre for Digitalisation, Big Data and Data Analytics; and the recently opened Concordium Blockchain Research Center.

Qualifications:

We are looking for dedicated and enthusiastic applicants, preferably with a Master’s degree in Computer Science/Engineering, Mathematics, or a related discipline. A theoretical background in cryptography or formal verification will be important for the project. Practical experience with software development and the Coq proof assistant will be seen as a plus. Analytical and critical thinking are naturally essential to pursuing a PhD degree. Further requirements are fluency in English, good reporting/organization skills and being able to work independently.

Closing date for applications: 1 May 2019

Contact: Diego F. Aranha, Assistant Professor, dfaranha (at) eng.au.dk; or Bas Spitters, Associate Professor, spitters (at) cs.au.dk

More information: https://phd.scitech.au.dk/for-applicants/apply-here/may-2019/verifiable-cryptographic-software/
