International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

27 November 2017

Virendra Sule, Anmol Yadav
ePrint Report
Cryptanalysis of Bivium is presented with the help of a new Boolean system-solving algorithm. The algorithm works on a Boolean equation model of Bivium for a known keystream. The solver computes satisfying assignments via implicants and is distinct from well-known CNF-satisfiability solvers and algebraic cryptanalysis methods. It is inherently parallel and returns all satisfying assignments of the system of equations in terms of implicants. Cryptanalysis of Bivium is classified into four categories of increasing strength, and it is shown that the solver completes key recovery in category 2 in $48$ hours. Computational algorithms for forming the equations and for the symbolic operations on Boolean functions are also developed afresh and presented. Limitations of these implementations with respect to the Bivium model and its cryptanalysis are determined, which should be useful for the cryptanalysis of general stream ciphers.
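The solver's interface can be illustrated with a toy Boolean system solved by brute force; the implicant-based computation described in the abstract is precisely what avoids this exponential enumeration, and the equations below are hypothetical, not from the Bivium model:

```python
from itertools import product

def all_solutions(equations, n):
    """Return every satisfying assignment of a Boolean equation system.

    Brute-force enumeration over all 2^n assignments; the implicant-based
    solver in the paper is designed to avoid exactly this.
    """
    return [bits for bits in product([0, 1], repeat=n)
            if all(eq(bits) for eq in equations)]

# Hypothetical toy system (not from the Bivium model):
#   x0 XOR x1 = 1   and   x1 AND x2 = x0
eqs = [lambda b: (b[0] ^ b[1]) == 1,
       lambda b: (b[1] & b[2]) == b[0]]
print(all_solutions(eqs, 3))  # [(0, 1, 0)]
```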
Robin Kwant, Tanja Lange, Kimberley Thissen
ePrint Report
This paper studies ways to backdoor lattice-based systems following Young and Yung's work on backdooring RSA and discrete-log based systems. For the NTRU encryption scheme we show how to build a backdoor and to change the system so that each ciphertext leaks information about the plaintext to the owner of the backdoor. For signature schemes the backdoor leaks information about the signing key to the backdoor owner.

As in Young and Yung's work, the backdoor uses the freedom that random selections offer in the protocol to hide a secret message encrypted to the backdoor owner. The most interesting, and quite different, part is how the hidden messages are embedded and retrieved.
Leif Both, Alexander May
ePrint Report
We propose a new algorithm for the decoding of random binary linear codes of dimension $n$ that is superior to previous algorithms for high error rates. In the case of Full Distance decoding, the best known bound of $2^{0.0953n}$ is currently achieved via the BJMM-algorithm of Becker, Joux, May and Meurer. Our algorithm significantly improves this bound down to $2^{0.0886n}$.

Technically, our improvement comes from the heavy use of Nearest Neighbor techniques in all steps of the construction, whereas the BJMM-algorithm can only take advantage of Nearest Neighbor search in the last step.

Since cryptographic instances of LPN usually work in the high error regime, our algorithm has implications for LPN security.
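The Nearest Neighbor subproblem underlying these techniques, finding all pairs of binary vectors within a small Hamming distance, can be sketched with a quadratic-time toy; the algorithmic contribution in decoding algorithms such as this one is performing the step sub-quadratically:

```python
from itertools import combinations

def close_pairs(vectors, d):
    """All pairs of binary vectors (encoded as ints) within Hamming
    distance d. Quadratic-time sketch; decoding algorithms of the kind
    discussed here perform this nearest-neighbor step sub-quadratically."""
    return [(u, v) for u, v in combinations(vectors, 2)
            if bin(u ^ v).count("1") <= d]

vecs = [0b10110, 0b10010, 0b01101, 0b10111]
print(close_pairs(vecs, 1))  # [(22, 18), (22, 23)]
```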
Justin Holmgren, Lisa Yang
ePrint Report
We give a three-player game whose non-signaling value is constant (2/3) under any number of parallel repetitions. This is the first known setting where parallel repetition completely fails to reduce the maximum winning probability of computationally unbounded players.

We also show that the best known results on non-signaling parallel repetition apply to a relatively limited class of games. In particular, these games cannot yield log-prover MIPs for languages beyond PSPACE.
Daniel Kales, Maria Eichlseder, Florian Mendel
ePrint Report
Authenticated ciphers rely on the uniqueness of nonces to meet their security goals. In this work, we investigate the implications of reusing nonces for three third-round candidates of the ongoing CAESAR competition: Tiaoxin, AEGIS, and MORUS. We show that an attacker who can force nonces to be reused can reduce the security of the ciphers, with results ranging from full key recovery to forgeries with practical complexity and a very low number of nonce-misuse queries.
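The generic danger of nonce reuse in stream-cipher-based designs can be sketched as follows; the hash-based toy keystream below is purely illustrative and is not one of the analyzed CAESAR candidates:

```python
import hashlib

def toy_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Hash-based toy keystream; a stand-in for a nonce-based stream cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = toy_keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"k" * 16, b"n" * 12
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1, c2 = encrypt(key, nonce, p1), encrypt(key, nonce, p2)

# Reusing the nonce makes the keystream cancel: c1 XOR c2 = p1 XOR p2,
# leaking plaintext structure without any key recovery.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
```

The attacks in the paper go much further than this generic observation, but the example shows why a single repeated nonce already violates the security model.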
Maria Eichlseder, Daniel Kales
ePrint Report
The TWEAKEY/STK construction is an increasingly popular approach for designing tweakable block ciphers that notably uses a linear tweakey schedule. Several recent attacks have analyzed the implications of this approach for differential cryptanalysis and other attacks that can take advantage of related tweakeys. We generalize the clustering approach of a recent differential attack on the tweakable block cipher MANTIS-5 and describe a tool for efficiently finding and evaluating such clusters. More specifically, we consider the set of all differential characteristics compatible with a given truncated characteristic, tweak difference, and optional constraints for the differential. We refer to this set as a semi-truncated characteristic and estimate its probability by analyzing the distribution of compatible differences at each step.

We apply this approach to find a semi-truncated differential characteristic for MANTIS-6 with probability about $2^{-67.73}$ and derive a key-recovery attack with a complexity of about $2^{53.94}$ chosen-plaintext queries and computations. The data-time product is $2^{107.88} \ll 2^{126}$.
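The data-time product follows directly from the stated complexities; a quick, purely illustrative sanity check of the exponent arithmetic:

```python
import math

data = 2 ** 53.94  # chosen-plaintext queries
time = 2 ** 53.94  # computations
product_exponent = math.log2(data * time)
print(product_exponent)  # approximately 107.88, well below 126
```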
Albrecht Petzoldt
ePrint Report
The HFEv- signature scheme is one of the most promising candidates for post-quantum digital signatures, most notably because of its short signatures. It has long been known that direct attacks work more efficiently against HFEv- systems than against random systems; the reason was found by Jintai Ding et al., who proved an upper bound on the degree of regularity of these systems. However, not much is known about the efficiency of the hybrid approach against the HFEv- scheme. In order to find suitable parameter sets for HFEv- at higher levels of security, this topic has to be studied in more detail. In this article we address this question by performing a large number of computer experiments. As our experiments show, guessing variables does not help to speed up direct attacks against HFEv- systems; therefore, the hybrid approach does not have to be considered in the parameter selection of these schemes. Furthermore, we develop a simple formula to estimate the degree of regularity of a determined HFEv- system. Together with our results on the behavior of the hybrid approach, this formula gives an easy way to estimate the complexity of direct attacks against HFEv- systems.
Jeroen Delvaux
ePrint Report
A physically unclonable function (PUF) is a circuit of which the input–output behavior is designed to be sensitive to the random variations of its manufacturing process. This building block hence facilitates the authentication of any given device in a population of identically laid-out silicon chips, similar to the biometric authentication of a human. The focus and novelty of this work is the development of efficient impersonation attacks on the following two PUF-based authentication protocols: (1) the protocol of Ye, Hu, and Li, as presented at AsianHOST 2016, and (2) the protocol of Gao, Ma, Al-Sarawi, Abbott, and Ranasinghe, as published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2017.
Anton Muehlemann
ePrint Report
The wisdom of the crowd is a valuable asset in today's society. It is not only important in predicting elections but also plays an essential role in marketing and the financial industry. Having a trustworthy source of opinion can make forecasts more accurate and markets more predictable. Until now, a fundamental problem of surveys has been the lack of incentives for participants to provide accurate information. Classical solutions such as small monetary rewards or the chance of winning a prize are often not very attractive and also do not prevent multiple entries or randomly filled-in answers. In this work, we present a customizable incentivization framework that solves both problems. Apart from predicting events, the framework can also be used to govern decentralized autonomous organizations.
Riad S. Wahby, Ioanna Tzialla, abhi shelat, Justin Thaler, Michael Walfish
ePrint Report
We present a zero-knowledge argument for NP with low communication complexity, low concrete cost for both the prover and the verifier, and no trusted setup, based on standard cryptographic assumptions (DDH). Specifically, communication is proportional to the square root of the size of the witness, plus $d\cdot\log(G)$ where $d$ is the depth and $G$ is the width of the verifying circuit. When applied to batched or data-parallel statements, the prover's cost is linear and the verifier's cost is sub-linear in the verifying circuit size, both with good constants. Together, these properties represent a new point in the tradeoffs among setup, complexity assumptions, proof size, and computational cost.

Our argument is public coin, so we apply the Fiat-Shamir heuristic to produce a zero-knowledge succinct non-interactive argument of knowledge (zkSNARK), which we call Hyrax. We evaluate Hyrax on three benchmarks: SHA-256 Merkle trees, image transformation, and matrix multiplication. We find that Hyrax scales to 6--27$\times$ larger circuit sizes than a highly-optimized prior system, and that its proofs are 2--10$\times$ smaller than those of prior work with similar properties.
Mikhail Anokhin
ePrint Report
Let $\mathbb G_n$ be the subgroup of elements of odd order in the group $\mathbb Z_n^\star$ and let $\mathcal U(\mathbb G_n)$ be the uniform probability distribution on $\mathbb G_n$. In this paper, we establish a probabilistic polynomial-time reduction from finding a nontrivial divisor of a composite number $n$ to finding a nontrivial relation between elements chosen independently and uniformly at random from $\mathbb G_n$. Assume that finding a nontrivial divisor of a random number in some set $N$ of composite numbers (for a given security parameter) is a computationally hard problem. Then, using the above-mentioned reduction, we prove that the family $\{(\mathbb G_n,\mathcal U(\mathbb G_n))\}_{n\in N}$ of computational abelian groups is weakly pseudo-free. The disadvantage of this result is that the probability ensemble $\{\mathcal U(\mathbb G_n)\}_{n\in N}$ is not polynomial-time samplable. To overcome this disadvantage, we construct a polynomial-time computable function $\nu\colon D\to N$ (where $D\subseteq\{0,1\}^*$) and a polynomial-time samplable probability ensemble $\{\mathcal G_d\}_{d\in D}$ (where $\mathcal G_d$ is a distribution on $\mathbb G_{\nu(d)}$ for each $d\in D$) such that the family $\{(\mathbb G_{\nu(d)},\mathcal G_d)\}_{d\in D}$ of computational abelian groups is weakly pseudo-free.
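For a small composite with known factorization (which the reduction itself does not assume; the factorization, hence $\lambda(n)$, is used here only for illustration), one can see how squaring away the 2-part of the group exponent maps any unit into $\mathbb G_n$; a minimal sketch:

```python
from math import gcd

def odd_order_part(x, n, lam):
    # Square away the 2-part of the group exponent lam = lambda(n);
    # the result lies in G_n, the subgroup of odd-order elements.
    while lam % 2 == 0:
        x = x * x % n
        lam //= 2
    return x

def mult_order(x, n):
    k, y = 1, x
    while y != 1:
        y = y * x % n
        k += 1
    return k

# Demo: n = 21 = 3 * 7, so lambda(21) = lcm(2, 6) = 6.
n, lam = 21, 6
for x in range(1, n):
    if gcd(x, n) == 1:
        g = odd_order_part(x, n, lam)
        assert mult_order(g, n) % 2 == 1  # g is in G_n
```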
Daniel Jost, Ueli Maurer, Joao L. Ribeiro
ePrint Report
Information theoretically secure secret-key exchange between two parties, Alice and Bob, is a well-studied problem that is provably impossible without additional assumptions. However, it has been shown to be feasible in a model where -- in addition to an authenticated communication channel -- the parties also have access to some correlated randomness. One particular type of such correlated randomness is the so-called satellite setting, where a source of uniform random bits (e.g. sent by a satellite) is received by the parties and the eavesdropper, Eve, via antennas of different sizes, which is modeled as receiving the bits through independent binary symmetric channels with error probabilities $\epsilon_A$, $\epsilon_B$, and $\epsilon_E$, respectively, where typically $\epsilon_E\ll\epsilon_A\approx\epsilon_B$. The secret-key rate is then defined as the maximal rate, per random bit, at which Alice and Bob can agree on secret key bits about which Eve has arbitrarily little information.

While in computational cryptography the relevant parameter in a security analysis is a bound on Eve's computing power, the corresponding quantity in the satellite setting is a bound on Eve's antenna size. In this work, we study the optimal secret-key rate in the satellite setting as a function of the ratio $Q$ of Eve's tolerable antenna size and the honest parties' antenna size. Technically, we consider the ratio $Q$ of the capacities of the corresponding binary symmetric channels, which corresponds roughly to the antenna size ratio, and we consider the satellite's sending signal strength as a system design parameter that can be optimized. This setting was first considered by Gander and Maurer (ISIT 1994), who conjectured based on numerical evidence that the secret-key rate of the parity-check protocol decreases like $1/Q^2$.

As a first contribution, we prove that this is actually the case, and also prove that this rate is asymptotically optimal, i.e., no protocol can achieve an asymptotically better rate in a setting where $\epsilon_A \approx \epsilon_B$. As a second contribution, we consider the secret-key rate per second rather than per transmitted bit, which might be of higher practical interest, given that one particular way of adjusting the signal strength is to adjust the bit-rate at which the satellite broadcasts. To this end, we introduce a quantity that approximates the secret-key rate per second, prove that for the parity-check protocol this quantity decreases like $1/Q$, and prove again that this is optimal. The difference between quadratic and linear decrease is quite significant in the satellite setting because it is plausible for Eve's antenna to be orders of magnitude larger than Alice's and Bob's.
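The capacity ratio $Q$ can be computed directly from the standard binary symmetric channel capacity $C(\epsilon) = 1 - h(\epsilon)$; the error probabilities below are hypothetical and chosen only to illustrate the regime $\epsilon_E \ll \epsilon_A$:

```python
from math import log2

def h(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(eps):
    return 1.0 - h(eps)

# Hypothetical error probabilities: Eve's antenna is larger, so her
# channel is better (eps_E << eps_A ~ eps_B).
eps_honest, eps_eve = 0.45, 0.2
Q = bsc_capacity(eps_eve) / bsc_capacity(eps_honest)
print(Q)  # capacity ratio; per the paper, the per-bit key rate
          # decreases like 1/Q^2 and the per-second rate like 1/Q
```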
Mariya Bessonov, Dima Grigoriev, Vladimir Shpilrain
ePrint Report
We offer a probabilistic solution of Yao's millionaires' problem that gives a correct answer with probability (slightly) less than 1; on the positive side, the solution does not use any one-way functions.
Meenakshi Kansal, Ratna Dutta, Sourav Mukhopadhyay
ePrint Report
Secret keys are at high risk of exposure in modern computing infrastructures. As a result, forward secrecy, where the lifetime of a user is divided into discrete time periods, is a desirable feature for any cryptosystem: it preserves the security of past periods even if the current secret key is exposed. In this work, we introduce the first lattice-based forward-secure dynamic group signature scheme. The existing forward-secure group signature schemes are secure in the bilinear setting and become insecure in the presence of quantum computers. We employ a complete binary tree whose leaves are associated with discrete time periods, and label the nodes in a unique way such that nodes at the same depth have different Hamming weights. This helps the group manager issue distinct certificates to distinct users. Our scheme withstands framing and mis-identification attacks and preserves anonymity under the learning with errors (LWE) and short integer solution (SIS) assumptions.

24 November 2017

Dana Dachman-Soled, Huijing Gong, Mukul Kulkarni, Aria Shahverdi
ePrint Report
We consider the leakage resilience of the Ring-LWE analogue of the Dual-Regev encryption scheme (R-Dual-Regev for short), originally presented by Lyubashevsky et al. (Eurocrypt '13). Specifically, we would like to determine whether the R-Dual-Regev encryption scheme remains IND-CPA secure, even in the case where an attacker leaks information about the secret key.

We consider the setting where $R$ is the ring of integers of the $m$-th cyclotomic number field, for $m$ a power of two, and the Ring-LWE modulus is set to $q \equiv 1 \mod m$. This is the common setting used in practice and is desirable in terms of the efficiency and simplicity of the scheme. Unfortunately, in this setting $R_q$ is very far from being a field, so standard techniques for proving leakage resilience in the general lattice setting, which rely on the leftover hash lemma, do not apply. Therefore, new techniques must be developed.

In this work, we put forth a high-level approach for proving the leakage resilience of the R-Dual-Regev scheme, by generalizing the original proof of Lyubashevsky et al. (Eurocrypt '13). We then give three instantiations of our approach, proving that the R-Dual-Regev remains IND-CPA secure in the presence of three natural, non-adaptive leakage classes.
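The parameter condition $q \equiv 1 \bmod m$ can be checked directly; a minimal sketch that finds the smallest such prime for a power-of-two $m$ (toy trial division for illustration, not a production parameter search):

```python
def is_prime(n):
    # Toy trial division, fine for small parameters.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def smallest_ntt_modulus(m):
    """Smallest prime q with q = 1 (mod m), for a power-of-two m.
    Such q makes the m-th cyclotomic polynomial split completely mod q,
    so R_q is a product of fields rather than a field."""
    q = m + 1
    while not is_prime(q):
        q += m
    return q

print(smallest_ntt_modulus(16))   # 17
print(smallest_ntt_modulus(256))  # 257
```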
Robin Ankele, Andrew Simpson
ePrint Report
It is well understood that the huge volumes of data captured in recent years have the potential to underpin significant research developments in many fields. But, to realise these benefits, all relevant parties must be comfortable with how this data is shared. At the heart of this is the notion of privacy — which is recognised as being somewhat difficult to define. Previous authors have shown how privacy notions such as anonymity, unlinkability and pseudonymity might be combined into a single formal framework. In this paper we use and extend this work by defining privacy games for individual and group privacy within distributed environments. More precisely, for each privacy notion, we formulate a game that an adversary has to win in order to break the notion. Via these games, we aim to clarify understanding of, and relationships between, different privacy notions; we also aim to give an unambiguous understanding of adversarial actions. Additionally, we extend previous work via the notion of unobservability.
Fabrice Benhamouda, Huijia Lin
ePrint Report
We present new constructions of round-efficient, or even round-optimal, Multi-Party Computation (MPC) protocols from Oblivious Transfer (OT) protocols. Our constructions establish a tight connection between MPC and OT: In the setting of semi-honest security, for any $k \ge 2$, $k$-round semi-honest OT is necessary and complete for $k$-round semi-honest MPC. In the round-optimal case of $k = 2$, we obtain 2-round semi-honest MPC from 2-round semi-honest OT, resolving the round complexity of semi-honest MPC under a weak and necessary assumption. In comparison, previous 2-round constructions rely on either the heavy machinery of indistinguishability obfuscation or witness encryption, or the algebraic structure of bilinear pairing groups. More generally, for an arbitrary number of rounds $k$, all previous constructions of $k$-round semi-honest MPC require at least OT with $k'$ rounds for $k' \le \lfloor k/2 \rfloor$.

In the setting of malicious security, we show: For any $k \ge 5$, $k$-round malicious OT is necessary and complete for $k$-round malicious MPC. In fact, OT satisfying a weaker notion of delayed-semi-malicious security suffices. In the common reference string model, for any $k \ge 2$, we obtain $k$-round malicious Universal Composable (UC) protocols from any $k$-round semi-malicious OT and non-interactive zero-knowledge. Previous 5-round protocols in the plain model, and 2-round protocols in the common reference string model all require algebraic assumptions such as DDH or LWE.

At the core of our constructions is a new framework for garbling interactive circuits. Roughly speaking, it allows for garbling interactive machines that participate in interactions of a special form. The garbled machine can emulate the original interactions, receiving messages sent in the clear (without their being encoded using secrets), and reveals only the transcript of the interactions, provided that the transcript is computationally uniquely defined. We show that garbled interactive circuits for the purpose of constructing MPC can be implemented using OT. Along the way, we also propose a new primitive of witness selector that strengthens witness encryption, and a new notion of zero-knowledge functional commitments.
Nicola Atzei, Massimo Bartoletti, Stefano Lande, Roberto Zunino
ePrint Report
We propose a formal model of Bitcoin transactions, which is sufficiently abstract to enable formal reasoning, and at the same time is concrete enough to serve as an alternative documentation to Bitcoin. We use our model to formally prove some well-formedness properties of the Bitcoin blockchain, for instance that each transaction can only be spent once. We release an open-source tool through which programmers can write transactions in our abstract model, and compile them into standard Bitcoin transactions.
Cecilia Boschini, Jan Camenisch, Gregory Neven
ePrint Report
Higher-level cryptographic privacy-enhancing protocols such as anonymous credentials, voting schemes, and e-cash are often constructed by suitably combining signature, commitment, and encryption schemes with zero-knowledge proofs. Indeed, a large body of protocols have been constructed in that manner from Camenisch-Lysyanskaya signatures and generalized Schnorr proofs. In this paper, we build a similar framework for lattice-based schemes by presenting a signature and commitment scheme that are compatible with Lyubashevsky's Fiat-Shamir proofs with abort, currently the most efficient zero-knowledge proofs for lattices. To cope with the relaxed soundness guarantees of these proofs, we define corresponding notions of relaxed signature and commitment schemes. We demonstrate the flexibility and efficiency of our new primitives by constructing a new lattice-based anonymous attribute token scheme and providing concrete parameters to securely instantiate this scheme.
Martin Ekerå
ePrint Report
Ekerå and Håstad recently introduced a quantum algorithm for computing short discrete logarithms. To compute an m-bit logarithm d, this algorithm exponentiates group elements in superposition to (m + 2m/s)-bit exponents, for an integer s >= 1. Given a set of s good outputs, a classical post-processing algorithm then recovers d by enumerating an (s+1)-dimensional lattice.

Increasing s trades work in the quantum algorithm for classical work. However, as good outputs cannot be trivially distinguished, the quantum algorithm needs to be run s/p times and all subsets of s outputs exhaustively post-processed, where p is the probability of observing a good output.

In this paper, we introduce an improved post-processing algorithm that removes the need to exhaustively enumerate exponentially many (in s) (s+1)-dimensional lattices, at the expense of reducing a single (n+1)-dimensional lattice and applying Babai's algorithm. The new algorithm is practical for much greater values of s and in general requires fewer runs than the original algorithm.

As a concrete example, for m = 1024, s = 30 and n = 40, the algorithm recovers d within a few seconds with approximately 99% success probability.
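The lattice step can be illustrated with Babai's rounding heuristic, a simpler relative of the nearest-plane algorithm used in the post-processing; the 2-dimensional basis and target below are hypothetical:

```python
import numpy as np

def babai_round(B, t):
    """Babai's rounding heuristic: approximate the lattice vector
    (integer combination of the columns of B) closest to target t."""
    coeffs = np.rint(np.linalg.solve(B, t)).astype(int)
    return B @ coeffs, coeffs

B = np.array([[3.0, 1.0],
              [0.0, 2.0]])   # hypothetical 2-dimensional basis (columns)
t = np.array([5.8, 2.1])     # target point

v, c = babai_round(B, t)
print(c, v)  # coefficients [2 1], lattice vector [7. 2.]
```

Rounding works well when the basis is short and nearly orthogonal, which is why lattice-reduction precedes it in practice.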