International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

12 April 2021

Benny Applebaum, Oded Nir
ePrint Report
A secret-sharing scheme allows a dealer to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$. The collection of authorized/unauthorized sets can be captured by a monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$. In this paper, we focus on monotone functions all of whose min-terms are sets of size $a$, and on their duals -- monotone functions whose max-terms are of size $b$. We refer to these classes as $(a,n)$-upslices and $(b,n)$-downslices, and note that these natural families correspond to monotone $a$-regular DNFs and monotone $(n-b)$-regular CNFs. We derive the following results.

1. (General downslices) Every downslice can be realized with total share size of $1.5^{n+o(n)}<2^{0.585 n}$. Since every monotone function can be cheaply decomposed into $n$ downslices, we obtain a similar result for general access structures, improving the previously known $2^{0.637n+o(n)}$ complexity of Applebaum, Beimel, Nir and Peter (STOC 2020). We also achieve a minor improvement in the exponent of linear secret-sharing schemes.

2. (Random mixture of upslices) Following Beimel and Farras (TCC 2020), who studied the complexity of random DNFs with constant-size terms, we consider the following general distribution $F$ over monotone DNFs: for each width value $a\in [n]$, uniformly sample $k_a$ monotone terms of size $a$, where $k=(k_1,\ldots,k_n)$ is an arbitrary vector of non-negative integers. We show that, except with exponentially small probability, $F$ can be realized with share size of $2^{0.5 n+o(n)}$ and can be linearly realized with an exponent strictly smaller than $2/3$. Our proof also provides a candidate distribution for ``exponentially-hard'' access structures. We use our results to explore connections between several seemingly unrelated questions about the complexity of secret-sharing schemes, such as worst-case vs. average-case, linear vs. non-linear, and primal vs. dual access structures. We prove that, in at least one of these settings, there is a significant gap in secret-sharing complexity.
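To make the upslice terminology concrete, the following minimal sketch (hypothetical helper names, plain Python, not from the paper) enumerates the min-terms of a monotone access structure given as a membership predicate and checks whether it is an $(a,n)$-upslice, i.e., whether every min-term has size exactly $a$.

```python
from itertools import combinations

def min_terms(f, n):
    """Minimal authorized sets of a monotone f over parties {0,...,n-1}.

    A set S is a min-term if f(S) = 1 but f(T) = 0 for every proper subset T.
    f takes a frozenset of party indices; monotonicity is assumed, not checked.
    """
    terms = []
    for size in range(n + 1):
        for S in combinations(range(n), size):
            S = frozenset(S)
            # S is minimal iff no previously found (smaller) min-term sits inside it.
            if f(S) and not any(S > T for T in terms):
                terms.append(S)
    return terms

def is_upslice(f, n, a):
    """True iff every min-term of f has size exactly a, i.e. f is an (a,n)-upslice."""
    return all(len(S) == a for S in min_terms(f, n))

# Example: the 2-out-of-3 threshold function is a (2,3)-upslice.
f = lambda S: len(S) >= 2
print(min_terms(f, 3))      # [frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})]
print(is_upslice(f, 3, 2))  # True
```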
Danilo Gligoroski
ePrint Report
The non-commutative and non-associative algebraic structures known as entropic groupoids that satisfy the "palintropic" property, i.e., $x^{\mathbf{A} \mathbf{B}} = (x^{\mathbf{A}})^{\mathbf{B}} = (x^{\mathbf{B}})^{\mathbf{A}} = x^{\mathbf{B} \mathbf{A}}$, were proposed by Etherington in the 1940s. These relations are exactly the Diffie-Hellman key exchange relations used with groups. The arithmetic for non-associative power indices, known as logarithmetic, was also proposed by Etherington and later developed by others in the 1950s-1970s. However, as far as we know, no one has ever proposed a succinct notation for exponentially large non-associative power indices that would enable fast exponentiation, analogous to how fast exponentiation is achieved in ordinary arithmetic by repeated squaring.

In this paper, we define ringoid algebraic structures $(G, \boxplus, *)$ where $(G, \boxplus)$ is an Abelian group and $(G, *)$ is a non-commutative and non-associative groupoid with an entropic and palintropic subgroupoid which is a quasigroup, and we name these structures Entropoids. We further define a succinct notation for non-associative bracketing patterns and propose algorithms for fast exponentiation with those patterns.

Next, by analogy with the developed cryptographic theory of discrete logarithm problems, we define several hard problems in entropoid-based cryptography, such as the Discrete Entropoid Logarithm Problem (DELP), the Computational Entropoid Diffie-Hellman Problem (CEDHP), and the Decisional Entropoid Diffie-Hellman Problem (DEDHP). We conjecture that DEDHP is hard in Sylow $q$-subquasigroups. We then instantiate an entropoid Diffie-Hellman key exchange protocol. Due to the non-commutativity and non-associativity, the entropoid-based cryptographic primitives are expected to be resistant to quantum algorithms. At the same time, thanks to the proposed succinct notation for the power indices, the communication overhead in the entropoid-based Diffie-Hellman key exchange is very low: for 128 bits of security, 64 bytes in total are communicated in both directions, and for 256 bits of security, 128 bytes in total are communicated in both directions.

Our final contribution is two entropoid-based digital signature schemes. The schemes are constructed via the Fiat-Shamir transformation of an identification scheme whose security relies on a new hardness assumption: computing roots in finite entropoids is hard. If this assumption withstands the test of time, the first proposed signature scheme has excellent properties: for classical security levels between 128 and 256 bits, the public and private key sizes are between 32 and 64 bytes, and the signature sizes are between 64 and 128 bytes. The second signature scheme reduces finding roots in finite entropoids to computing discrete entropoid logarithms. In our opinion, this is a safer but more conservative design, and it pays the price of doubled key and signature sizes.

We give a proof-of-concept implementation in SageMath 9.2 for all proposed algorithms and schemes in an appendix.
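The palintropic relation above is exactly the commutation property that makes Diffie-Hellman work over groups. The sketch below illustrates it in the familiar group setting with ordinary modular exponentiation (not the entropoid operation, which needs the paper's bracketing patterns and SageMath code); parameters are toy values chosen only for the demonstration.

```python
import secrets

# Toy parameters for illustration only; a real instantiation needs a large safe prime
# and a generator of a large subgroup.
p = 2**127 - 1          # a Mersenne prime
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's secret power index ("A")
b = secrets.randbelow(p - 2) + 1   # Bob's secret power index ("B")

A = pow(g, a, p)        # x^A, sent Alice -> Bob
B = pow(g, b, p)        # x^B, sent Bob -> Alice

# (x^A)^B == (x^B)^A: the group analogue of the palintropic property,
# so both sides derive the same shared secret.
assert pow(B, a, p) == pow(A, b, p)
```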
Coşku Acay, Rolph Recto, Joshua Gancher, Andrew C. Myers, Elaine Shi
ePrint Report
Modern distributed systems involve interactions between principals with limited trust, so cryptographic mechanisms are needed to protect confidentiality and integrity. At the same time, most developers lack the training to securely employ cryptography. We present Viaduct, a compiler that transforms high-level programs into secure, efficient distributed realizations. Viaduct's source language allows developers to declaratively specify security policies by annotating their programs with information flow labels. The compiler uses these labels to synthesize distributed programs that use cryptography efficiently while still defending the source-level security policy. The Viaduct approach is general, and can be easily extended with new security mechanisms.

Our implementation of the Viaduct compiler comes with an extensible runtime system that includes plug-in support for multiparty computation, commitments, and zero-knowledge proofs. We have evaluated the system on a set of benchmarks, and the results indicate that our approach is feasible and can use cryptography in efficient, nontrivial ways.
Chris Brzuska, Antoine Delignat-Lavaud, Christoph Egger, Cédric Fournet, Konrad Kohbrok, Markulf Kohlweiss
ePrint Report
We analyze the security of the TLS 1.3 key establishment protocol, as specified at the end of its rigorous standardization process. We define a core key-schedule and reduce its security to concrete assumptions against an adversary that controls client and server configurations and adaptively chooses some of their keys. Our model supports all key derivations featured in the standard, including its negotiated modes and algorithms that combine an optional Diffie-Hellman exchange for forward secrecy with optional pre-shared keys supplied by the application or recursively established in prior sessions. We show that the output keys are secure as soon as any of their input key materials are. Our compositional, code-based proof makes use of state separation to yield concrete reductions despite the complexity of the key schedule. We also discuss (late) changes to the standard that would improve its robustness and simplify its analysis.
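For intuition, the TLS 1.3 key schedule analyzed here is a tower of HKDF-Extract and HKDF-Expand-Label calls (RFC 8446, Section 7.1). The sketch below shows one simplified slice of that tower using only the Python standard library; the label encoding is abbreviated and the (EC)DHE secret is a placeholder, so this is an illustration of the extract-then-expand structure rather than a conformant implementation.

```python
import hashlib, hmac

HASH, HASHLEN = hashlib.sha256, 32

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, HASH).digest()

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    # HkdfLabel = uint16 length || "tls13 " + label || context (simplified encoding).
    full_label = ("tls13 " + label).encode()
    hkdf_label = (length.to_bytes(2, "big")
                  + bytes([len(full_label)]) + full_label
                  + bytes([len(context)]) + context)
    # Single-block HKDF-Expand: T(1) = HMAC(secret, info || 0x01); enough for <= 32 bytes.
    return hmac.new(secret, hkdf_label + b"\x01", HASH).digest()[:length]

# One step of the schedule: mix a PSK and an (EC)DHE secret into the handshake secret.
psk = bytes(HASHLEN)                      # all-zero PSK when none is used
early_secret = hkdf_extract(bytes(HASHLEN), psk)
derived = hkdf_expand_label(early_secret, "derived", HASH(b"").digest(), HASHLEN)
dhe_secret = b"\x11" * 32                 # placeholder for the (EC)DHE shared secret
handshake_secret = hkdf_extract(derived, dhe_secret)
```

The point the paper formalizes is visible even in this fragment: the output key is derived from two input key materials (PSK and DHE share), and it should remain secure as soon as either one of them is.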
Michele Fabbrini
ePrint Report
In this paper I propose a new key agreement scheme applying a well-known property of powers to a particular pair of elements of the cyclic group generated by a primitive root of a prime p. The model, whose security relies on the difficulty of computing discrete logarithms when p is a “safe prime”, consists of a five-step process providing explicit key authentication.
Daniel Brown, Neal Koblitz, Jason LeGrow
ePrint Report
In a recent eprint, Rahman and Shpilrain proposed a Diffie-Hellman style key exchange based on a semidirect product of $n \times n$ matrices over a finite field. We show that, using public information, an adversary can recover the agreed-upon secret key by solving a system of $n^2$ linear equations.
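The attack reduces to linear algebra over a finite field. The following generic sketch (not the authors' code) recovers an unknown vector x from a system A·x = b mod p by Gaussian elimination, assuming the equations have already been extracted from the public exchange.

```python
def solve_mod_p(A, b, p):
    """Solve A x = b over GF(p) by Gauss-Jordan elimination; A is square and invertible."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]      # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[piv] = M[piv], M[col]                   # pivot swap
        inv = pow(M[col][col], -1, p)                     # modular inverse (Python 3.8+)
        M[col] = [(v * inv) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] % p:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[col][c]) % p for c in range(n + 1)]
    return [M[i][n] for i in range(n)]

# Tiny example over GF(101): recover x = (3, 7) from A x = b.
p = 101
A = [[2, 5], [1, 9]]
x = [3, 7]
b = [sum(a * xi for a, xi in zip(row, x)) % p for row in A]
assert solve_mod_p(A, b, p) == x
```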
Gregor Haas, Seetal Potluri, Aydin Aysu
ePrint Report
This paper proposes the first cache timing side-channel attacks on one of Apple's mobile devices. Utilizing a recent, permanent exploit named checkm8, we reverse-engineered Apple's BootROM and created a powerful toolkit for running arbitrary hardware security experiments on Apple's in-house designed ARM systems-on-a-chip (SoCs). We integrate two additional open-source tools to enhance our own toolkit, further increasing its capability for hardware security research. Using this toolkit, which is a core contribution of our work, we then implement both time-driven and access-driven cache timing attacks as proof-of-concept illustrations. In both cases, we propose statistical innovations which further the state-of-the-art in cache timing attacks. We find that our access-driven attack, at best, can reduce the security of OpenSSL AES-128 to merely 25 bits, while our time-driven attack (with a much weaker adversary) can reduce it to 48 bits. We also show that access-driven attacks on the A10 which do not use our statistical improvements are unable to deduce the key, and that our statistical technique reduces the number of traces needed by typical time-driven attacks by 21.62 million.
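As a generic illustration of the time-driven setting only (the classic Bernstein-style profiling statistic, not the paper's statistical improvements), the sketch below builds per-plaintext-byte timing profiles and ranks key-byte guesses by correlation. The arrays `timings` and `plaintexts` are assumed to come from a measurement toolkit and every byte value is assumed to occur in the traces.

```python
import numpy as np

def timing_profile(timings, plaintexts, pos):
    """Centered mean encryption time for each value of plaintext byte `pos`.

    timings: 1-D array of per-encryption times; plaintexts: (N, 16) byte array.
    """
    prof = np.array([timings[plaintexts[:, pos] == v].mean() for v in range(256)])
    return prof - prof.mean()

def rank_key_byte(profile_known_key, profile_attack, key_byte_known):
    """Rank guesses k for the attacked device's key byte.

    For a first-round T-table lookup indexed by p XOR k, re-indexing the known-key
    profile by (k XOR key_byte_known) should align it with the attack profile.
    """
    scores = []
    for k in range(256):
        shift = np.arange(256) ^ (k ^ key_byte_known)
        scores.append(np.dot(profile_attack, profile_known_key[shift]))
    return np.argsort(scores)[::-1]          # best guess first
```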
Andreas Wiemers, Johannes Mittmann
ePrint Report
Recent publications consider side-channel attacks against the key schedule of the Data Encryption Standard (DES). These publications identify a leakage model depending on the XOR of register values in the DES key schedule. Building on this leakage model, we first revisit a discrete model which assumes that the Hamming distances between subsequent round keys leak without error. We analyze this model formally and provide theoretical explanations for observations made in previous works. Next we examine a continuous model which considers more points of interest and also takes noise into account. The model gives rise to an evaluation function for key candidates and an associated notion of key ranking. We develop an algorithm for enumerating key candidates up to a desired rank which is based on the Fincke–Pohst lattice point enumeration algorithm. We derive information-theoretic bounds and estimates for the remaining entropy and compare them with our experimental results. We apply our attack to side-channel measurements of a security controller. Using our enumeration algorithm we are able to significantly improve the results reported previously for the same measurement data.
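The leakage models in question assign to each pair of consecutive round keys the Hamming distance of the register holding them. The sketch below (plain Python; the 48-bit round keys are assumed to come from any DES key-schedule implementation, which is not reproduced here) computes the noise-free leakage vector of the discrete model and a Gaussian-noise version matching the continuous model.

```python
import random

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def hd_leakage(round_keys):
    """Noise-free Hamming-distance leakage between consecutive round keys.

    round_keys: list of 48-bit integers K1..K16 from the DES key schedule.
    Returns [HW(K1 ^ K2), HW(K2 ^ K3), ...].
    """
    return [hamming_weight(a ^ b) for a, b in zip(round_keys, round_keys[1:])]

def noisy_leakage(round_keys, sigma, rng):
    """Continuous model: the same leakage with additive Gaussian noise."""
    return [l + rng.gauss(0.0, sigma) for l in hd_leakage(round_keys)]

# Usage sketch with hypothetical (random) round keys.
rng = random.Random(0)
rks = [rng.getrandbits(48) for _ in range(16)]
print(hd_leakage(rks))
print(noisy_leakage(rks, sigma=1.0, rng=rng))
```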
James Howe, Thomas Prest, Daniel Apon
ePrint Report
Post-quantum cryptography has undergone a Cambrian explosion in the last decade. What started as a very theoretical and mathematical area has now evolved into a sprawling research field, complete with side-channel resistant embedded implementations, large-scale deployment tests and standardization efforts. This study systematizes the current state of knowledge on post-quantum cryptography. Compared to existing studies, we adopt a transversal point of view and center our study around three areas: (i) paradigms, (ii) implementation, (iii) deployment. Our point of view allows us to cast almost all classical and post-quantum schemes into just a few paradigms. We highlight trends, common methodologies, pitfalls to look for, and recurrent challenges.
Aein Rezaei Shahmirzadi, Amir Moradi
ePrint Report
Masking schemes are among the most popular countermeasures against Side-Channel Analysis (SCA) attacks. Realizing masked implementations in hardware faces several difficulties, including dealing with glitches. Threshold Implementation (TI) is known as the first strategy with provable security in the presence of glitches. In addition to the desired security order $d$, TI requires the minimum number of shares to also depend on the algebraic degree of the target function. This may lead to unaffordable implementation costs for higher orders. For example, at least five shares are required to protect the smallest nonlinear function against second-order attacks. By cutting such a dependency, the successor schemes are able to achieve the same security level with just $d+1$ shares, at the cost of a high demand for fresh randomness, particularly at higher orders. In this work, we provide a methodology to realize second-order glitch-extended probing-secure implementations of a group of quadratic functions with three shares and no fresh randomness. This allows us to construct second-order secure implementations of several cryptographic primitives with a very limited number of fresh masks, including Keccak, SKINNY, Midori, PRESENT, and PRINCE.
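For context, the sketch below shows the standard first-order DOM-style masked AND with $d+1=2$ shares, which consumes one fresh random bit per nonlinear gate; in hardware, the partial products would additionally be registered to stop glitch propagation. The paper's contribution, a three-share second-order construction for quadratic functions with no fresh randomness at all, is not reproduced here.

```python
import secrets

def share2(x):
    """Split a bit x into 2 Boolean shares."""
    r = secrets.randbelow(2)
    return (r, x ^ r)

def masked_and(a_sh, b_sh):
    """First-order DOM-indep AND: shares of a and b -> 2 shares of a & b.

    One fresh random bit z refreshes the cross-domain products; without it,
    the cross terms would leak a joint function of both inputs.
    """
    a0, a1 = a_sh
    b0, b1 = b_sh
    z = secrets.randbelow(2)                  # the fresh mask the paper eliminates
    c0 = (a0 & b0) ^ ((a0 & b1) ^ z)
    c1 = (a1 & b1) ^ ((a1 & b0) ^ z)
    return (c0, c1)

# Correctness check: the output shares recombine to a AND b.
for a in (0, 1):
    for b in (0, 1):
        c0, c1 = masked_and(share2(a), share2(b))
        assert c0 ^ c1 == (a & b)
```

The cost the paper targets is visible here: every nonlinear gate burns fresh randomness, which dominates the area and throughput of higher-order masked hardware.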

09 April 2021

Oleksiy Lisovets, David Knichel, Thorben Moos, Amir Moradi
ePrint Report
In recent years, smartphones have become an increasingly important storage facility for personal sensitive data, ranging from photos and credentials to financial and medical records such as credit card details and health conditions. Clearly, it is critical to secure this information and provide access only to the genuine, authenticated user. Smartphone vendors have already taken exceptional care to protect user data by means of various software and hardware security features, such as code signing, an authenticated boot chain, dedicated co-processors and integrated cryptographic engines with hardware-fused keys. Despite these obstacles, adversaries have successfully broken through various software protections in the past, leaving only the hardware as the last standing barrier between the attacker and user data. In this work, we build upon existing software vulnerabilities and break through this final barrier by performing the first publicly reported physical Side-Channel Analysis (SCA) attack on an iPhone in order to extract the hardware-fused device-specific User Identifier (UID) key. This key – once at hand – allows the adversary to perform an offline brute-force attack on the user passcode employing an optimized and scalable implementation of the Key Derivation Function (KDF) on a Graphics Processing Unit (GPU) cluster. Once the passcode is revealed, the adversary has full access to all user data stored on the device and possibly in the cloud. As the software exploit enables acquisition and processing of hundreds of millions of traces, this work further shows that an attacker able to query arbitrarily many chosen-data encryption/decryption requests is a realistic model, even for compact systems with advanced software protections, and it emphasizes the need for assessing resilience against SCA for a very high number of traces.
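To illustrate the final offline step in spirit only (the real attack targets Apple's UID-keyed KDF on a GPU cluster; this sketch substitutes PBKDF2 from the Python standard library, a placeholder device key, and a deliberately low iteration count), brute-forcing a numeric passcode once the device key is known reduces to the following loop.

```python
import hashlib
from itertools import product

def derive(passcode: str, device_key: bytes) -> bytes:
    # Stand-in KDF: PBKDF2-HMAC-SHA256 with the extracted device key as salt.
    # Real passcode KDFs are tuned to be far more expensive per guess.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_key, 1_000)

def brute_force(target: bytes, device_key: bytes, digits: int = 4):
    """Try all numeric passcodes of the given length; trivially parallelizable."""
    for combo in product("0123456789", repeat=digits):
        guess = "".join(combo)
        if derive(guess, device_key) == target:
            return guess
    return None

device_key = b"\x00" * 32                   # placeholder for the extracted UID key
target = derive("1337", device_key)         # value the attacker would have recovered
assert brute_force(target, device_key) == "1337"
```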

08 April 2021

Deevashwer Rathee, Mayank Rathee, Rahul Kranti Kiran Goli, Divya Gupta, Rahul Sharma, Nishanth Chandran, Aseem Rastogi
ePrint Report
Complex machine learning (ML) inference algorithms like recurrent neural networks (RNNs) use standard functions from math libraries like exponentiation, sigmoid, tanh, and reciprocal of square root. Although prior work on secure 2-party inference provides specialized protocols for convolutional neural networks (CNNs), existing secure implementations of these math operators rely on generic 2-party computation (2PC) protocols that suffer from high communication. We provide new specialized 2PC protocols for math functions that crucially rely on lookup tables and mixed bitwidths to address this performance overhead; our protocols for math functions communicate up to 423x less data than prior work. Some of the mixed-bitwidth operations used by our math implementations are (zero and signed) extensions, different forms of truncations, multiplication of operands of mixed bitwidths, and digit decomposition (a generalization of bit decomposition to larger digits). For each of these primitive operations, we construct specialized 2PC protocols that are more communication-efficient than generic 2PC and can be of independent interest. Furthermore, our math implementations are numerically precise, which ensures that the secure implementations preserve the model accuracy of cleartext inference. We build on top of our novel protocols to build SIRNN, a library for end-to-end secure 2-party DNN inference, that provides the first secure implementations of an RNN operating on time series sensor data, an RNN operating on speech data, and a state-of-the-art ML architecture that combines CNNs and RNNs for identifying all heads present in images. Our evaluation shows that SIRNN achieves up to three orders of magnitude of performance improvement when compared to inference of these models using an existing state-of-the-art 2PC framework.
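To give a flavor of the lookup-table approach for math functions, here is a cleartext fixed-point sketch (not the paper's 2PC protocol) that tabulates sigmoid at low bitwidth and evaluates it by table lookup; the constants and helper names are illustrative choices, and the point is the kind of cleartext baseline whose accuracy a secure implementation is meant to preserve.

```python
import math

FRAC_BITS = 12                      # fixed-point scale 2^-12
IN_LO, IN_HI = -8.0, 8.0            # sigmoid is essentially 0/1 outside this range
TABLE_BITS = 10                     # 1024-entry lookup table

STEP = (IN_HI - IN_LO) / (1 << TABLE_BITS)
TABLE = [round((1.0 / (1.0 + math.exp(-(IN_LO + i * STEP)))) * (1 << FRAC_BITS))
         for i in range(1 << TABLE_BITS)]

def fix(x: float) -> int:
    """Encode a real number as a fixed-point integer."""
    return round(x * (1 << FRAC_BITS))

def sigmoid_lut(x_fix: int) -> int:
    """Fixed-point sigmoid via table lookup, clamping outside [IN_LO, IN_HI]."""
    x = x_fix / (1 << FRAC_BITS)
    if x <= IN_LO:
        return 0
    if x >= IN_HI:
        return 1 << FRAC_BITS
    return TABLE[int((x - IN_LO) / STEP)]

# Maximum absolute error over a sweep stays around 0.4% of the output range.
err = max(abs(sigmoid_lut(fix(x / 100)) / (1 << FRAC_BITS)
              - 1.0 / (1.0 + math.exp(-x / 100)))
          for x in range(-800, 800))
print(err)
```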
Gideon Samid
ePrint Report
In the current crypto paradigm a single secret key transforms a plaintext into a ciphertext and vice versa, or at most a different key performs the reverse action. Attackers exposed to the ciphertext hammer it to extract that single key and the plaintext. This paradigm may be challenged with an alternate setup: using a particular crypto algorithm, there is an infinite number of keys that are perfectly interchangeable -- each has the same effect. Nonetheless, they are hard to find. And unlike regular cryptography, the best an attacker can hope for using this new "Family Key Cryptography" is to identify the entire infinitely large family of keys, not the actual key that executed the cryptographic action. This very fact is a cornerstone for a host of applications, mostly still to be unfolded, e.g.: (1) Community Cryptography, where each member has a different key, but the community will encrypt and decrypt as if sharing the same key; (2) "Forever Key Cryptography": crashing Shannon's limit, the Forever Key strategy will allow a single finite key to last indefinitely. The shared secret key will be used to derive a succession of operating keys, which will be replaced before they are compromised. Since any cryptanalysis of usage will end up with an infinite list of key candidates, there will be an equal number of candidates for the shared "Forever Key", and thus there will be no erosion in the secrecy of the Forever Key regardless of its level of use. The very idea of an infinite number of interchangeable keys is so fundamentally different that most of its applications are still unknown.
Masayuki Abe, Miguel Ambrona, Andrej Bogdanov, Miyako Ohkubo, Alon Rosen
ePrint Report
Proofs of partial knowledge demonstrate the possession of certain subsets of witnesses for a given collection of statements $x_1,\dots,x_n$. Cramer, Damg{\aa}rd, and Schoenmakers (CDS) built proofs of partial knowledge, given ``atomic'' protocols for individual statements $x_i$, by having the prover randomly secret-share the verifier's challenge and use the shares as challenges for the atomic protocols. This simple and highly influential transformation has been used in numerous applications, ranging from anonymous credentials to ring signatures.

We consider what happens if, instead of using the shares directly as challenges, the prover first hashes them. We show that this elementary enhancement can result in significant benefits:
- the proof contains a single atomic transcript per statement $x_i$,
- it suffices that the atomic protocols are $\kappa$-special sound for $\kappa \geq 2$,
- when compiled to a signature scheme using the Fiat-Shamir heuristic, its unforgeability can be proved in the non-programmable random oracle model.
None of the above features is satisfied by the CDS transformation.
Animesh Chhotaray, Thomas Shrimpton
ePrint Report
Design-hiding techniques are a central piece of academic and industrial efforts to protect electronic circuits from being reverse-engineered. However, these techniques have lacked a principled foundation to guide their design and security evaluation, leading to a long line of broken schemes. In this paper, we begin to lay this missing foundation.

We establish formal syntax for design-hiding (DH) schemes, a cryptographic primitive that encompasses all known design-stage methods to hide the circuit that is handed to a (potentially adversarial) foundry for fabrication. We give two security notions for this primitive: function recovery (FR) and key recovery (KR). The former captures the ostensible goal of design-hiding methods, namely preventing reverse-engineering of the circuit's functionality, but most prior work has focused on the latter. We then present the first provably (FR,KR)-secure DH scheme, $OneChaff_{hd}$. A side-benefit of our security proof is a framework for analyzing a broad class of new DH schemes. We finish by unpacking our main security result, to provide parameter-setting guidance.
Chao Sun, Thomas Espitau, Mehdi Tibouchi, Masayuki Abe
ePrint Report
In the past 30 years, lattice reduction has proved to be a powerful tool of public-key cryptanalysis. Since the advent of the Hidden Number Problem, there has been extensive study of attacks on (EC)DSA with nonce leakage. While lattice attacks require only a few signatures, they cannot deal with small nonce bias, in contrast to Bleichenbacher's attack. Prior to this work, it was unknown how to utilize more signatures to improve lattice attacks on (EC)DSA. In this paper, we propose several approaches to improve lattice attacks. The key idea is that we can guess some bits of the secret key (or the nonces) and modify the standard lattice to increase its volume, thus making the lattice attack much easier. Besides, we observe that by filtering some specific signatures we are able to modify the lattice, so we can collect a large number of signatures and construct a lattice that is much easier to attack. With a combination of these techniques, we are able to improve lattice attacks on (EC)DSA. On the one hand, we are able to attack (EC)DSA with a 160-bit modulus (and other moduli as well) and 2-bit nonce leakage within $2^{15}$ BKZ-30 operations using 90 signatures. On the other hand, with $2^{27}$ signatures available, we are able to attack 160-bit (EC)DSA with 2-bit leakage in just one BKZ-30 operation.

As a second contribution, we give an explanation for several questions left unexplained in previous works. It was observed that SVP approaches (Kannan embedding) always outperform CVP approaches (nearest plane) and that the lattice attack is very sensitive to the Kannan embedding factor, but these questions were not discussed in previous works. We give an explanation for completeness.

Last, we carry out some experiments on the TPM-Fail dataset. While the original attack utilizes around 40000 signatures, with a combination of our methods we are able to recover the secret with only 800 signatures available.
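For readers unfamiliar with the standard construction, the sketch below builds a textbook hidden-number-problem lattice for (EC)DSA nonces with known (zero) top bits; the function name and scaling are illustrative, and none of the paper's guessing, filtering, or volume-modification tricks are included. Reducing the basis with an external LLL/BKZ implementation (e.g., fpylll or SageMath) is expected to reveal the secret key in a short vector.

```python
def hnp_lattice(t, u, q, l):
    """Basis (list of rows) of the hidden-number-problem lattice from nonce leakage.

    Each signature yields a relation  k_i = t_i * x + u_i  (mod q), where the
    nonce k_i satisfies k_i < q / 2^l (its l leaked top bits are zero or have
    been subtracted out) and x is the secret key.
    """
    m = len(t)
    B = 1 << l                                           # rescales the small nonces
    rows = [[0] * i + [q * B] + [0] * (m + 1 - i) for i in range(m)]
    rows.append([ti * B for ti in t] + [1, 0])           # row encoding the secret x
    rows.append([ui * B for ui in u] + [0, q])           # row encoding the constants u_i
    # Reducing `rows` with LLL/BKZ should expose a short vector of the form
    # (B*k_1, ..., B*k_m, x, q), up to sign, from which x can be read off.
    return rows
```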
Veronika Kuchta, Amin Sakzad, Damien Stehle, Ron Steinfeld, Shi-Feng Sun
ePrint Report
We introduce a new technique called `Measure-Rewind-Measure' (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the `double-sided' quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous `impossibility result' of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
Yuncong Hu, Kian Hooshmand, Harika Kalidhindi, Seung Jin Yang, Raluca Ada Popa
ePrint Report
Transparency logs are designed to help users audit untrusted servers. For example, Certificate Transparency (CT) enables users to detect when a compromised Certificate Authority (CA) has issued a fake certificate. Practical state-of-the-art transparency log systems, however, suffer from high monitoring costs when used for low-latency applications. To reduce monitoring costs, such systems often require users to wait an hour or more for their updates to take effect, inhibiting low-latency applications. We propose $\text{Merkle}^2$, a transparency log system that supports both efficient monitoring and low-latency updates. To achieve this goal, we construct a new multi-dimensional, authenticated data structure that nests two types of Merkle trees, hence the name of our system, $\text{Merkle}^2$. Using this data structure, we then design a transparency log system with efficient monitoring and lookup protocols that enables low-latency updates. In particular, all the operations in $\text{Merkle}^2$ are independent of update intervals and are (poly)logarithmic in the number of entries in the log. $\text{Merkle}^2$ not only has excellent asymptotics when compared to prior work, but is also efficient in practice. Our evaluation shows that $\text{Merkle}^2$ propagates updates in as little as 1 second and can support 100× more users than state-of-the-art transparency logs.
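As background for the nested construction, the sketch below is a plain single Merkle tree over a list of log entries with inclusion proofs, using SHA-256 from the standard library and a power-of-two number of leaves for simplicity; $\text{Merkle}^2$ nests a second tree of this kind inside the nodes of the first, which is not reproduced here.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Levels of a Merkle tree, leaves first (power-of-two leaf count assumed)."""
    levels = [[H(b"\x00" + leaf) for leaf in leaves]]          # domain-separated leaf hashes
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(b"\x01" + prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, index):
    """Sibling hashes from leaf `index` up to the root (an inclusion proof)."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    node = H(b"\x00" + leaf)
    for sibling in proof:
        node = H(b"\x01" + node + sibling) if index % 2 == 0 else H(b"\x01" + sibling + node)
        index //= 2
    return node == root

entries = [f"entry-{i}".encode() for i in range(8)]
levels = build_tree(entries)
root = levels[-1][0]
assert verify(root, entries[3], 3, prove(levels, 3))
```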
Zezhou Hou, Jiongjiong Ren, Shaozhen Chen
ePrint Report
Cryptanalysis based on deep learning has become a hotspot in the international cryptography community since it was first proposed. The key point of differential cryptanalysis based on deep learning is to find a neural differential distinguisher covering more rounds or having higher probability. Therefore, it is important to study how to improve the accuracy and the number of rounds of neural differential distinguishers. In this paper, we design SAT-based algorithms to find a good input difference so that the accuracy of the neural distinguisher can be made as high as possible. As applications, we search for and obtain neural differential distinguishers for 9-round SIMON32/64, 10-round SIMON48/96 and 11-round SIMON64/128. For SIMON48/96, we choose $(0x0,0x100000)$ as the input difference and train 9-round and 10-round neural distinguishers of SIMON48/96. In addition, with an automatic search based on SAT, we extend the neural 9-round and 10-round distinguishers to 11-round and 12-round distinguishers by prepending the optimal 2-round differential transition $(0x400000,0x100001) \xrightarrow{2^{-4}}\left( 0x0,0x100000 \right)$. Based on the 11-round and 12-round neural distinguishers, we mount a 14-round key recovery attack on SIMON48/96. Our attack takes about 1550s to recover the final subkey. Its average time complexity is no more than $2^{22.21}$ 14-round encryptions of SIMON48/96, and the data complexity is about $2^{12.8}$. Similarly, we perform a 13-round key recovery attack on SIMON32/64 with input difference $(0x0,0x80)$, with a success rate of more than 90$\%$. It takes about 23s to complete an attack, with data complexity no more than $2^{12.5}$ and time complexity no more than $2^{16.4}$. It is worth mentioning that the attacks are practical for 13-round SIMON32/64 and 14-round SIMON48/96.
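For readers who want to reproduce the data-generation step, the sketch below implements the public SIMON32/64 round function and produces Gohr-style labeled pairs for a chosen input difference. Random independent round keys stand in for the real key schedule, and the SAT-based difference search and the neural network itself are not shown; `make_pairs` and its parameters are illustrative.

```python
import random

WORD = 16
MASK = (1 << WORD) - 1

def rol(x, r):
    return ((x << r) | (x >> (WORD - r))) & MASK

def simon_rounds(x, y, round_keys):
    """SIMON32 Feistel rounds: (x, y) -> (y ^ f(x) ^ k, x), f(x) = (x<<<1 & x<<<8) ^ x<<<2."""
    for k in round_keys:
        x, y = y ^ ((rol(x, 1) & rol(x, 8)) ^ rol(x, 2)) ^ k, x
    return x, y

def make_pairs(n_samples, n_rounds, diff=(0x0, 0x80), seed=0):
    """Labeled data: label 1 = pair with the chosen input difference, 0 = random pair."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        keys = [rng.getrandbits(WORD) for _ in range(n_rounds)]   # stand-in for the key schedule
        x0, y0 = rng.getrandbits(WORD), rng.getrandbits(WORD)
        label = rng.getrandbits(1)
        if label:
            x1, y1 = x0 ^ diff[0], y0 ^ diff[1]
        else:
            x1, y1 = rng.getrandbits(WORD), rng.getrandbits(WORD)
        data.append((simon_rounds(x0, y0, keys), simon_rounds(x1, y1, keys), label))
    return data

print(make_pairs(2, 9)[0])
```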
Gang Wang
ePrint Report
Sharding technology is becoming a promising candidate to address the scalability issues in blockchain. The key concept behind sharding is to partition the network state into multiple distinct, smaller committees, each of which handles a disjoint set of transactions, to leverage the capability of parallel processing. However, when introducing sharding technology to blockchain, several key challenges need to be resolved, such as security and heterogeneity among the participating nodes. This paper introduces RepShard, a reputation-based blockchain sharding scheme that aims to achieve both linearly scaling efficiency and system security simultaneously. RepShard adopts a two-layer hierarchical chain structure, consisting of a reputation chain and independent transaction chains. Each transaction chain is maintained within its shard to record transactions, while the reputation chain is maintained by all shards to update the reputation score of each participating node. We leverage a novel reputation scheme to record each participating node's integrated and valid contribution to the system, in which we consider the heterogeneity of participating nodes (e.g., computational resources). The reputation score, used in the sharding and leader election processes, maintains the balance and security of each shard. RepShard relies on verifiable relay transactions for cross-shard transactions to ensure consistency between distinct shards. By integrating reputation into the sharding protocol, our scheme offers both scalability and security at the same time.