IACR News
Here you can see all recent updates to the IACR webpage.
13 April 2021
Darmstadt, Germany, 8 October 2021
Event Calendar
Event date: 8 October 2021
Submission deadline: 20 July 2021
Notification: 30 August 2021
12 April 2021
Cesar Pereida García, Sampo Sovio
ePrint Report
Ed25519 has significant performance benefits compared to ECDSA using Weierstrass curves such as NIST P-256, and is therefore considered a good digital signature algorithm, especially for low-performance IoT devices. However, such devices often have very limited resources, so implementations for them need to be as small and as performant as possible while remaining secure. In this paper we describe a scenario in which an obvious strategy for aggressively optimizing an Ed25519 implementation for code size leads to a small memory footprint that is functionally correct but vulnerable to side-channel attacks. This strategy serves as an example of the aggressive optimizations that might be considered by cryptography engineers, developers, and practitioners unfamiliar with the power of Side-Channel Analysis (SCA). As a solution to the flawed implementation example, we use a computer-aided cryptography tool that generates formally verified finite field arithmetic to produce two secure Ed25519 implementations fulfilling different size requirements. After benchmarking these implementations and comparing them to other widely used implementations, our results show that computer-aided cryptography is capable of generating competitive code in terms of security, speed, and size.
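To make the pitfall concrete, here is a minimal sketch (not code from the paper; modular exponentiation over the Ed25519 base-field prime stands in for the group operation, and all names are hypothetical) contrasting the kind of secret-dependent branch a size optimization can keep with a ladder that performs the same operation sequence for every key bit:

    # Hypothetical sketch: modular exponentiation as a stand-in for the
    # Ed25519 group operation; not the paper's code.
    P = 2**255 - 19  # the Ed25519 base-field prime, used as a toy modulus

    def scalar_mul_leaky(scalar, x):
        """Square-and-multiply: small code, but the multiply runs only for
        1-bits of the secret scalar, so timing/power leaks the key."""
        acc = 1
        for bit in bin(scalar)[2:]:
            acc = (acc * acc) % P
            if bit == "1":          # secret-dependent branch: the SCA target
                acc = (acc * x) % P
        return acc

    def scalar_mul_ladder(scalar, x, bits=255):
        """Montgomery-ladder style: identical operation sequence per bit.
        In production the swap must be a branch-free constant-time cswap."""
        r0, r1 = 1, x
        for i in range(bits - 1, -1, -1):
            bit = (scalar >> i) & 1
            if bit:
                r0, r1 = r1, r0
            r1 = (r0 * r1) % P
            r0 = (r0 * r0) % P
            if bit:
                r0, r1 = r1, r0
        return r0

    assert scalar_mul_leaky(0x1234567, 5) == scalar_mul_ladder(0x1234567, 5)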
Benny Applebaum, Oded Nir
ePrint Report
A secret-sharing scheme allows a dealer to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$.
The collection of authorized/unauthorized sets can be captured by a monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$.
In this paper, we focus on monotone functions all of whose min-terms are sets of size $a$, and on their duals -- monotone functions whose max-terms are of size $b$. We refer to these classes as $(a,n)$-upslices and $(b,n)$-downslices, and note that these natural families correspond to monotone $a$-regular DNFs and monotone $(n-b)$-regular CNFs. We derive the following results (a toy sketch of the folklore DNF-based sharing that these bounds improve on follows the list).
1. (General downslices) Every downslice can be realized with total share size of $1.5^{n+o(n)}<2^{0.585 n}$. Since every monotone function can be cheaply decomposed into $n$ downslices, we obtain a similar result for general access structures, improving the previously known $2^{0.637n+o(n)}$ complexity of Applebaum, Beimel, Nir and Peter (STOC 2020). We also achieve a minor improvement in the exponent of linear secret-sharing schemes.
2. (Random mixture of upslices) Following Beimel and Farras (TCC 2020), who studied the complexity of random DNFs with constant-size terms, we consider the following general distribution $F$ over monotone DNFs: for each width value $a\in [n]$, uniformly sample $k_a$ monotone terms of size $a$, where $k=(k_1,\ldots,k_n)$ is an arbitrary vector of non-negative integers. We show that, except with exponentially small probability, $F$ can be realized with share size of $2^{0.5 n+o(n)}$ and can be linearly realized with an exponent strictly smaller than $2/3$. Our proof also provides a candidate distribution for ``exponentially-hard'' access structures. We use our results to explore connections between several seemingly unrelated questions about the complexity of secret-sharing schemes, such as worst-case vs. average-case, linear vs. non-linear, and primal vs. dual access structures. We prove that, in at least one of these settings, there is a significant gap in secret-sharing complexity.
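As background for these bounds, the sketch below shows the folklore baseline they improve on (a toy Python illustration, not the paper's construction): the secret is shared additively and independently for each minterm of a monotone DNF, so total share size grows with the number and the width of the terms.

    # Toy baseline, not the paper's scheme: one additive sharing per minterm.
    import secrets

    def share_dnf(secret, minterms, modulus):
        """shares[p] lists the subshares party p holds, one per minterm."""
        shares = {p: [] for term in minterms for p in term}
        for term in minterms:
            parties = sorted(term)
            subs = [secrets.randbelow(modulus) for _ in parties[:-1]]
            subs.append((secret - sum(subs)) % modulus)
            for p, s in zip(parties, subs):
                shares[p].append((tuple(parties), s))
        return shares

    def reconstruct(term, shares, modulus):
        key = tuple(sorted(term))
        return sum(s for p in term for t, s in shares[p] if t == key) % modulus

    # A (2,4)-upslice: every minterm has size a = 2.
    minterms = [{1, 2}, {2, 3}, {3, 4}]
    shares = share_dnf(42, minterms, modulus=2**61 - 1)
    assert reconstruct({2, 3}, shares, 2**61 - 1) == 42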
Danilo Gligoroski
ePrint Report
Non-commutative and non-associative algebraic structures known as entropic groupoids, which satisfy the ``palintropic'' property, i.e., $x^{\mathbf{A} \mathbf{B}} = (x^{\mathbf{A}})^{\mathbf{B}} = (x^{\mathbf{B}})^{\mathbf{A}} = x^{\mathbf{B} \mathbf{A}}$, were proposed by Etherington in the 1940s. These relations are exactly the relations of the Diffie-Hellman key exchange protocol over groups. The arithmetic of non-associative power indices, known as Logarithmetic, was also proposed by Etherington and later developed by others in the 1950s-1970s. However, as far as we know, no one has ever proposed a succinct notation for exponentially large non-associative power indices that admits fast exponentiation, analogous to the fast exponentiation achieved in ordinary arithmetic via consecutive raising to powers of two.
In this paper, we define ringoid algebraic structures $(G, \boxplus, *)$ where $(G, \boxplus)$ is an Abelian group and $(G, *)$ is a non-commutative and non-associative groupoid with an entropic and palintropic subgroupoid which is a quasigroup, and we name these structures entropoids. We further define succinct notation for non-associative bracketing patterns and propose algorithms for fast exponentiation with those patterns.
Next, by analogy with the developed cryptographic theory of discrete logarithm problems, we define several hard problems in entropoid-based cryptography, such as the Discrete Entropoid Logarithm Problem (DELP), the Computational Entropoid Diffie-Hellman Problem (CEDHP), and the Decisional Entropoid Diffie-Hellman Problem (DEDHP). We conjecture that DEDHP is hard in Sylow $q$-subquasigroups. Next, we instantiate an entropoid Diffie-Hellman key exchange protocol. Due to the non-commutativity and non-associativity, the entropoid-based cryptographic primitives are supposed to be resistant to quantum algorithms. At the same time, due to the proposed succinct notation for the power indices, the communication overhead in the entropoid-based Diffie-Hellman key exchange is very low: for 128 bits of security, 64 bytes in total are communicated in both directions, and for 256 bits of security, 128 bytes in total.
Our final contribution is two entropoid-based digital signature schemes. The schemes are constructed with the Fiat-Shamir transformation of an identification scheme whose security relies on a new hardness assumption: computing roots in finite entropoids is hard. If this assumption withstands the test of time, the first proposed signature scheme has excellent properties: for classical security levels between 128 and 256 bits, the public and private key sizes are between 32 and 64 bytes, and the signature sizes are between 64 and 128 bytes. The second signature scheme reduces finding roots in finite entropoids to computing discrete entropoid logarithms. In our opinion, this is a safer but more conservative design, and it pays the price of doubling the key sizes and the signature sizes.
We give a proof-of-concept implementation in SageMath 9.2 for all proposed algorithms and schemes in an appendix.
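To make the palintropic relation tangible, here is a toy Python sketch (not the paper's entropoid instantiation, nor its SageMath code): the linear operation $x * y = (ax + by) \bmod p$ is entropic, and powers built from iterated squaring commute in the order a Diffie-Hellman-style exchange needs. The constants are arbitrary, and this toy has none of the hardness claimed for entropoids.

    # Toy entropic (medial) groupoid: x * y = (a*x + b*y) mod p.
    p, a, b = 2**31 - 1, 123456789, 987654321

    def op(x, y):
        return (a * x + b * y) % p

    def square_power(x, k):
        """x raised to the power index 2^k under repeated squaring:
        one fixed bracketing pattern, computable in k operations."""
        for _ in range(k):
            x = op(x, x)
        return x

    g = 7
    alice, bob = 20, 31                 # secret power indices
    A = square_power(g, alice)          # Alice's public value
    B = square_power(g, bob)            # Bob's public value
    # Palintropy: applying the two power indices in either order agrees.
    assert square_power(B, alice) == square_power(A, bob)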
Coşku Acay, Rolph Recto, Joshua Gancher, Andrew C. Myers, Elaine Shi
ePrint Report
Modern distributed systems involve interactions between principals with limited trust, so cryptographic mechanisms are needed to protect confidentiality and integrity. At the same time, most developers lack the training to securely employ cryptography. We present Viaduct, a compiler that transforms high-level programs into secure, efficient distributed realizations. Viaduct's source language allows developers to declaratively specify security policies by annotating their programs with information flow labels. The compiler uses these labels to synthesize distributed programs that use cryptography efficiently while still defending the source-level security policy. The Viaduct approach is general, and can be easily extended with new security mechanisms.
Our implementation of the Viaduct compiler comes with an extensible runtime system that includes plug-in support for multiparty computation, commitments, and zero-knowledge proofs. We have evaluated the system on a set of benchmarks, and the results indicate that our approach is feasible and can use cryptography in efficient, nontrivial ways.
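The label-driven synthesis idea can be pictured with a small sketch (schematic Python, not Viaduct's actual label model, syntax, or API; hosts and labels are hypothetical): the compiler checks whether data may flow to a placement in the clear, and must fall back to cryptography when the check fails.

    # Schematic information-flow labels; not Viaduct's real label algebra.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Label:
        readers: frozenset   # who may learn the value (confidentiality)
        writers: frozenset   # who may have influenced it (integrity)

    def flows_to(src, dst):
        # Confidentiality: a flow must not add readers.
        # Integrity: a flow must not shrink the set of possible writers.
        return dst.readers <= src.readers and src.writers <= dst.writers

    alice_secret = Label(frozenset({"alice"}), frozenset({"alice"}))
    public_to_both = Label(frozenset({"alice", "bob"}),
                           frozenset({"alice", "bob"}))

    # Sending Alice's secret to both hosts in the clear violates the policy,
    # so a compiler in this style must synthesize cryptography (e.g., MPC).
    assert not flows_to(alice_secret, public_to_both)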
Chris Brzuska, Antoine Delignat-Lavaud, Christoph Egger, Cédric Fournet, Konrad Kohbrok, Markulf Kohlweiss
ePrint Report
We analyze the security of the TLS 1.3 key establishment protocol, as specified at the end of its rigorous standardization process. We define a core key schedule and reduce its security to concrete assumptions against an adversary that controls client and server configurations and adaptively chooses some of their keys. Our model supports all key derivations featured in the standard, including its negotiated modes and algorithms that combine an optional Diffie-Hellman exchange for forward secrecy with optional pre-shared keys supplied by the application or recursively established in prior sessions. We show that the output keys are secure as soon as any of their input key materials are. Our compositional, code-based proof makes use of state separation to yield concrete reductions despite the complexity of the key schedule. We also discuss (late) changes to the standard that would improve its robustness and simplify its analysis.
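For orientation, the key-schedule skeleton under analysis follows RFC 8446, Section 7.1: a chain of HKDF-Extract calls fed by the optional PSK and (EC)DHE secret, with Derive-Secret steps in between. The sketch below (Python, SHA-256 ciphersuites assumed, placeholder key material) shows that chain; it is a reading aid, not the paper's formal model.

    import hashlib, hmac

    HASH_LEN = 32  # SHA-256

    def hkdf_extract(salt, ikm):
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand_label(secret, label, context, length):
        full = b"tls13 " + label.encode()
        info = (length.to_bytes(2, "big")
                + len(full).to_bytes(1, "big") + full
                + len(context).to_bytes(1, "big") + context)
        out, block, i = b"", b"", 1
        while len(out) < length:
            block = hmac.new(secret, block + info + bytes([i]),
                             hashlib.sha256).digest()
            out += block
            i += 1
        return out[:length]

    def derive_secret(secret, label, messages):
        return hkdf_expand_label(secret, label,
                                 hashlib.sha256(messages).digest(), HASH_LEN)

    zeros = bytes(HASH_LEN)
    psk, ecdhe = zeros, b"\x01" * 32  # placeholder input key material

    early_secret = hkdf_extract(zeros, psk)
    handshake_secret = hkdf_extract(
        derive_secret(early_secret, "derived", b""), ecdhe)
    master_secret = hkdf_extract(
        derive_secret(handshake_secret, "derived", b""), zeros)
    # The paper's guarantee, informally: each output key above remains
    # secure as soon as ANY input key material feeding its chain is good.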
Michele Fabbrini
ePrint Report
In this paper I propose a new key agreement scheme that applies a well-known property of powers to a particular pair of elements of the cyclic group generated by a primitive root of a prime $p$. The model, whose security relies on the difficulty of computing discrete logarithms when $p$ is a safe prime, consists of a five-step process providing explicit key authentication.
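The abstract does not spell out the five steps, so the sketch below only shows the textbook setting the scheme builds on (toy parameters for illustration; real deployments use large safe primes such as the RFC 3526 groups): exponentiation in the cyclic group generated by a primitive root of a safe prime.

    import secrets

    p = 1019      # toy safe prime: p = 2*509 + 1, with 509 prime
    q = (p - 1) // 2
    g = 2         # a primitive root mod 1019

    alice = secrets.randbelow(q - 1) + 1     # Alice's secret exponent
    bob = secrets.randbelow(q - 1) + 1       # Bob's secret exponent
    A, B = pow(g, alice, p), pow(g, bob, p)  # exchanged public values
    # The "well-known property of powers": (g^a)^b = (g^b)^a.
    assert pow(B, alice, p) == pow(A, bob, p)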
Daniel Brown, Neal Koblitz, Jason LeGrow
ePrint Report
In a recent eprint, Rahman and Shpilrain proposed a Diffie-Hellman-style key exchange based on a semidirect product of $n \times n$ matrices over a finite field. We show that, using public information, an adversary can recover the agreed-upon secret key by solving a system of $n^2$ linear equations.
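The heart of such an attack is routine linear algebra over a finite field. The sketch below gives a generic Gaussian-elimination solver for that step (deriving the specific $n^2$ equations from the public transcript is the paper's contribution and is not reproduced here):

    def solve_mod_p(A, b, p):
        """Solve A x = b over GF(p) for square A; returns None if singular."""
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
        for col in range(n):
            pivot = next((r for r in range(col, n) if M[r][col] % p), None)
            if pivot is None:
                return None
            M[col], M[pivot] = M[pivot], M[col]
            inv = pow(M[col][col], -1, p)
            M[col] = [v * inv % p for v in M[col]]
            for r in range(n):
                if r != col and M[r][col]:
                    f = M[r][col]
                    M[r] = [(v - f * w) % p for v, w in zip(M[r], M[col])]
        return [M[r][n] for r in range(n)]

    # Toy check: recover x = (3, 5) from two equations mod 101.
    assert solve_mod_p([[2, 1], [1, 4]], [11, 23], 101) == [3, 5]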
Gregor Haas, Seetal Potluri, Aydin Aysu
ePrint Report
This paper proposes the first cache timing side-channel attacks on one of Apple's mobile devices. Utilizing a recent, permanent exploit named checkm8, we reverse-engineered Apple's BootROM and created a powerful toolkit for running arbitrary hardware security experiments on Apple's in-house designed ARM systems-on-a-chip (SoC). We integrate two additional open-source tools to enhance our own toolkit, further increasing its capability for hardware security research. Using this toolkit, which is a core contribution of our work, we then implement both time-driven and access-driven cache timing attacks as proofs of concept. In both cases, we propose statistical innovations which advance the state of the art in cache timing attacks. We find that our access-driven attack, at best, can reduce the security of OpenSSL AES-128 to merely 25 bits, while our time-driven attack (with a much weaker adversary) can reduce it to 48 bits. We also show that access-driven attacks on the A10 which do not use our statistical improvements are unable to deduce the key, and that our statistical technique reduces the number of traces needed by typical time-driven attacks by 21.62 million.
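For readers new to the area, the sketch below shows the classic time-driven statistic (in the style of Bernstein's AES timing attack) that such attacks start from; the paper's statistical innovations go beyond it, and nothing here is the authors' code. The XOR alignment assumes the first-round AddRoundKey structure of AES.

    from collections import defaultdict

    def timing_profile(samples):
        """samples: list of (plaintext_byte, cycles) pairs. Returns, per
        byte value, the mean timing's deviation from the global mean."""
        buckets = defaultdict(list)
        for byte, cycles in samples:
            buckets[byte].append(cycles)
        overall = sum(c for _, c in samples) / len(samples)
        return {b: sum(v) / len(v) - overall for b, v in buckets.items()}

    def best_key_byte(profile_known, profile_attacked):
        """Key-byte guess = XOR shift that best aligns the two profiles."""
        def corr(shift):
            return sum(profile_known.get(b ^ shift, 0.0)
                       * profile_attacked.get(b, 0.0) for b in range(256))
        return max(range(256), key=corr)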
Andreas Wiemers, Johannes Mittmann
ePrint Report
Recent publications consider side-channel attacks against the key schedule of the Data Encryption Standard (DES). These publications identify a leakage model depending on the XOR of register values in the DES key schedule.

Building on this leakage model, we first revisit a discrete model which assumes that the Hamming distances between subsequent round keys leak without error. We analyze this model formally and provide theoretical explanations for observations made in previous works.

Next we examine a continuous model which considers more points of interest and also takes noise into account. The model gives rise to an evaluation function for key candidates and an associated notion of key ranking. We develop an algorithm for enumerating key candidates up to a desired rank, which is based on the Fincke-Pohst lattice point enumeration algorithm. We derive information-theoretic bounds and estimates for the remaining entropy and compare them with our experimental results.

We apply our attack to side-channel measurements of a security controller. Using our enumeration algorithm we are able to significantly improve the results reported previously for the same measurement data.
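The discrete model's observable is easy to state in code. Below is a small sketch (the rotating register is a stand-in for the real DES key schedule, whose PC-1/PC-2 permutations are omitted): the attacker sees the Hamming distances between subsequent round keys and keeps only master-key candidates whose schedule reproduces them.

    def hamming_distance(x, y):
        return bin(x ^ y).count("1")

    def hd_trace(round_keys):
        """The noise-free observable of the discrete model."""
        return [hamming_distance(k0, k1)
                for k0, k1 in zip(round_keys, round_keys[1:])]

    def toy_schedule(master):
        """Stand-in schedule: 16 left-rotations of a 56-bit register."""
        keys, k = [], master
        for _ in range(16):
            k = ((k << 1) | (k >> 55)) & (2**56 - 1)
            keys.append(k)
        return keys

    def consistent(candidate, observed):
        """Enumeration filter: keep candidates matching the observation."""
        return hd_trace(toy_schedule(candidate)) == observed

    observed = hd_trace(toy_schedule(0xDEADBEEFCAFE))
    assert consistent(0xDEADBEEFCAFE, observed)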
James Howe, Thomas Prest, Daniel Apon
ePrint Report
Post-quantum cryptography has undergone a Cambrian explosion in the last decade. What started as a very theoretical and mathematical area has now evolved into a sprawling research field, complete with side-channel resistant embedded implementations, large-scale deployment tests, and standardization efforts. This study systematizes the current state of knowledge on post-quantum cryptography. Compared to existing studies, we adopt a transversal point of view and center our study around three areas: (i) paradigms, (ii) implementation, (iii) deployment. Our point of view allows us to cast almost all classical and post-quantum schemes into just a few paradigms. We highlight trends, common methodologies, and pitfalls to look for, as well as recurrent challenges.
Aein Rezaei Shahmirzadi, Amir Moradi
ePrint Report
Masking schemes are among the most popular countermeasures against Side-Channel Analysis (SCA) attacks. Realizing masked implementations in hardware faces several difficulties, including dealing with glitches. Threshold Implementation (TI) is known as the first strategy with provable security in the presence of glitches. In addition to the desired security order $d$, TI requires the minimum number of shares to depend also on the algebraic degree of the target function. This may lead to unaffordable implementation costs for higher orders. For example, at least five shares are required to protect the smallest nonlinear function against second-order attacks. By cutting this dependency, the successor schemes are able to achieve the same security level with just $d+1$ shares, at the cost of a high demand for fresh randomness, particularly at higher orders.

In this work, we provide a methodology to realize second-order glitch-extended probing-secure implementations of a group of quadratic functions with three shares and no fresh randomness. This allows us to construct second-order secure implementations of several cryptographic primitives with a very limited number of fresh masks, including Keccak, SKINNY, Midori, PRESENT, and PRINCE.
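As background for the share counts involved, the sketch below verifies the classic first-order, three-share TI of an AND gate (in the style of Nikova et al.); the paper's contribution, second-order security with three shares and no fresh randomness, relies on more involved constructions not reproduced here.

    from itertools import product

    def ti_and(x1, x2, x3, y1, y2, y3):
        """Each output share omits one input share index (non-completeness)."""
        z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of share 1
        z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # independent of share 2
        z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of share 3
        return z1, z2, z3

    # Correctness: the output shares recombine to x AND y for all sharings.
    for x1, x2, x3, y1, y2, y3 in product((0, 1), repeat=6):
        z1, z2, z3 = ti_and(x1, x2, x3, y1, y2, y3)
        assert z1 ^ z2 ^ z3 == (x1 ^ x2 ^ x3) & (y1 ^ y2 ^ y3)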
09 April 2021
Oleksiy Lisovets, David Knichel, Thorben Moos, Amir Moradi
ePrint Report
In recent years, smartphones have become an increasingly important storage facility for personal sensitive data, ranging from photos and credentials up to financial and medical records such as credit cards and a person's diseases. It is clearly critical to secure this information and to provide access only to the genuine, authenticated user. Smartphone vendors have already taken exceptional care to protect user data by means of various software and hardware security features, like code signing, an authenticated boot chain, a dedicated co-processor, and integrated cryptographic engines with hardware-fused keys. Despite these obstacles, adversaries have successfully broken through various software protections in the past, leaving only the hardware as the last standing barrier between the attacker and user data.

In this work, we build upon existing software vulnerabilities and break through this final barrier by performing the first publicly reported physical Side-Channel Analysis (SCA) attack on an iPhone in order to extract the hardware-fused device-specific User Identifier (UID) key. Once at hand, this key allows the adversary to perform an offline brute-force attack on the user passcode, employing an optimized and scalable implementation of the Key Derivation Function (KDF) on a Graphics Processing Unit (GPU) cluster. Once the passcode is revealed, the adversary has full access to all user data stored on the device and possibly in the cloud.

As the software exploit enables acquisition and processing of hundreds of millions of traces, this work further shows that an attacker able to query arbitrarily many chosen-data encryption/decryption requests is a realistic model, even for compact systems with advanced software protections, and emphasizes the need for assessing resilience against SCA for a very high number of traces.
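The shape of the enabled offline search is simple to sketch. Below, PBKDF2 over a placeholder key stands in for Apple's UID-entangled KDF, whose actual construction and parameters are not reproduced here; the point is only that, with the UID key in hand, the loop runs off-device at GPU speed.

    import hashlib

    uid_key = b"\x00" * 32   # placeholder for the extracted UID key

    def derive(passcode):
        # PBKDF2 as a stand-in; the real, UID-entangled KDF differs.
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), uid_key,
                                   iterations=10_000, dklen=32)

    target = derive("1234")  # pretend this was matched against the device

    def brute_force_4digit():
        for n in range(10_000):          # the full 4-digit passcode space
            guess = f"{n:04d}"
            if derive(guess) == target:
                return guess
        return None

    assert brute_force_4digit() == "1234"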
08 April 2021
Deevashwer Rathee, Mayank Rathee, Rahul Kranti Kiran Goli, Divya Gupta, Rahul Sharma, Nishanth Chandran, Aseem Rastogi
ePrint Report
Complex machine learning (ML) inference algorithms like recurrent neural networks (RNNs) use standard functions from math libraries like exponentiation, sigmoid, tanh, and reciprocal of square root.
Although prior work on secure 2-party inference provides specialized protocols for convolutional neural networks (CNNs), existing secure implementations of these math operators rely on generic 2-party computation (2PC) protocols that suffer from high communication. We provide new specialized 2PC protocols for math functions that crucially rely on lookup tables and mixed bitwidths to address this performance overhead; our protocols for math functions communicate up to 423x less data than prior work. Some of the mixed-bitwidth operations used by our math implementations are (zero and signed) extensions, different forms of truncation, multiplication of operands of mixed bitwidths, and digit decomposition (a generalization of bit decomposition to larger digits). For each of these primitive operations, we construct specialized 2PC protocols that are more communication-efficient than generic 2PC and can be of independent interest.
Furthermore, our math implementations are numerically precise, which ensures that the secure implementations preserve the model accuracy of cleartext inference. We build on top of our novel protocols to build SIRNN, a library for end-to-end secure 2-party DNN inference, that provides the first secure implementations of an RNN operating on time-series sensor data, an RNN operating on speech data, and a state-of-the-art ML architecture that combines CNNs and RNNs for identifying all heads present in images. Our evaluation shows that SIRNN achieves up to three orders of magnitude of performance improvement when compared to inference of these models using an existing state-of-the-art 2PC framework.
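Of the primitive operations listed above, digit decomposition is the easiest to picture. The cleartext sketch below shows the plain functionality (the paper's contribution is a communication-efficient 2PC protocol for it, which this does not attempt):

    def digit_decompose(x, bitwidth, d):
        """Little-endian d-bit digits of an unsigned bitwidth-bit value;
        d = 1 recovers ordinary bit decomposition."""
        assert 0 <= x < 2**bitwidth
        return [(x >> (i * d)) & (2**d - 1)
                for i in range((bitwidth + d - 1) // d)]

    def digit_compose(digits, d):
        return sum(dig << (i * d) for i, dig in enumerate(digits))

    x = 0xBEEF
    digits = digit_decompose(x, bitwidth=16, d=4)  # four 4-bit digits
    assert digits == [0xF, 0xE, 0xE, 0xB]
    assert digit_compose(digits, d=4) == x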
Gideon Samid
ePrint Report
In the current crypto paradigm a single secret key transforms a plaintext into a ciphertext and vice versa, or at most a different key performs the reverse action. Attackers exposed to the ciphertext hammer at it to extract that single key and the plaintext. This paradigm may be challenged with an alternate setup: using a particular crypto algorithm, there is an infinite number of keys that are perfectly interchangeable -- each has the same effect. Nonetheless, they are hard to find. And unlike regular cryptography, the best an attacker can hope for using this new "Family Key Cryptography" is to identify the entire, infinitely large family of keys, not the actual key that executed the cryptographic action. This very fact is a cornerstone for a host of applications, mostly still to be unfolded, e.g.: (1) Community Cryptography, where each member has a different key, but the community encrypts and decrypts as if sharing the same key; (2) "Forever Key Cryptography": crashing Shannon's limit, the Forever Key strategy allows a single finite key to last indefinitely. The shared secret key is used to derive a succession of operating keys, which are replaced before they are compromised. Since any cryptanalysis of usage ends up with an infinite list of key candidates, there is an equal number of candidates for the shared "Forever Key", and thus there is no erosion in the secrecy of the Forever Key regardless of its level of use. The very idea of an infinite number of interchangeable keys is so fundamentally different that most of its applications are still unknown.
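A toy sketch can illustrate interchangeability, though nothing more (this is not the author's construction, and finding the family below is trivial rather than hard): if a cipher consumes its key only through $k \bmod N$, then the infinite family $\{k + tN : t \ge 0\}$ is perfectly interchangeable.

    import hashlib

    N = 2**128

    def keystream(key, length):
        # The key enters only via key % N, so the whole residue class acts
        # identically. (In this toy the class is easy to find; the scheme's
        # point is a family that is interchangeable yet hard to find.)
        material = hashlib.sha256((key % N).to_bytes(16, "big")).digest()
        while len(material) < length:
            material += hashlib.sha256(material[-32:]).digest()
        return material[:length]

    def xor_crypt(key, data):
        return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

    msg = b"family keys"
    ct = xor_crypt(0xC0FFEE, msg)
    assert xor_crypt(0xC0FFEE + 7 * N, ct) == msg  # any family member works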
Masayuki Abe, Miguel Ambrona, Andrej Bogdanov, Miyako Ohkubo, Alon Rosen
ePrint Report
Proofs of partial knowledge demonstrate the possession of certain subsets of witnesses for a given collection of statements $x_1,\dots,x_n$.
Cramer, Damg{\aa}rd, and Schoenmakers (CDS) built proofs of partial knowledge, given ``atomic'' protocols for individual statements $x_i$, by having the prover randomly secret-share the verifier's challenge and use the shares as challenges for the atomic protocols. This simple and highly influential transformation has been used in numerous applications, ranging from anonymous credentials to ring signatures.
We consider what happens if, instead of using the shares directly as challenges, the prover first hashes them. We show that this elementary enhancement can result in significant benefits: \begin{itemize} \item the proof contains a {\em single} atomic transcript per statement $x_i$, \item it suffices that the atomic protocols are $\kappa$-special sound for $\kappa \geq 2$, \item when compiled to a signature scheme using the Fiat-Shamir heuristic, its unforgeability can be proved in the {\em non-programmable} random oracle model. \end{itemize} None of the above features is satisfied by the CDS transformation.
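Schematically, the enhancement is tiny (the sketch below is illustrative Python with arbitrarily chosen parameters; the atomic protocols themselves are elided): XOR-share the verifier's challenge across the statements, then use the hash of each share, rather than the share itself, as the challenge of the corresponding atomic protocol.

    import hashlib, secrets

    def xor_bytes(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    def share_challenge(e, n):
        """XOR-share the verifier's challenge e into n shares."""
        shares = [secrets.token_bytes(len(e)) for _ in range(n - 1)]
        last = e
        for s in shares:
            last = xor_bytes(last, s)
        return shares + [last]

    def atomic_challenges(shares):
        """The enhancement: hash each share before using it."""
        return [hashlib.sha256(s).digest() for s in shares]

    e = secrets.token_bytes(32)
    shares = share_challenge(e, n=3)
    assert xor_bytes(xor_bytes(shares[0], shares[1]), shares[2]) == e
    challenges = atomic_challenges(shares)  # one challenge per statement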
Animesh Chhotaray, Thomas Shrimpton
ePrint Report
Design-hiding techniques are a central piece of academic and industrial efforts to protect electronic circuits from being reverse-engineered. However, these techniques have lacked a principled foundation to guide their design and security evaluation, leading to a long line of broken schemes. In this paper, we begin to lay this missing foundation.
We establish formal syntax for design-hiding (DH) schemes, a cryptographic primitive that encompasses all known design-stage methods to hide the circuit that is handed to a (potentially adversarial) foundry for fabrication. We give two security notions for this primitive: function recovery (FR) and key recovery (KR). The former is the ostensible goal of design-hiding methods to prevent reverse-engineering the functionality of the circuit, but most prior work has focused on the latter. We then present the first provably (FR,KR)-secure DH scheme, $OneChaff_{hd}$. A side-benefit of our security proof is a framework for analyzing a broad class of new DH schemes. We finish by unpacking our main security result to provide parameter-setting guidance.
Chao Sun, Thomas Espitau, Mehdi Tibouchi, Masayuki Abe
ePrint Report
In the past 30 years, lattice reduction has proved to be a powerful tool of public-key cryptanalysis. Since the advent of the Hidden Number Problem, there has been extensive study of attacks on (EC)DSA with nonce leakage. While lattice attacks require only a few signatures, they cannot deal with small nonce biases, in contrast with Bleichenbacher's attack. Prior to this work, it was unknown how to utilize more signatures to improve lattice attacks on (EC)DSA.
In this paper, we propose several approaches to improve lattice attacks. The key idea is that we can guess some bits of the secret key (or the nonces) and modify the standard lattice to increase the volume, thus making the lattice attack much easier. Besides, we observe that by filtering some specific signatures we are able to modify the lattice, so we can collect a large number of signatures and construct a lattice that is much easier to attack. With a combination of these techniques, we are able to improve lattice attacks on (EC)DSA. On the one hand, we are able to attack 160-bit-modulus (EC)DSA (and other moduli as well) with 2-bit leakage within $2^{15}$ BKZ-30 operations using 90 signatures. On the other hand, with $2^{27}$ signatures available, we are able to attack 160-bit (EC)DSA with 2-bit leakage in just one BKZ-30 operation.
As a second contribution, we give an explanation for several questions left unexplained in previous works. It was observed that SVP approaches (Kannan embedding) always outperform CVP approaches (nearest plane) and that the lattice attack is very sensitive to the Kannan embedding factor, but these questions were not discussed in previous works. We give an explanation for completeness.

Last, we carry out experiments on the TPM-Fail dataset. While the original attack utilizes around 40,000 signatures, with a combination of our methods we are able to recover the secret with only 800 signatures available.
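For context, the sketch below builds the standard hidden-number-problem lattice such attacks reduce to (construction only; the basis still has to be reduced with BKZ, e.g. via fpylll, and the column rescalings used in practice to balance the target vector are omitted). From each signature $(r, s)$ on hash $h$ with an $l$-bit-leaky nonce $k$ one gets $k \equiv t x + u \pmod q$ with $t = r/s$ and $u = h/s$.

    def hnp_lattice(ts, us, q, leak_bits):
        """Rows span a lattice containing the short vector
        (k_1, ..., k_n, x, K), where K bounds the unknown nonces."""
        n = len(ts)
        K = q >> leak_bits
        rows = [[q if j == i else 0 for j in range(n + 2)] for i in range(n)]
        rows.append(list(ts) + [1, 0])   # gets multiplied by the secret x
        rows.append(list(us) + [0, K])   # Kannan embedding row
        return rows

    # Toy instance: q = 101, secret x = 7, nonces k_i = t_i*x + u_i mod q.
    q, x = 101, 7
    ts, ks = [13, 29, 57], [5, 9, 3]
    us = [(k - t * x) % q for t, k in zip(ts, ks)]
    B = hnp_lattice(ts, us, q, leak_bits=3)
    # x*(t-row) + (u-row) - suitable multiples of the q*e_i rows
    # equals (k_1, k_2, k_3, x, K): short when the nonces are biased.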
Veronika Kuchta, Amin Sakzad, Damien Stehle, Ron Steinfeld, Shi-Feng Sun
ePrint Report
We introduce a new technique called `Measure-Rewind-Measure' (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the `double-sided' quantum One-Way to Hiding (O2H) Lemma of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous `impossibility result' of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
Yuncong Hu, Kian Hooshmand, Harika Kalidhindi, Seung Jin Yang, Raluca Ada Popa
ePrint Report
Transparency logs are designed to help users audit untrusted servers. For example, Certificate Transparency (CT) enables users to detect when a compromised Certificate Authority (CA) has issued a fake certificate. Practical state-of-the-art transparency log systems, however, suffer from high monitoring costs when used for low-latency applications. To reduce monitoring costs, such systems often require users to wait an hour or more for their updates to take effect, inhibiting low-latency applications. We propose $\text{Merkle}^2$, a transparency log system that supports both efficient monitoring and low-latency updates. To achieve this goal, we construct a new multi-dimensional, authenticated data structure that nests two types of Merkle trees, hence the name of our system, $\text{Merkle}^2$. Using this data structure, we then design a transparency log system with efficient monitoring and lookup protocols that enables low-latency updates. In particular, all the operations in $\text{Merkle}^2$ are independent of update intervals and are (poly)logarithmic to the number of entries in the log. $\text{Merkle}^2$ not only has excellent asymptotics when compared to prior work, but is also efficient in practice. Our evaluation shows that $\text{Merkle}^2$ propagates updates in as little as 1 second and can support 100× more users than state-of-the-art transparency logs.
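As background, the sketch below implements an ordinary, single-dimension Merkle tree with logarithmic inclusion proofs, the building block that transparency logs (and the nested trees of $\text{Merkle}^2$) are made of; it is not the paper's two-dimensional construction.

    import hashlib

    def H(data):
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [H(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])          # duplicate odd node
            level = [H(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    def inclusion_proof(leaves, index):
        proof, level = [], [H(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            proof.append(level[index ^ 1])       # sibling hash
            level = [H(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf, index, proof, root):
        node = H(leaf)
        for sibling in proof:
            node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
            index //= 2
        return node == root

    leaves = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
    root = merkle_root(leaves)
    assert verify(b"cert-c", 2, inclusion_proof(leaves, 2), root)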