IACR News
Here you can see all recent updates to the IACR webpage.
28 October 2022
Benoit Chevallier-Mames
ePrint Report
Goh and Jarecki (Eurocrypt 2003) showed how to get a signature scheme from the computational Diffie-Hellman assumption, and they introduced the name EDL for signatures of this type. The corresponding EDL family of signature schemes is remarkable for several reasons: elegance, simplicity and tight security. However, EDL security proofs are set in the random oracle model, and, to the best of our knowledge, extending this family without using an idealization of hash functions has never been successful.
In this paper, we propose a new signature scheme belonging to the EDL family, which is simple, natural and efficient, without using the random oracle model. Our scheme is based on the very same assumption as the Boneh-Boyen scheme, namely the strong Diffie-Hellman assumption, with the difference that our groups are not required to be bilinear. We also make use of a correlation-intractable hash function, for a particular relation related to the discrete logarithm.
In addition to the theoretical interest of extending the EDL family without the random oracle model, our scheme is also one of the very few schemes which achieve discrete-log security properties without relying on pairings.
Jesús-Javier Chi-Domínguez
ePrint Report
This paper centers on the SIDH proof of knowledge work by De Feo, Dobson, Galbraith, and Zobernig, which points out that the Castryck-Decru attack does not apply to their first 3-special soundness construction.
This work analyzes and explicitly describes an optimized recoverable Sigma protocol based on that 3-special soundness SIDH-PoK construction.
We also discuss the impact of moving to B-SIDH and G2SIDH setups in terms of sizes.
Due to the Castryck-Decru attack, we decided to write this paper relying on a theoretical analysis to list expected optimized signature sizes instead of updating eprint 2022/475. We point out that this work is a theoretical analysis extension of eprint 2022/475.
Miranda Christ, Joseph Bonneau
ePrint Report
Motivated by the goal of building a cryptocurrency with succinct global state, we introduce the abstract notion of a revocable proof system. We prove an information-theoretic result on the relation between global state size and the required number of local proof updates as statements are revoked (e.g., coins are spent). We apply our result to conclude that there is no useful trade-off point when building a stateless cryptocurrency: the system must either have a linear-sized global state (in the number of accounts in the system) or require a near-linear rate of local proof updates. The notion of a revocable proof system is quite general and also provides new lower bounds for set commitments, vector commitments and authenticated dictionaries.
Christian Picozzi, Alessio Meneghetti, Giovanni Tognolini
ePrint Report
We propose a novel post-quantum code-based digital signature algorithm whose security is based on the difficulty of decoding Quasi-Cyclic codes in systematic form, and whose trapdoor relies on the knowledge of a hidden Quasi-Cyclic Low-Density-Parity-Check (QC-LDPC) code. The utilization of Quasi-Cyclic (QC) codes allows us to balance security and key size, while the LDPC property significantly lightens the encoding complexity, and thus the complexity of the signing algorithm.
Valence Cristiani, Maxime Lecomte, Philippe Maurine
ePrint Report
Unsupervised side-channel attacks allow extracting secret keys manipulated by cryptographic primitives through leakages of their physical implementations. As opposed to supervised attacks, they do not require a preliminary profiling of the target, constituting a broader threat since they imply weaker assumptions on the adversary model. Their downside is their requirement for some a priori knowledge of the leakage model of the device. On one hand, stochastic attacks such as Linear Regression Analysis (LRA) allow for a flexible a priori, but are mostly limited to a univariate treatment of the traces. On the other hand, model-based attacks require an explicit formulation of the leakage model but have recently been extended to multidimensional versions, allowing them to benefit from the potential of Deep Learning (DL) techniques. The EVIL Machine Attack (EMA), introduced in this paper, aims at taking the best of both worlds. Inspired by generative adversarial networks, its architecture is able to recover a representation of the leakage model, which is then turned into a key distinguisher allowing a flexible a priori. In addition, state-of-the-art DL techniques require 256 network trainings to conduct the attack, whereas EMA requires only one, scaling down the time complexity of such attacks by a considerable factor. Simulations and real experiments show that EMA is applicable in cases where the adversary has very little knowledge of the leakage model, while significantly reducing the required number of traces compared to a classical LRA. Finally, a generalization of EMA, able to deal with masked implementations, is introduced.
Thomas Kaeding
ePrint Report
We demonstrate that with some ideas from group theory we are very often able to recover the keywords for a quagmire cipher from its key table. This would be the last task for a cryptologist in analyzing such a cipher.
Samuel Bouaziz--Ermann, Alex B. Grilo, Damien Vergnaud
ePrint Report
The subset cover problem for $k \geq 1$ hash functions, which can be seen as an extension of the collision problem, was introduced in 2002 by Reyzin and Reyzin to analyse the security of their hash-function-based signature scheme HORS. The security of many hash-based signature schemes relies on this problem or a variant of it (e.g. HORS, SPHINCS, SPHINCS+, ...).
Recently, Yuan, Tibouchi and Abe (2022) introduced a variant of the subset cover problem, called restricted subset cover, and proposed a quantum algorithm for it. In this work, we prove that any quantum algorithm needs to make $\Omega\left(k^{-\frac{2^{k-1}}{2^k-1}}\cdot N^{\frac{2^{k-1}-1}{2^k-1}}\right)$ queries to the underlying hash functions to solve the restricted subset cover problem, which essentially matches the query complexity of the algorithm proposed by Yuan, Tibouchi and Abe.
We also analyze the security of the general $(r,k)$-subset cover problem, which is the underlying problem that implies the unforgeability of HORS under an $r$-chosen-message attack (for $r \geq 1$). We prove that a generic quantum algorithm needs to make $\Omega\left(N^{k/5}\right)$ queries to the underlying hash functions to find a $(1,k)$-subset cover.
We also propose a quantum algorithm that finds an $(r,k)$-subset cover making $O\left(N^{k/(2+2r)}\right)$ queries to the $k$ hash functions.
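As a quick sanity check (ours, not taken from the paper): instantiating the lower bound at $k=2$ gives $\Omega\left(2^{-\frac{2}{3}} \cdot N^{\frac{1}{3}}\right) = \Omega\left(N^{1/3}\right)$, essentially the well-known quantum query complexity of collision finding, which is consistent with the subset cover problem being an extension of the collision problem.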
Minki Hhan, Jiseung Kim, Changmin Lee, Yongha Son
ePrint Report
Cryptographic primitives based on the Learning With Errors (LWE) problem and its variants are promising candidates for efficient quantum-resistant public-key cryptosystems. Recent schemes use the LWE problem with a small-norm or sparse secret key for better efficiency. Such constraints, however, lead to more tailor-made attacks and thus to a trade-off between efficiency and security. Improving the algorithms for the LWE problem under these constraints thus has a significant consequence for the concrete security of schemes.
In this paper, we present a new hybrid attack on the LWE problem. This new attack combines the primal lattice attack and an improved MitM attack called Meet-LWE, answering an open problem posed by May [Crypto'21].
According to our estimation, the new hybrid attack performs better than previous attacks for LWE problems with a sparse ternary secret key, which play a significant role in the efficiency of fully homomorphic encryption schemes.
On the technical side, we generalize the Meet-LWE algorithm to be compatible with Babai's nearest plane algorithm. As a side contribution, we remove the error-guessing step in Meet-LWE, resolving another open question.
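For readers unfamiliar with the lattice-decoding subroutine named above, the following is a minimal numpy sketch (ours, purely illustrative; it shows plain Babai nearest plane on a toy basis, not the paper's Meet-LWE generalization):

    import numpy as np

    def gram_schmidt(B):
        # Unnormalized (modified) Gram-Schmidt orthogonalization of the
        # row basis B.
        Bs = B.astype(float).copy()
        for i in range(len(Bs)):
            for j in range(i):
                mu = np.dot(Bs[i], Bs[j]) / np.dot(Bs[j], Bs[j])
                Bs[i] -= mu * Bs[j]
        return Bs

    def babai_nearest_plane(B, t):
        # Walk from the last basis vector to the first; at each step pick
        # the translate of the span of the earlier vectors (the "nearest
        # plane") closest to the remaining target.
        Bs = gram_schmidt(B)
        v = np.zeros(B.shape[1])
        w = np.array(t, dtype=float)
        for i in range(B.shape[0] - 1, -1, -1):
            c = round(np.dot(w, Bs[i]) / np.dot(Bs[i], Bs[i]))
            v += c * B[i]
            w -= c * B[i]
        return v  # a lattice vector close to t; t - v is the decoding error

    B = np.array([[7, 0], [3, 5]])          # toy planar lattice basis
    print(babai_nearest_plane(B, [10.2, 4.7]))  # [10.  5.] = B[0] + B[1]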
27 October 2022
Roberto Avanzi, Ionut Mihalcea, David Schall, Andreas Sandberg
ePrint Report
For both cloud and client applications, the protection of the confidentiality and integrity of remotely processed information is an increasingly common feature request. It is also a very challenging goal to achieve with reasonable costs in terms of memory overhead and performance penalty. In turn, this usually leads to security posture compromises in products.
In this paper we review the main technologies that have been proposed so far to address this problem, as well as some new techniques and combinations thereof. We systematise the treatment of protecting data in use by starting with models of the adversaries, thus allowing us to define different, yet consistent protection levels. We evaluate the storage and performance impacts and, for the first time as far as we are aware, we also consider the impact on performance both when the measured benchmarks are the only running tasks and when they are just one task in an environment with heavy additional random traffic, thus simulating a cloud server under full load.
Using advanced techniques to compress counters can make it viable to store them on-chip -- for instance by adding on-chip RAM that can be as small as 1/256th of the off-chip RAM. This allows for implementations of memory protection providing full confidentiality, integrity and anti-replay protection with lower penalties than hitherto attained, especially in combination with the repurposing of ECC bits to store integrity tags. The performance penalty on a server with saturated memory bus bandwidth can thus be contained under 1%.
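As a back-of-the-envelope illustration (our arithmetic, not a figure from the paper): at the $1/256$ ratio, protecting $64$ GiB of off-chip DRAM would require only $64\,\mathrm{GiB}/256 = 256$ MiB of on-chip counter storage.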
Xiangyu Su, Xavier Défago, Mario Larangeira, Kazuyuki Mori, Takuya Oda, Yuta Okumura, Yasumasa Tamura, Keisuke Tanaka
ePrint Report
The demand for peer-to-peer (P2P) energy trading systems (ETS) grows alongside the development of household renewable energy generation. A P2P/ETS enables its peers to trade energy freely as in a double auction market, and it requires a ledger to record peers' trading history. A typical approach relies on a decentralized ledger, e.g., a blockchain, with smart contract capabilities, unavoidably incurring high costs. Therefore, motivated to build a smart-contract-free system, this work proposes a novel blockchain and consensus design utilizing the double auction characteristics of P2P/ETS. Concretely, we first revisit the blockchain data structure so that it can reflect auction bids. Next, we introduce a novel mining mechanism utilizing a bid-matching problem (BMP), which requires miners to find the best combinations of sell/buy bids according to a given scoring function. Hence, the miner who mines the best-scored block can extend the blockchain. The fundamental difference between BMP-based mining and traditional proof-of-X schemes, e.g., work or stake, is that our protocol selects blocks instead of miners. That is, a higher-scored block has better contents (bids and transactions), and is thus preferable to a lower-scored block regardless of whether the miner is honest. Finally, we analyze miners' local chain dynamics and show a bound for the score distribution of the scoring function to prove that the protocol satisfies the key properties of consensus, i.e., persistence and liveness.
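To give a feel for the mining objective, here is a toy sketch (ours; the paper's actual BMP and scoring function are more involved) in which a block's score is simply the total volume matched between compatible sell/buy bids:

    def block_score(sell_bids, buy_bids):
        # Bids are (price, volume) pairs. Greedily match the cheapest
        # sells against the highest-paying buys and score the block by
        # the total volume traded.
        sells = sorted(sell_bids)              # cheapest first
        buys = sorted(buy_bids, reverse=True)  # highest price first
        score, i, j = 0, 0, 0
        while i < len(sells) and j < len(buys):
            (s_price, s_vol), (b_price, b_vol) = sells[i], buys[j]
            if b_price < s_price:              # no compatible pair left
                break
            traded = min(s_vol, b_vol)
            score += traded
            sells[i] = (s_price, s_vol - traded)
            buys[j] = (b_price, b_vol - traded)
            if sells[i][1] == 0:
                i += 1
            if buys[j][1] == 0:
                j += 1
        return score

    # A miner would publish the block whose bid matching scores highest;
    # the chain-selection rule prefers the higher-scored block.
    print(block_score([(3, 10), (5, 4)], [(6, 8), (4, 5)]))  # 10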
Daniel Lubarov, Jordi Baylina Melé
ePrint Report
We describe a nondeterministic method for bignum arithmetic. It is inspired by the "casting out nines" technique, where some identity is checked modulo 9, providing a probabilistic result.
More generally, we might check that some identity holds under a set of moduli, i.e. $f(\vec{x}) = 0 \bmod m_i$ for each $m_i \in M$. Then $f(\vec{x}) = 0 \bmod \operatorname{lcm}(M)$, and if we know $|f(\vec{x})| < \operatorname{lcm}(M)$, it follows that $f(\vec{x}) = 0$.
We show how to perform such small-modulus checks efficiently, for certain $f(\vec{x})$ such as bignum multiplication. We focus on the cost model of zero-knowledge proof systems, which support field arithmetic and range checks as native operations.
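A toy sketch of the underlying soundness argument (ours, not the paper's implementation; a real proof system would enforce the range bound with native range checks on committed values rather than computing $f(\vec{x})$ in the clear):

    import math

    def verify_product(x, y, z, moduli):
        # Accept iff x * y == z modulo every m in `moduli`. Sound whenever
        # |x*y - z| < lcm(moduli); in a proof system that bound comes from
        # range checks on the witnesses, not from computing x*y directly.
        return all((x % m) * (y % m) % m == z % m for m in moduli)

    # Three well-known primes; their lcm is their product, about 2^91.
    moduli = [2**31 - 1, 10**9 + 7, 998244353]
    x, y = 2**40 + 123, 2**40 + 456                 # x * y ~ 2^80 < lcm
    print(verify_product(x, y, x * y, moduli))      # True
    print(verify_product(x, y, x * y + 1, moduli))  # False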
Andrea Basso, Giulio Codogni, Deirdre Connolly, Luca De Feo, Tako Boris Fouotsa, Guido Maria Lido, Travis Morrison, Lorenz Panny, Sikhar Patranabis, Benjamin Wesolowski
ePrint Report
Generating a supersingular elliptic curve such that nobody knows its endomorphism ring is a notoriously hard task, despite several isogeny-based protocols relying on such an object. A trusted setup is often proposed as a workaround, but several aspects remain unclear. In this work, we develop the tools necessary to practically run such a distributed trusted-setup ceremony.
Our key contribution is the first statistically zero-knowledge proof of isogeny knowledge that is compatible with any base field. To prove statistical ZK, we introduce isogeny graphs with Borel level structure and prove they have the Ramanujan property. Then, we analyze the security of a distributed trusted-setup protocol based on our ZK proof in the simplified universal composability framework. Lastly, we develop an optimized implementation of the ZK proof, and we propose a strategy to concretely deploy the trusted-setup protocol.
Hao Guo, Sayandeep Saha, Satwik Patnaik, Vasudev Gohil, Debdeep Mukhopadhyay, Jeyavijayan (JV) Rajendran
ePrint Report
A fault attack (FA) is one of the most potent threats to cryptographic applications. Implementing a FA-protected block cipher requires knowledge of the exploitable fault space of the underlying crypto algorithm. The discovery of exploitable faults is a challenging problem that demands human expertise and time. Current practice is to rely on certain predefined fault models. However, the applicability of such fault models varies among ciphers. Prior work discovers such exploitable fault models individually for each cipher at the expense of a large amount of human effort. Our work completely replaces human effort by using reinforcement learning (RL) over the huge fault space of a block cipher to discover the effective fault models automatically. Validation on the AES block cipher demonstrates that our approach can automatically discover the effective fault models within a few hours, outperforming prior work, which requires days of manual analysis. The proposed approach also reveals vulnerabilities in existing FA-protected block ciphers and initiates an end-to-end vulnerability assessment flow.
26 October 2022
Emanuele Bellini, David Gerault, Anna Hambitzer, Matteo Rossi
ePrint Report
Neural cryptanalysis is the study of cryptographic primitives through machine learning techniques. We review recent results in neural cryptanalysis, and identify the obstacles to its application to new, different primitives. As a response, we provide a generic tool for neural cryptanalysis, composed of two parts. The first part is an evolutionary algorithm for the search of single-key and related-key input differences that works well with neural distinguishers; this algorithm fixes scaling issues with Gohr's initial approach and enables the search for larger ciphers, while removing the dependency on machine learning, to focus on cryptanalytic methods. The second part is DBitNet, a neural distinguisher architecture agnostic to the structure of the cipher. We show that DBitNet outperforms state-of-the-art architectures on a range of instances. Using our tool, we improve on the state-of-the-art neural distinguishers for SPECK64, SPECK128, SIMON64, SIMON128 and GIMLI-PERMUTATION and provide new neural distinguishers for HIGHT, LEA, TEA, XTEA and PRESENT.
Cyril Bouvier, Guilhem Castagnos, Laurent Imbert, Fabien Laguillaumie
ePrint Report
We introduce BICYCL, an open-source C++ library that implements arithmetic in the ideal class groups of imaginary quadratic fields, together with a set of cryptographic primitives based on class groups. It is available at https://gite.lirmm.fr/crypto/bicycl under the GNU General Public License version 3 or any later version. BICYCL provides significant speed-ups in the implementation of class group arithmetic. Concerning cryptographic applications, BICYCL is orders of magnitude faster than any previous pilot implementation of the CL linearly homomorphic encryption scheme, making it faster than Paillier's encryption scheme at any security level. Linearly homomorphic encryption is the core of many multi-party computation protocols, sometimes involving a huge number of encryptions and homomorphic evaluations, and class-group-based protocols become the best solution to rely upon in terms of bandwidth and computational efficiency.
Erik-Oliver Blass, Florian Kerschbaum
ePrint Report
We introduce and investigate the privacy-preserving version of collaborative data cleaning. With collaborative data cleaning, two parties want to reconcile their data sets to filter out badly classified, misclassified data items. In the privacy-preserving (private) version of data cleaning, the additional security goal is that parties should only learn their misclassified data items, but nothing else about the other party's data set. The problem of private data cleaning is essentially a variation of private set intersection (PSI), and one could employ recent circuit-PSI techniques to compute misclassifications with privacy. However, we design, analyze, and implement three new protocols tailored to the specifics of private data cleaning that significantly outperform a circuit-PSI-based approach. With the first protocol, we exploit the idea that a small additional leakage (the size of the intersection of data items) allows for runtime and communication improvements of more than one order of magnitude over circuit-PSI. The other two protocols convert the problem of finding a mismatch in data classifications into finding a match, and then follow the standard technique of using oblivious pseudo-random functions (OPRF) for computing PSI. Depending on the number of data classes, this leads to either total runtime or communication improvements of up to two orders of magnitude over circuit-PSI.
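To illustrate the matching step that the last two protocols build on, here is a toy sketch (ours) of the PSI-via-PRF data flow; a real protocol evaluates the PRF obliviously via an OPRF so that neither party sees the other's inputs, whereas here HMAC merely stands in for the PRF and everything is computed locally:

    import hmac, hashlib

    def prf(key, item):
        # HMAC-SHA256 as a stand-in PRF; a real protocol would use an
        # OPRF so the key holder never sees `item`.
        return hmac.new(key, item.encode(), hashlib.sha256).digest()

    key = b"example-key"                      # held by one party only
    alice = {"item1:classA", "item2:classB"}
    bob = {"item1:classA", "item3:classC"}

    alice_tags = {prf(key, x) for x in alice}
    # In the real protocol Bob learns prf(key, y) through the OPRF
    # without revealing y, and Alice learns nothing about Bob's set.
    matches = {y for y in bob if prf(key, y) in alice_tags}
    print(matches)  # {'item1:classA'}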
Emanuele Bellini, Jorge Chavez-Saab, Jesús-Javier Chi-Domínguez, Andre Esser, Sorina Ionica, Luis Rivera-Zamarripa, Francisco Rodríguez-Henríquez, Monika Trimoska, Floyd Zweydinger
ePrint Report
The security guarantees of most isogeny-based protocols rely on the computational hardness of finding an isogeny between two isogenous supersingular curves defined over a finite field $\mathbb{F}_q$ with $q$ a power of a large prime $p$. In most scenarios, the isogeny is known to be of degree $\ell^e$ for some small prime $\ell$. We call this problem the Supersingular Fixed-Degree Isogeny Path (SIPFD) problem.
It is believed that the most general version of SIPFD is not solvable faster than in exponential time by classical as well as quantum attackers.
In a classical setting, a meet-in-the-middle algorithm is the fastest known strategy for solving the SIPFD. However, due to its stringent memory requirements, it quickly becomes infeasible for moderately large SIPFD instances. In a practical setting, one therefore has to resort to time-memory trade-offs to instantiate attacks on the SIPFD. This is particularly true for GPU platforms, which are inherently more memory-constrained than CPU architectures. In such a setting, a van Oorschot-Wiener-based collision finding algorithm offers a better asymptotic scaling. Finding the best algorithmic choice for solving instances under a fixed prime size, memory budget and computational platform has so far remained an open problem.
To answer this question, we present a precise estimation of the costs of both strategies, considering the most recent algorithmic improvements. As a second main contribution, we substantiate our estimations via optimized software implementations of both algorithms. In this context, we provide the first optimized GPU implementation of the van Oorschot-Wiener approach for solving the SIPFD. Based on practical measurements, we extrapolate the running times for solving different-sized instances. Finally, we give estimates of the costs of computing a degree-$2^{88}$ isogeny using our CUDA software library running on an NVIDIA A100 GPU server.
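To illustrate the memory behaviour driving this trade-off, here is a generic meet-in-the-middle sketch (ours, on a toy graph; in an actual SIPFD attack the nodes are supersingular curves, the edges are degree-$\ell$ isogenies, and collisions are detected on $j$-invariants):

    p = 101

    def neighbors(x):
        # Toy 4-regular undirected graph on Z_p standing in for the
        # ell-isogeny graph.
        return [(x + 1) % p, (x - 1) % p, (x + 5) % p, (x - 5) % p]

    def reach(start, depth):
        # All nodes within `depth` steps. The stored sets grow
        # exponentially with depth -- the memory bottleneck that the
        # low-memory van Oorschot-Wiener approach avoids.
        seen, frontier = {start}, {start}
        for _ in range(depth):
            frontier = {y for x in frontier for y in neighbors(x)} - seen
            seen |= frontier
        return seen

    def mitm_meet(start, end, n):
        # Nonempty iff start and end are joined by a path of length <= 2n.
        return reach(start, n) & reach(end, n)

    print(sorted(mitm_meet(0, 23, 4)))  # midpoints of short paths 0 -> 23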
Ian McQuoid, Mike Rosulek, Jiayu Xu
ePrint Report
We introduce the idea of input obfuscation for secure two-party computation ($\textsf{io2PC}$). Suppose Alice holds a private value $x$ and wants to allow clients to learn $f(x,y_i)$, for their choice of $y_i$, via a secure computation protocol. The goal of $\textsf{io2PC}$ is for Alice to encode $x$ so that an adversary who compromises her storage gets only oracle access to the function $f(x,\cdot)$. At the same time, there must be a 2PC protocol for computing $f(x,y)$ that takes only this encoding (and not the plaintext $x$) as input. We show how to achieve $\textsf{io2PC}$ for functions that have virtual black-box (VBB) obfuscation in either the random oracle model or generic group model. For functions that can be VBB-obfuscated in the random oracle model, we provide an $\textsf{io2PC}$ protocol by replacing the random oracle with an oblivious PRF. For functions that can be VBB-obfuscated in the generic group model, we show how Alice can instantiate a "personalized" generic group. A personalized generic group is one where only Alice can perform the algebraic operations of the group, but where she can let others perform operations in that group via an oblivious interactive protocol.
Rasheed Kibria, M. Sazadur Rahman, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
At the early stage of the design process, many security vulnerability assessment solutions require fast and precise extraction of the finite state machines (FSMs) present in the register-transfer level (RTL) description of the design. FSMs should be accurately extracted for watermark insertion, fault injection assessment of control paths in a system-on-chip (SoC), information leakage assessment, control-flow reverse engineering in RTL abstraction, logic obfuscation, etc. Unfortunately, as of today, existing state-of-the-art synthesis tools cannot provide accurate and reliable extraction of all FSMs from the provided high-level RTL code. Precisely identifying all FSM state registers, and the purely combinational state transition logic, within RTL code containing numerous other registers and combinational logic makes developing such a solution quite challenging. In this paper, we propose a framework named RTL-FSMx to extract FSMs from high-level RTL code written in Verilog HDL. RTL-FSMx utilizes node-based analysis on the abstract syntax tree (AST) representation of the RTL code to isolate FSM state registers from other registers. RTL-FSMx automatically extracts a state transition graph (STG) for each of the detected FSM state registers, along with additional information about the extracted FSMs. Experimental results on a large number of benchmark circuits demonstrate that RTL-FSMx accurately recovers all control FSMs from RTL code of various complexity and size within just a few seconds.
University of Virginia
Job Posting
One PhD position is available. Candidates with a background in cryptography in general, and multi-party computation specifically, are encouraged to apply.
The University of Virginia is among the top 25 universities and the top 3 public universities in the US (according to USNews).
Full financial support (including full tuition coverage and a stipend of ~$2500 after-tax) will be guaranteed. The living cost is very reasonable (~$500-$1000 depending on the floor plan, e.g., if you live in a 3-bedroom apartment, ~$500 would suffice).
Closing date for applications:
Contact: Tianhao Wang