International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

02 March 2022

Shingo Sato, Junji Shikata
ePrint Report
An aggregate signature (ASIG) scheme allows any user to compress multiple signatures into a short signature called an aggregate signature. While a conventional ASIG scheme cannot detect any invalid messages from an aggregate signature, an ASIG scheme with detecting functionality (D-ASIG) can additionally identify invalid messages from aggregate signatures. D-ASIG is therefore useful for reducing the total signature size transmitted over a channel. Meanwhile, the development of quantum computers has advanced rapidly, and all existing D-ASIG schemes are insecure against attacks using quantum algorithms, which we call quantum attacks. In this paper, we propose a D-ASIG scheme that is quantum-secure, i.e., secure in a quantum setting. To this end, we first introduce quantum-security notions for ASIGs and D-ASIGs, since no such security notions have been studied for (D-)ASIGs. Second, we propose a lattice-based aggregate one-time signature scheme with detecting functionality, and prove that it satisfies our quantum-security notion in the quantum random oracle model and the certified-key model. This scheme is thus the first quantum-secure D-ASIG.
Alexander May, Julian Nowakowski, Santanu Sarkar
ePrint Report
We address Partial Key Exposure attacks on CRT-RSA on secret exponents $d_p, d_q$ with small public exponent $e$. For constant $e$ it is known that the knowledge of half of the bits of one of $d_p, d_q$ suffices to factor the RSA modulus $N$ by Coppersmith's famous {\em factoring with a hint} result. We extend this setting to non-constant $e$. Somewhat surprisingly, our attack shows that RSA with $e$ of size $N^{\frac 1 {12}}$ is most vulnerable to Partial Key Exposure, since in this case only a third of the bits of both $d_p, d_q$ suffices to factor $N$ in polynomial time, knowing either most significant bits (MSB) or least significant bits (LSB).

Let $ed_p = 1 + k(p-1)$ and $ed_q = 1 + \ell(q-1)$. On the technical side, we find the factorization of $N$ in a novel two-step approach. In a first step we recover $k$ and $\ell$ in polynomial time, by completely elementary means in the MSB case and using Coppersmith's lattice-based method in the LSB case. We then obtain the prime factorization of $N$ by computing the root of a univariate polynomial modulo $kp$ for our known $k$. This can be seen as an extension of Howgrave-Graham's {\em approximate divisor} algorithm to the case of {\em approximate divisor multiples} for some known multiple $k$ of an unknown divisor $p$ of $N$. The point of {\em approximate divisor multiples} is that the unknown that is recoverable in polynomial time grows linearly with the size of the multiple $k$.

Our resulting Partial Key Exposure attack with known MSBs is completely rigorous, whereas in the LSB case we rely on a standard Coppersmith-type heuristic. We experimentally verify our heuristic, thereby showing that in practice we reach our asymptotic bounds already using small lattice dimensions. Thus, our attack is highly efficient.
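To see why exposure of the CRT exponents is so dangerous, the following toy sketch (our own illustration with tiny parameters, not the paper's partial-exposure attack, which needs only a fraction of the bits of $d_p$) demonstrates the key relation $ed_p = 1 + k(p-1)$: it implies $m^{ed_p} \equiv m \pmod p$ for any $m$, so a *fully* known $d_p$ factors $N$ with a single gcd.

```python
# Minimal sketch (toy parameters, assumption: illustrative only).
from math import gcd

p, q, e = 1009, 1013, 65537
N = p * q
d_p = pow(e, -1, p - 1)              # CRT exponent: e*d_p ≡ 1 (mod p-1)

m = 2
# m^(e*d_p) - m is divisible by p (Fermat's little theorem), but
# generically not by q, so the gcd with N reveals the factor p.
recovered = gcd(pow(m, e * d_p, N) - m, N)
assert recovered == p
```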
Jelle Don, Serge Fehr, Christian Majenz, Christian Schaffner
ePrint Report
Commit-and-open Sigma-protocols are a popular class of protocols for constructing non-interactive zero-knowledge arguments and digital-signature schemes via the Fiat-Shamir transformation. Instantiated with hash-based commitments, the resulting non-interactive schemes enjoy tight online-extractability in the random oracle model. Online extractability improves the tightness of security proofs for the resulting digital-signature schemes by avoiding lossy rewinding or forking-lemma based extraction. In this work, we prove tight online extractability in the quantum random oracle model (QROM), showing that the construction supports post-quantum security. First, we consider the default case where committing is done by element-wise hashing. In a second part, we extend our result to Merkle-tree based commitments. Our results yield a significant improvement of the provable post-quantum security of the digital-signature scheme Picnic. Our analysis makes use of a recent framework by Chung et al. for analysing quantum algorithms in the QROM using purely classical reasoning. Therefore, our results can to a large extent be understood and verified without prior knowledge of quantum information science.
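For readers unfamiliar with the protocol class, here is a minimal sketch of one commit-and-open round made non-interactive via Fiat-Shamir, with element-wise hash-based commitments (the paper's "default case"). The committed values and the challenge derivation are simplified stand-ins of our own, not Picnic's actual MPC-in-the-head computation.

```python
# Minimal sketch (assumptions: toy values, simplified challenge derivation).
import hashlib, os

def commit(value: bytes):
    r = os.urandom(32)                               # commitment randomness
    return hashlib.sha256(r + value).digest(), r     # (com, opening)

values = [b"share-%d" % i for i in range(4)]         # prover's secret values
coms, opens = zip(*(commit(v) for v in values))

# Fiat-Shamir: the challenge is derived by hashing all commitments.
challenge = hashlib.sha256(b"".join(coms)).digest()[0] % len(values)

# Commit-and-open shape: the prover opens all but the challenged commitment,
# and the verifier recomputes each opened hash.
for i, v in enumerate(values):
    if i != challenge:
        assert hashlib.sha256(opens[i] + v).digest() == coms[i]
```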
Maxime Bombar, Alain Couvreur, Thomas Debris-Alazard
ePrint Report
It is a long-standing open problem to find search-to-decision reductions for structured versions of the decoding problem of linear codes. Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE and so on. We propose a function field version of the LWE problem. This new framework leads to another point of view on structured codes, e.g. quasi-cyclic codes, strengthening the connection between lattice-based and code-based cryptography. In particular, we obtain the first search-to-decision reduction for structured codes. Following the historical constructions in lattice-based cryptography, we instantiate our construction with function field analogues of cyclotomic fields, namely Carlitz extensions, leading to search-to-decision reductions on various versions of Ring-LPN, which have applications to secure multi-party computation and to an authentication protocol.
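As a point of reference for the structured problems mentioned above, the toy sketch below generates a Ring-LPN sample over the quasi-cyclic ring $\mathbb{F}_2[X]/(X^n - 1)$. The parameters are illustrative assumptions of ours; the paper's function field framework (Carlitz extensions) is not reproduced here.

```python
# Minimal sketch (toy parameters; noise rate 0.1 is an assumption).
import random

n = 16                                 # ring F_2[X]/(X^n - 1): quasi-cyclic

def ring_mul(a, b):
    """Product of two F_2 polynomials modulo X^n - 1 (cyclic convolution)."""
    c = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[(i + j) % n] ^= bj
    return c

s = [random.randrange(2) for _ in range(n)]                  # secret
a = [random.randrange(2) for _ in range(n)]                  # uniform public
e = [1 if random.random() < 0.1 else 0 for _ in range(n)]    # sparse noise
b = [x ^ y for x, y in zip(ring_mul(a, s), e)]               # b = a*s + e
# Decision Ring-LPN: distinguish samples (a, b) from uniform pairs.
```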
Mihir Bellare, Viet Tung Hoang
ePrint Report
This paper provides efficient authenticated-encryption (AE) schemes in which a ciphertext is a commitment to the key. These are extended, at minimal additional cost, to schemes where the ciphertext is a commitment to all encryption inputs, meaning key, nonce, associated data and message. Our primary schemes are modifications of GCM (for basic, unique-nonce AE security) and AES-GCM-SIV (for misuse-resistant AE security) and add both forms of commitment without any increase in ciphertext size. We also give more generic, but somewhat more costly, solutions.
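For intuition, the sketch below shows the generic (and somewhat more costly) route: shipping a hash-based commitment to the key and nonce alongside the ciphertext. The AEAD itself is abstracted as a pair of callables, which is an assumption of this illustration; the paper's primary schemes instead build the commitment into GCM and AES-GCM-SIV with no ciphertext growth.

```python
# Minimal sketch (assumption: `aead_enc`/`aead_dec` are callables for some
# real AEAD; this wrapper only illustrates the generic commitment idea).
import hashlib

def committing_encrypt(aead_enc, key, nonce, ad, msg):
    com = hashlib.sha256(b"commit" + key + nonce).digest()  # binds key, nonce
    return aead_enc(key, nonce, ad, msg), com

def committing_decrypt(aead_dec, key, nonce, ad, ct, com):
    if hashlib.sha256(b"commit" + key + nonce).digest() != com:
        raise ValueError("ciphertext does not commit to this key/nonce")
    return aead_dec(key, nonce, ad, ct)
```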
Gang Tang, Dung Hoang Duong, Antoine Joux, Thomas Plantard, Youming Qiao, Willy Susilo
ePrint Report
In this paper, we propose a practical signature scheme based on the alternating trilinear form equivalence problem. Our scheme is inspired by the Goldreich-Micali-Wigderson zero-knowledge protocol for graph isomorphism, and can serve as an alternative candidate for NIST's post-quantum digital signatures. First, we present theoretical evidence to support its security, especially in the post-quantum cryptography context. This evidence is drawn from several research lines, including hidden subgroup problems, multivariate cryptography, cryptography based on group actions, the quantum random oracle model, and recent advances on isomorphism problems for algebraic structures in algorithms and complexity. Second, we demonstrate its potential for practical use. Based on algorithmic studies, we propose concrete parameter choices and implement a prototype. One concrete scheme achieves 128-bit security with public key size ~ 4100 bytes, signature size ~ 6800 bytes, and running times (key generation, sign, verify) ~ 0.8 ms on a common laptop computer.
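The inspiration named above, the Goldreich-Micali-Wigderson zero-knowledge protocol, can be sketched in a few lines for its original setting, graph isomorphism. This is the classical protocol only, with toy graphs of our choosing, not the paper's alternating-trilinear-form instantiation or its parameters.

```python
# Minimal sketch (one round of the classical GMW sigma protocol).
import random

def apply_perm(pi, edges):
    """Relabel an edge set by permutation pi (pi[v] = image of vertex v)."""
    return frozenset(tuple(sorted((pi[u], pi[v]))) for u, v in edges)

n = 6
secret = list(range(n)); random.shuffle(secret)          # secret isomorphism
G0 = frozenset({(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (0, 3)})
G1 = apply_perm(secret, G0)                              # public key: (G0, G1)

# One round has soundness error 1/2; real schemes repeat it many times and
# make it non-interactive via Fiat-Shamir.
sigma = list(range(n)); random.shuffle(sigma)
H = apply_perm(sigma, G1)                                # prover's commitment
b = random.randrange(2)                                  # verifier's challenge
resp = sigma if b else [sigma[secret[v]] for v in range(n)]  # sigma ∘ secret
assert apply_perm(resp, G1 if b else G0) == H            # verifier's check
```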
Likang Lu, Jianzhu Lu
ePrint Report
We consider distributed secret sharing techniques, in which a secret is encoded into shares that are conveyed to an IoT device or its user via storage nodes. A verifiably distributed secret sharing (VDSS) scheme provides a way for a legitimate user to verify the secret reconstructed from the downloaded shares, while the secrecy condition is satisfied in a weak or a perfect sense. This article examines the impact of minimizing verification information in a VDSS on the communication complexity and storage overhead, and achieves verifiability in resource-limited IoTs by aggregating the verification information of different devices/users. Two secure VDSSs are then proposed. The first VDSS attains the lower bound on the communication complexity while providing fault tolerance. The second VDSS simultaneously achieves the lower bounds on both communication complexity and storage overhead while providing a balanced storage load, yielding a scheme that is optimal in both parameters.
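As background, here is a minimal sketch of plain Shamir secret sharing, the baseline such schemes build on; the paper's verification and aggregation layers are not reproduced, and the modulus is a toy assumption.

```python
# Minimal sketch (plain Shamir sharing; P is an illustrative Mersenne prime).
import random

P = 2**127 - 1

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at 0."""
    s = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        s = (s + yj * num * pow(den, -1, P)) % P
    return s

shares = share(31337, t=3, n=5)
assert reconstruct(shares[:3]) == 31337
```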
Georg Fuchsbauer, Michele Orrù
ePrint Report
Mimblewimble is a cryptocurrency protocol promising to overcome notorious blockchain scalability issues. To this day, one of the major factors hindering its wider adoption is the lack of non-interactive transactions, that is, payments where only the sender needs to be online. We analyze and fix a proposal by Yu, which, inspired by stealth addresses, introduces non-interactive transactions to Mimblewimble.
Ittai Abraham, Gilad Asharov
ePrint Report
We revisit Gradecast (Feldman and Micali, STOC’88) in synchrony and Reliable Broadcast (Bracha, Information and Computation’87) in asynchrony. For both tasks, we provide new protocols that have three desirable properties: (1) they have optimal resilience, tolerating $t < n/3$ malicious parties; (2) they are communication-efficient, with honest parties sending just $O(nL)$ bits for a dealer with a message of $L = \Omega(n)$ bits; and (3) they are unconditionally (statistically) secure, without relying on any computational or setup assumptions. To the best of our knowledge, no previous work obtains all three properties simultaneously.
Vidal Attias, Luigi Vigneri, Vassil Dimitrov
ePrint Report
The importance of efficient multi-exponentiation algorithms in a large spectrum of cryptographic applications continues to grow. Previous literature on the subject focuses exclusively on minimizing the number of modular multiplications. However, a small reduction in multiplicative complexity can easily be overshadowed by other figures of merit. In this article, we demonstrate that the most efficient algorithm for computing multi-exponentiations changes when considering execution time instead of the number of multiplications. We focus on two algorithms that perform best under the multiplication-count metric and show that some side operations affect their theoretical ranking. We provide this analysis on different hardware, such as Intel Core and ARM CPUs and the two latest generations of Raspberry Pis, to show how the chosen machine affects the execution time of multi-exponentiation.
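As one representative multi-exponentiation algorithm, the sketch below implements Shamir's trick for a double exponentiation $g^a h^b \bmod n$ (our own illustration; the abstract does not name the two benchmarked algorithms). The joint bit-scan shows where multiplication counts come from, and also hints at the side operations (bit extraction, branching) whose runtime cost the paper measures.

```python
# Minimal sketch (Shamir's trick; modulus and operands are toy assumptions).
def double_exp(g, h, a, b, n):
    gh = (g * h) % n                       # precompute the joint base g*h
    acc = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        acc = (acc * acc) % n              # one squaring per bit position
        bits = ((a >> i) & 1, (b >> i) & 1)
        if bits == (1, 1):
            acc = (acc * gh) % n           # at most one extra multiplication
        elif bits == (1, 0):
            acc = (acc * g) % n
        elif bits == (0, 1):
            acc = (acc * h) % n
    return acc

n = 2**127 - 1
assert double_exp(3, 5, 123456789, 987654321, n) == \
       pow(3, 123456789, n) * pow(5, 987654321, n) % n
```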
Pratyush Agarwal, Varun Narayanan, Shreya Pathak, Manoj Prabhakaran, Vinod M. Prabhakaran, Mohammad Ali Rehan
ePrint Report
Correlated pairs of random variables are a central concept in information-theoretically secure cryptography. Secure reductions between different correlations have been studied, and completeness results are known. Further, the complexity of such reductions is intimately connected with circuit complexity and efficiency of locally decodable codes. As such, making progress on these complexity questions faces strong barriers. Motivated by this, in this work, we study a restricted form of secure reductions --- namely, Secure Non-Interactive Reductions (SNIR) --- which is still closely related to the original problem, and establish several fundamental results and relevant techniques for it.

We uncover striking connections between SNIR and linear algebraic properties of correlations. Specifically, we define the spectrum of a correlation, and show that a target correlation has a SNIR to a source correlation only if the spectrum of the latter contains the entire spectrum of the former. We also establish a ``mirroring lemma'' that shows an unexpected symmetry between the two parties in a SNIR, when viewed through the lens of spectral analysis. We also use cryptographic insights and elementary linear algebraic analysis to fully characterize the role of common randomness as well as local randomness in SNIRs. We employ these results to resolve several fundamental questions about SNIRs, and to define future directions.
Elette Boyle, Niv Gilboa, Yuval Ishai, Ariel Nof
ePrint Report
We consider the efficiency of protocols for secure multiparty computation (MPC) with a dishonest majority. A popular approach for the design of such protocols is to employ preprocessing. Before the inputs are known, the parties generate correlated secret randomness, which is consumed by a fast and possibly ``information-theoretic'' online protocol.

A powerful technique for securing such protocols against malicious parties uses homomorphic MACs to authenticate the values produced by the online protocol. Compared to a baseline protocol, which is only secure against semi-honest parties, this involves a significant increase in the size of the correlated randomness, by a factor of up to a statistical security parameter. Different approaches for partially mitigating this extra storage cost come at the expense of increasing the online communication.

In this work we propose a new technique for protecting MPC with preprocessing against malicious parties. We show that for circuit evaluation protocols that satisfy mild security and structural requirements, which are met by many standard protocols with semi-honest security, the extra additive storage and online communication costs are both logarithmic in the circuit size. This applies to Boolean circuits and to arithmetic circuits over fields or rings, and to both information-theoretic and computationally secure protocols. Our protocol can be viewed as a sublinear information-theoretic variant of the celebrated ``GMW compiler'' that applies to natural protocols for MPC with preprocessing.

Our compiler makes a novel use of the techniques of Boneh et al. (Crypto 2019) for sublinear distributed zero knowledge, which were previously only used in the setting of honest-majority MPC.
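For intuition about the storage overhead discussed above, here is a minimal sketch of a standard SPDZ-style linearly homomorphic MAC over a prime field (a textbook technique referenced by the abstract, not this paper's compiler); in a real protocol the key $\alpha$ and the tags are secret-shared rather than held in the clear.

```python
# Minimal sketch (toy field and key are assumptions; no secret sharing here).
P = 2**61 - 1          # prime field modulus
ALPHA = 1234567        # global MAC key, secret-shared in a real protocol

def tag(x):
    return (ALPHA * x) % P

x, y = 42, 99
tx, ty = tag(x), tag(y)

# Linear homomorphism: the tag of 3x + y is derivable from tx, ty alone,
# so linear gates of the online protocol need no fresh authentication data.
z = (3 * x + y) % P
tz = (3 * tx + ty) % P
assert tz == tag(z)
```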
Hadi Mardani Kamali, Kimia Zamiri Azar, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
Logic locking is a design concealment mechanism for protecting the IPs integrated into modern System-on-Chip (SoC) architectures from a wide range of hardware security threats in the IC manufacturing supply chain. Logic locking primarily helps the designer protect IPs against reverse engineering, IP piracy, overproduction, and unauthorized activation. For more than a decade, research on this paradigm has been immense, investigating the applicability, feasibility, and efficacy of logic locking, including metrics to assess efficacy, the impact of locking at different levels of abstraction, threat model definitions, resiliency against physical attacks and tampering, and the application of machine learning. However, the security and strength of existing logic locking techniques have been constantly questioned by sophisticated logical and physical attacks that evolve in sophistication at the same rate as logic locking countermeasures. In this survey paper, we provide a comprehensive definition of the metrics, assumptions, and principles of logic locking, categorize the existing defenses and attacks so as to capture the most benefit from logic locking techniques for IP protection, and illuminate the need for and give direction to future research on this topic. The survey serves as a guide to quickly navigate and identify the state of the art that should be considered and investigated in further studies of logic locking techniques, helping IP vendors, SoC designers, and researchers stay informed of the principles, fundamentals, and properties of logic locking.
Andre Esser, Alexander May, Javier Verbel, Weiqiang Wen
ePrint Report
In a so-called partial key exposure attack one obtains some information about the secret key, e.g. via some side-channel leakage. This information might be a certain fraction of the secret key bits (erasure model) or some erroneous version of the secret key (error model). The goal is to recover the secret key from the leaked information.

There is a common belief that, as opposed to e.g. the RSA cryptosystem, most post-quantum cryptosystems are usually resistant against partial key exposure attacks. We strongly question this belief by constructing partial key exposure attacks on code-based, multivariate, and lattice-based schemes (BIKE, Rainbow and NTRU). Our attacks exploit the redundancy that modern PQ cryptosystems inherently use for efficiency reasons. The application and development of techniques from information set decoding plays a crucial role for achieving our results.

On the theoretical side, we show non-trivial information leakage bounds that allow for a polynomial time key recovery attack. As an example, for all schemes the knowledge of a constant fraction of the secret key bits suffices to reconstruct the full key in polynomial time.

Even if we no longer insist on polynomial time attacks, most of our attacks extend well and remain feasible up to large erasure and error rates. In the case of BIKE for example we obtain attack complexities around 60 bits when half of the secret key bits are erased, or a quarter of the secret key bits are faulty.

Our results show that even highly error-prone key leakage of modern PQ cryptosystems may lead to full secret key recoveries.
Hasan Al Shaikh, Mohammad Bin Monjil, Shigang Chen, Farimah Farahmandi, Navid Asadizanjani, Mark Tehranipoor, Fahim Rahman
ePrint Report
The expansive globalization of the semiconductor supply chain has introduced numerous untrusted entities into different stages of a device’s lifecycle, enabling them to compromise its security. To make matters worse, the increasing design complexity and aggressive time-to-market requirements of newer generations of integrated circuits can lead designers to unintentionally introduce security vulnerabilities, or verification engineers to fail to detect them early in the design lifecycle, often due to the limitations of traditional verification and testing methodologies. These overlooked or undetected vulnerabilities can be exploited by malicious entities in subsequent stages of the lifecycle through an ever-widening variety of hardware attacks. The ability to ascertain the provenance of these vulnerabilities, once they are unearthed at a later stage, becomes a pressing issue when security assurance across the whole lifecycle must be ensured and generationally improved to thwart emerging attacks.

We posit that if there is a malicious or unintentional breach of security policies of a device, it will be reflected in the form of anomalies in the data collected through traditional design, verification, validation, and testing activities throughout the lifecycle. With that, a digital simulacrum of a device’s lifecycle, called a digital twin (DT), can be formed by the data gathered from different stages to secure the lifecycle of the device. The DT can analyze the collected data through its constituent AI and data analytics algorithms to trace the origin of a detected hardware attack or vulnerability to the associated stage of the lifecycle. We refer to this functionality of the DT as Backward Trust Analysis.

We also introduce the notion of Forward Trust Analysis, which refers to the scalability and adaptability of the DT to unforeseen threats as they emerge. In this paper, we put forward a realization of the intertwined relationships of security vulnerabilities with data available from the silicon lifecycle and formulate the components of an AI-driven DT framework. The proposed DT framework leverages these relationships to achieve the aforementioned security objectives through causality analysis, and thus accomplishes end-to-end security-aware management of the entire semiconductor lifecycle. We offer a perspective on how the limitations of existing ad-hoc security solutions can be overcome by the data-oriented analysis that underpins our approach. With several threat and attack scenarios, we demonstrate how advanced modeling techniques can perform relational learning to identify such attacks. Finally, we provide potential future research avenues and challenges for the realization of the digital twin framework to enable secure semiconductor lifecycle management.
Ran Cohen, Jack Doerner, Yashvanth Kondi, abhi shelat
ePrint Report
We introduce a notion of round-robin secure sampling that captures several protocols in the literature, such as the "powers-of-tau" setup protocol for pairing-based polynomial commitments and zk-SNARKs, and certain verifiable mixnets.

Due to their round-robin structure, protocols of this class inherently require $n$ sequential broadcast rounds, where $n$ is the number of participants.

We describe how to compile them generically into protocols that require only $O(\sqrt{n})$ broadcast rounds. Our compiled protocols guarantee output delivery against any dishonest majority. This stands in contrast to prior techniques, which require $\Omega(n)$ sequential broadcasts in most cases (and sometimes many more). Our compiled protocols permit a certain amount of adversarial bias in the output, as all sampling protocols with guaranteed output must, due to Cleve's impossibility result (STOC'86). We show that in the context of the aforementioned applications, this bias is harmless.
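The round-robin structure being compiled can be illustrated with a toy "powers-of-tau" update chain. In the sketch below, arithmetic modulo a prime stands in for a pairing group, and the per-party proofs of correct update are omitted, so this shows only the shape of the protocol, not a usable ceremony.

```python
# Minimal sketch (toy group Z_p^*; p, g, d, and party count are assumptions).
import random

p = 2**127 - 1                    # toy prime; real ceremonies use pairing groups
g = 3
d = 4                             # number of powers g^tau, ..., g^(tau^d)

transcript = [g] * d              # start from tau = 1: g^(1^j) = g

for party in range(5):            # each party in turn folds in a secret s_i
    s = random.randrange(2, p - 1)
    # element j holds g^(tau^(j+1)); raising it to s^(j+1) updates tau -> tau*s
    transcript = [pow(elem, pow(s, j + 1, p - 1), p)
                  for j, elem in enumerate(transcript)]
    # each party needs the previous party's output: n sequential rounds

# transcript[j] == g^(tau^(j+1)) for tau = product of all secrets s_i
```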
Ueli Maurer, Christopher Portmann, Guilherme Rito
ePrint Report
This paper introduces a new type of public-key encryption scheme, called Multi-Designated Receiver Signed Public Key Encryption (MDRS-PKE), which allows a sender to select a set of designated receivers and both encrypt and sign a message that only these receivers will be able to read and authenticate (confidentiality and authenticity). An MDRS-PKE scheme provides several additional security properties which allow for a fundamentally new type of communication not considered before. Namely, it satisfies consistency (a dishonest sender cannot make different receivers receive different messages), off-the-record (a dishonest receiver cannot convince a third party of what message was sent, e.g. by selling their secret key, because dishonest receivers have the ability to forge signatures), and anonymity (parties that are not in the set of designated receivers cannot identify who the sender and designated receivers are).

We give a construction of an MDRS-PKE scheme from standard assumptions. At the core of our construction lies yet another new type of public-key encryption scheme, which is of independent interest: Public Key Encryption for Broadcast (PKEBC), which provides all the security guarantees of MDRS-PKE schemes except authenticity.

We note that MDRS-PKE schemes give strictly more guarantees than Multi-Designated Verifier Signatures (MDVS) schemes with privacy of identities. This in particular means that our MDRS-PKE construction yields the first MDVS scheme with privacy of identities from standard assumptions. The only prior construction of such schemes was based on Verifiable Functional Encryption for general circuits (Damgård et al., TCC '20).
Diana Ghinea, Vipul Goyal, Chen-Da Liu-Zhang
ePrint Report
Byzantine agreement is a fundamental primitive in cryptography and distributed computing, and minimizing its round complexity is of paramount importance. It has long been known that any randomized $r$-round protocol must fail with probability at least $(c\cdot r)^{-r}$, for some constant $c$, when the number of corruptions is linear in the number of parties, $t = \Theta(n)$. On the other hand, current protocols fail with probability at least $2^{-r}$. Whether protocols can match this lower bound on the failure probability remained unknown.

In this work, we resolve this long-standing open question. We present a protocol that matches the lower bound up to constant factors. Our results hold under a (strongly rushing) adaptive adversary that can corrupt up to $t = (1-\epsilon)n/2$ parties, and our protocols use a public-key infrastructure and a trusted setup for unique threshold signatures. This is the first protocol that decreases the failure probability (overall) by a 'super-constant' factor per round.
Charles Momin, Gaëtan Cassiers, François-Xavier Standaert
ePrint Report
We describe FPGA implementations of the Spook candidate to the NIST lightweight cryptography competition in two flavors. First, unprotected implementations that exhibit excellent throughput and energy consumption for the area target specified by the NIST benchmarking initiative. Second, protected implementations that leverage the leveled-implementation concept enabled by the Spook design and confirm the significant performance gains it brings.
Kostas Papagiannopoulos, Ognjen Glamocanin, Melissa Azouaoui, Dorian Ros, Francesco Regazzoni, Mirjana Stojilovic
ePrint Report
Side-channel attacks exploit physical observables originating from a cryptographic device in order to extract its secrets. Many practically relevant advances in the field of side-channel analysis relate to security evaluations of cryptographic functions and devices. Accordingly, many metrics have been adopted or defined to express and quantify side-channel security. These metrics can relate to one another, but can also conflict in terms of effectiveness, assumptions, and security goals. In this work, we review the most commonly used metrics in the field of side-channel analysis. We provide a self-contained presentation of each metric, along with a discussion of its limitations. We demonstrate the metrics in practice on relevant implementations of the Advanced Encryption Standard (AES), and make a software implementation of the presented metrics available to the community as open source. Going beyond a survey of the current status of metrics, this work allows researchers and practitioners to produce well-informed security evaluations through a better understanding of the supporting and summarizing metrics.
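As a concrete example of such a metric, the sketch below computes Mangard's signal-to-noise ratio on simulated traces. This is one common choice and our own simulation; the paper's open-source package itself is not reproduced here.

```python
# Minimal sketch (simulated traces; leakage position and noise are assumptions).
import numpy as np

def snr(traces, labels):
    """traces: (n_traces, n_samples); labels: intermediate value per trace."""
    classes = np.unique(labels)
    means = np.array([traces[labels == c].mean(axis=0) for c in classes])
    variances = np.array([traces[labels == c].var(axis=0) for c in classes])
    # variance of class means (signal) over mean within-class variance (noise)
    return means.var(axis=0) / variances.mean(axis=0)

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=2000)
signal = np.outer(labels, [0.0, 1.0, 0.0])            # leakage only at sample 1
traces = signal + rng.normal(0, 1.0, size=(2000, 3))
print(snr(traces, labels))                            # peaks at sample index 1
```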