International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

18 September 2023

Théophile Wallez, Jonathan Protzenko, Karthikeyan Bhargavan
ePrint Report
Data formats used for cryptographic inputs have historically been the source of many attacks on cryptographic protocols, but their security guarantees remain poorly studied. One reason is that, due to their low-level nature, formats often fall outside of the security model. Another reason is that studying all of the uses of all of the formats within one protocol is too difficult to do by hand, and requires a comprehensive, automated framework.

We propose a new framework, “Comparse”, that specifically tackles the security analysis of data formats in cryptographic protocols. Comparse forces the protocol analyst to systematically think about data formats, formalize them precisely, and show that they enjoy strong enough properties to guarantee the security of the protocol.

Our methodology is developed in three steps. First, we introduce a high-level cryptographic API that lifts the traditional game-based cryptographic assumptions over bitstrings to work over high-level messages, using formats. This allows us to derive the conditions that secure formats must obey in order for their usage to be secure. Second, equipped with these security criteria, we implement a framework for specifying and verifying secure formats in the F* proof assistant. Our approach is based on format combinators, which enable compositional and modular proofs. In many cases, we relieve the user of having to write those combinators by hand, using compile-time term synthesis via Meta-F*. Finally, we show that our F* implementation can replace the symbolic notion of message formats previously implemented in the DY* protocol analysis framework. Our newer, bit-level precise accounting of formats closes the modeling gap, and allows DY* to reason about concrete messages and identify protocol flaws that it was previously oblivious to.
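To illustrate the combinator idea, here is a minimal Python sketch (our own toy, not the actual F*/Meta-F* API of Comparse): a format is a serializer/parser pair, and combinators such as the pair composition below preserve the round-trip and non-ambiguity properties that the security analysis relies on.

    # Illustrative sketch of format combinators: a format is a
    # serializer/parser pair, and combinators compose formats so that
    # round-trip properties (parse . serialize == id) are preserved.
    from dataclasses import dataclass
    from typing import Callable, Optional, Tuple, Any

    @dataclass
    class Format:
        serialize: Callable[[Any], bytes]
        # parse returns (value, bytes consumed) or None on failure
        parse: Callable[[bytes], Optional[Tuple[Any, int]]]

    def u16_prefixed() -> Format:
        """Opaque bytes with a 2-byte big-endian length prefix (TLS-style)."""
        def ser(v: bytes) -> bytes:
            assert len(v) < 2**16
            return len(v).to_bytes(2, "big") + v
        def par(b: bytes):
            if len(b) < 2:
                return None
            n = int.from_bytes(b[:2], "big")
            if len(b) < 2 + n:
                return None
            return b[2:2 + n], 2 + n
        return Format(ser, par)

    def pair(f: Format, g: Format) -> Format:
        """Sequential composition: serialize f then g; parsing stays
        unambiguous because each component knows its own length."""
        def ser(v):
            x, y = v
            return f.serialize(x) + g.serialize(y)
        def par(b: bytes):
            r1 = f.parse(b)
            if r1 is None:
                return None
            x, n1 = r1
            r2 = g.parse(b[n1:])
            if r2 is None:
                return None
            y, n2 = r2
            return (x, y), n1 + n2
        return Format(ser, par)

    # Round-trip check (this non-ambiguity is what lets bitstring-level
    # assumptions be lifted to message-level ones):
    fmt = pair(u16_prefixed(), u16_prefixed())
    msg = (b"client_hello", b"extensions")
    enc = fmt.serialize(msg)
    assert fmt.parse(enc) == (msg, len(enc))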

We evaluate Comparse over several classic and real-world protocols. Our largest case studies use Comparse to formalize and provide security proofs for the formats used in TLS 1.3, as well as upcoming protocols like MLS and Compact TLS 1.3 (cTLS), providing confidence and feedback in the design of these protocols.
Dario Fiore, Dimitris Kolonelos, Paola de Perthuis
ePrint Report
Registration-Based Encryption (RBE) [Garg et al. TCC'18] is a public-key encryption mechanism in which users generate their own public and secret keys, and register their public keys with a central authority called the key curator. Similarly to Identity-Based Encryption (IBE), in RBE users can encrypt by only knowing the public parameters and the public identity of the recipient. Unlike IBE, though, RBE does not suffer the key escrow problem — one of the main obstacles of IBE's adoption in practice — since the key curator holds no secret.

In this work, we put forward a new methodology to construct RBE schemes that support large user identities (i.e., arbitrary strings). Our main result is the first efficient pairing-based RBE for large identities. Prior to our work, the most efficient RBE was that of [Glaeser et al. ePrint'22], which only supports small identities. The only known RBE schemes with large identities are realized either through expensive non-black-box techniques (ciphertexts of 3.6 TB for 1000 users), or via a specialized lattice-based construction [Döttling et al. Eurocrypt'23] (ciphertexts of 2.4 GB). By unlocking the use of pairings for RBE with large identity space, we enable a further improvement of three orders of magnitude, as our ciphertexts for a system with 1000 users are $1.7$ MB.

The core technique of our approach is a novel use of cuckoo hashing in cryptography that can be of independent interest. We give two main applications. The first one is the aforementioned RBE methodology, where we use cuckoo hashing to compile an RBE with small identities into one for large identities. The second one is a way to convert any vector commitment scheme into a key-value map commitment. For instance, this leads to the first algebraic pairing-based key-value map commitments.
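As a reminder of the underlying data structure, here is a minimal Python sketch of classic two-function cuckoo hashing (table size, hash functions, and keys are illustrative; the paper's compiler would place small-identity RBE instances rather than raw strings at these slots):

    # Minimal two-function cuckoo hash table: every key resides at one of
    # two positions, h(1, key) or h(2, key), so lookups probe two slots.
    import hashlib

    SIZE = 64
    def h(seed: int, key: str) -> int:
        d = hashlib.sha256(f"{seed}:{key}".encode()).digest()
        return int.from_bytes(d[:4], "big") % SIZE

    table = [None] * SIZE

    def insert(key: str, max_kicks: int = 100):
        pos = h(1, key)
        for _ in range(max_kicks):
            if table[pos] is None:
                table[pos] = key
                return
            table[pos], key = key, table[pos]   # evict the occupant
            # the evicted key moves to its *other* position
            pos = h(2, key) if pos == h(1, key) else h(1, key)
        raise RuntimeError("cycle: rehash with new functions")

    def lookup(key: str) -> bool:
        return table[h(1, key)] == key or table[h(2, key)] == key

    insert("alice@example.com"); insert("bob@example.com")
    assert lookup("alice@example.com") and not lookup("carol@example.com")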
Min Zhang, Yu Chen, Chuanzhou Yao, Zhichao Wang
ePrint Report
Sigma protocols are among the most common and efficient zero-knowledge proofs (ZKPs). Over the decades, a large number of efficient Sigma protocols have been proposed, yet few works pay attention to the common design principles. In this work, we propose a generic framework of Sigma protocols for algebraic statements from verifiable secret sharing (VSS) schemes. Our framework provides a general and unified approach to understanding Sigma protocols for proving knowledge of openings of algebraic commitments. It not only neatly explains the classic protocols such as the Schnorr, Guillou–Quisquater and Okamoto protocols, but also leads to new Sigma protocols that were not previously known.
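For concreteness, the Schnorr protocol, one of the classic examples the framework explains, looks as follows (a Python sketch over a toy group; the parameters are illustrative and far too small to be secure):

    # Schnorr's Sigma protocol for knowledge of x with y = g^x mod p:
    # commit a = g^r, challenge c, response z = r + c*x mod q;
    # the verifier accepts iff g^z == a * y^c (mod p).
    import secrets

    # Toy parameters: p = 2q + 1 with q prime (far too small to be secure).
    p, q, g = 23, 11, 4          # g generates the order-11 subgroup of Z_23*

    x = secrets.randbelow(q)      # prover's witness
    y = pow(g, x, p)              # public statement

    r = secrets.randbelow(q)      # prover: commitment
    a = pow(g, r, p)
    c = secrets.randbelow(q)      # verifier: random challenge
    z = (r + c * x) % q           # prover: response

    assert pow(g, z, p) == (a * pow(y, c, p)) % p   # verifier's check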

Furthermore, we show an application of our framework in designing ZKPs for composite statements, which contain both algebraic and non-algebraic statements. We give a generic construction of ZKPs for composite statements that seamlessly combines Sigma protocols from VSS with ZKPs following the MPC-in-the-head paradigm, via a witness-sharing reuse technique. Our construction has the advantages of requiring no trusted setup, being public-coin, and having a fast prover runtime. By instantiating our construction using Ligero++ (Bhadauria et al., CCS 2020), we obtain a new ZK protocol for composite statements, which achieves a new balance between running time and proof size, thus resolving the open problem left by Backes et al. (PKC 2019). Concretely, the proof size is polylogarithmic in the circuit size, and the number of public-key operations required by both the prover and the verifier is independent of the circuit size.
Yongcheng Song, Jiang Zhang, Xinyi Huang, Wei Wu
ePrint Report
In this paper, we initiate the study of the Rank Decoding (RD) problem and LRPC codes with blockwise structures in rank-based cryptosystems. First, we introduce blockwise errors ($\ell$-errors), where each error consists of $\ell$ blocks of coordinates with disjoint supports, and define the blockwise RD ($\ell$-RD) problem as a natural generalization of the RD problem whose solutions are $\ell$-errors (note that the standard RD problem is actually the special case $\ell=1$). We adapt the typical attacks on the RD problem to the $\ell$-RD problem, and find that the blockwise structure does not significantly ease the problem: the $\ell$-RD problem is still exponentially hard for appropriate choices of $\ell>1$. Second, we introduce blockwise LRPC ($\ell$-LRPC) codes as generalizations of standard LRPC codes whose parity-check matrices can be divided into $\ell$ sub-matrices with disjoint supports, i.e., the intersection of the two subspaces generated by the entries of any two sub-matrices is trivial, and investigate decoding algorithms for $\ell$-errors. We find that the gain of using $\ell$-errors in decoding capacity outweighs the complexity loss in solving the $\ell$-RD problem, which makes it possible to design more efficient rank-based cryptosystems with flexible choices of parameters.

As an application, we show that the two rank-based cryptosystems submitted to the NIST PQC competition, namely, RQC and ROLLO, can be greatly improved by using the ideal variants of the $\ell$-RD problem and $\ell$-LRPC codes. Concretely, for 128-bit security, our RQC has total public key and ciphertext sizes of 2.5 KB, which is not only about 50% more compact than the original RQC, but also smaller than the NIST Round 4 code-based submissions HQC, BIKE, and Classic McEliece.
Shichen Wu, Zhiying Song, Puwen Wei, Peng Tang, Quan Yuan
ePrint Report
The proof of stake (PoS) mechanism, which allows stakeholders to issue a block with probability proportional to their wealth instead of their computational power, is believed to be an energy-efficient alternative to proof of work (PoW). The privacy concern of PoS, however, is more subtle than that of PoW. Recent research has shown that current anonymous PoS (APoS) protocols do not suffice to protect the stakeholder's identity and stake, and that the loss of privacy is theoretically inherent for any (deterministic) PoS protocol that provides liveness guarantees. In this paper, we consider the concrete stake privacy of PoS, taking into account the limitations of attacks in practice. To quantify the concrete stake privacy of PoS, we introduce the notion of $(T, \delta, \epsilon)$-privacy. Our analysis of $(T, \delta, \epsilon)$-privacy on Cardano shows to what extent stake privacy can be broken in practice, which also suggests possible parameter settings for rational $(T, \delta, \epsilon)$-privacy for PoS in the real world. The data analysis of Cardano demonstrates that the $(T, \delta, \epsilon)$-privacy of current APoS is not satisfactory, mainly due to the deterministic leader election predicate in current PoS constructions. Inspired by differential privacy techniques, we propose an efficient non-deterministic leader election predicate, which can be used as a plugin to APoS protocols to protect stakes against frequency analysis. Based on our leader election predicate, we construct anonymous PoS with noise (APoS-N), which offers better $(T, \delta, \epsilon)$-privacy than state-of-the-art works. Furthermore, we propose a method of proving the basic security properties of PoS in the noise setting, which can minimize the impact of the noise on the security threshold. This method can also be applied to the setting of PoS with variable stakes, which is of independent interest.
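To convey the flavor of a non-deterministic leader election predicate, here is a toy Python sketch of our own (not the actual APoS-N predicate): the effective stake is perturbed with fresh noise in each slot, so the observed election frequency no longer pins down the exact stake.

    # Toy illustration: the election probability is derived from the stake
    # *plus fresh per-slot noise*, so the long-run election frequency is a
    # noisy functional of the stake (cf. differential privacy), hampering
    # frequency analysis.
    import random

    def election_probability(alpha: float, f: float = 0.05) -> float:
        # Praos-style aggregation function: probability that a party with
        # relative stake alpha is elected leader in a given slot.
        return 1 - (1 - f) ** alpha

    def is_leader(stake: float, total: float, sigma: float = 0.01) -> bool:
        alpha = stake / total
        noisy_alpha = max(0.0, alpha + random.gauss(0.0, sigma))  # fresh noise
        return random.random() < election_probability(noisy_alpha)

    # A party with 2% stake, observed over many slots:
    wins = sum(is_leader(2.0, 100.0) for _ in range(10_000))
    print(f"elected in {wins} of 10000 slots")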
David Balbás, Daniel Collins, Phillip Gajland
ePrint Report
Developing end-to-end encrypted instant messaging solutions for group conversations is an ongoing challenge that has garnered significant attention from practitioners and the cryptographic community alike. Notably, industry-leading messaging apps such as WhatsApp and Signal Messenger have adopted the Sender Keys protocol, where each group member shares their own symmetric encryption key with others. Despite its widespread adoption, Sender Keys has never been formally modelled in the cryptographic literature, raising the following natural question:

What can be proven about the security of the Sender Keys protocol, and how can we practically mitigate its shortcomings?

In addressing this question, we first introduce a novel security model to suit protocols like Sender Keys, deviating from conventional group key agreement-based abstractions. Our framework allows for a natural integration of two-party messaging within group messaging sessions, which may be of independent interest. Leveraging this framework, we conduct the first formal analysis of the Sender Keys protocol and prove that it satisfies a weak notion of security. Towards improving security, we propose a series of efficient modifications to Sender Keys without imposing significant performance overhead. We combine these refinements into a new protocol that we call Sender Keys+, which may be of interest both in theory and practice.
Dmitrii Koshelev
ePrint Report
This article aims to speed up (the precomputation stage of) multi-scalar multiplication (MSM) on ordinary elliptic curves of $j$-invariant $0$ with respect to specific "independent" (a.k.a. "basis") points. For this purpose, so-called Mordell--Weil lattices (up to rank $8$) with large kissing numbers (up to $240$) are employed. In a nutshell, the new approach consists of obtaining, more efficiently, a considerable number (up to $240$) of certain elementary linear combinations of the "independent" points. By scaling the point (re)generation process, it is thus possible to get a significant performance gain. As usual, the resulting curve points can then be used in the main stage of an MSM algorithm to avoid repeating computations. Seemingly, this is the first use of lattices with large kissing numbers in cryptography, although such lattices have already found numerous applications in other mathematical domains. Without exaggeration, the results of this article can strongly affect the performance of today's real-world elliptic curve cryptography, since MSM is a widespread primitive (often the unique bottleneck) in modern protocols. Moreover, the new (re)generation technique is amenable to further improvement by considering Mordell--Weil lattices with even greater kissing numbers.
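To see why precomputing linear combinations of the basis points pays off, here is a toy Python sketch of the related Straus/Shamir precomputation trick (integers mod $n$ stand in for curve points written additively; the article's technique instead draws the combinations from Mordell--Weil lattice vectors, which we do not reproduce):

    # Toy sketch: MSM with precomputed linear combinations of the basis
    # points, using integers mod n as a stand-in for an elliptic-curve group.
    n = 2**61 - 1                      # toy group order

    def msm_with_precompute(scalars, points, bits=64):
        k = len(points)
        # Precompute every {0,1}-combination of the basis points once...
        table = [0] * (1 << k)
        for mask in range(1, 1 << k):
            low = mask & -mask
            table[mask] = (table[mask ^ low] + points[low.bit_length() - 1]) % n
        # ...then a single double-and-add pass handles all scalars jointly.
        acc = 0
        for i in reversed(range(bits)):
            acc = (2 * acc) % n
            mask = sum(((s >> i) & 1) << j for j, s in enumerate(scalars))
            acc = (acc + table[mask]) % n
        return acc

    pts = [123456789, 987654321, 555555555]
    scs = [0xDEADBEEF, 0xCAFEBABE, 0x12345678]
    assert msm_with_precompute(scs, pts) == sum(s * p for s, p in zip(scs, pts)) % n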
Ziqi Zhu, Kai Zhang, Junqing Gong, Haifeng Qian
ePrint Report
This paper presents the first generic black-box construction of registered attribute-based encryption (Reg-ABE) via predicate encoding [TCC'14]. The generic scheme is based on the $k$-Lin assumption in the prime-order bilinear group and implies the following concrete schemes that improve on existing results:

- the first Reg-ABE scheme for span programs in the prime-order group; prior work uses the composite-order group;

- the first Reg-ABE scheme for the zero inner-product predicate from the $k$-Lin assumption; prior work relies on the generic group model (GGM);

- the first Reg-ABE scheme for arithmetic branching programs (ABPs), which had not been achieved previously.

Technically, we follow the blueprint of Hohenberger et al. [EUROCRYPT'23] but start from the prime-order dual-system ABE of Chen et al. [EUROCRYPT'15], which transforms a predicate encoding into an ABE. The proof follows the dual-system method in the context of Reg-ABE: we conceptually treat helper keys as secret keys; furthermore, malicious public keys are handled via the pairing-based quasi-adaptive non-interactive zero-knowledge argument of Kiltz and Wee [EUROCRYPT'15].
Charles Gouert, Dimitris Mouris, Nektarios Georgios Tsoutsos
ePrint Report
As cloud computing continues to gain widespread adoption, safeguarding the confidentiality of data entrusted to third-party cloud service providers becomes a critical concern. While traditional encryption methods offer protection for data at rest and in transit, they fall short where it matters most: during data processing. To address this limitation, we present HELM, a framework for privacy-preserving data processing using homomorphic encryption. HELM automatically transforms arbitrary programs expressed in a Hardware Description Language (HDL), such as Verilog, into equivalent homomorphic circuits, which can then be efficiently evaluated over encrypted inputs. HELM features two modes of encrypted evaluation: a) a gate mode that consists of standard Boolean gates, and b) a lookup table mode, which significantly reduces the size of the circuit by combining multiple gates into lookup tables. Finally, HELM introduces a scheduler that enables embarrassingly parallel processing in the encrypted domain. We evaluate HELM on the ISCAS'85 and ISCAS'89 benchmark suites as well as real-world applications such as AES and image filtering. Our results outperform prior works by up to $65\times$.
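A rough Python sketch of gate-mode evaluation (the Ct/enc/gate API and the placeholder cipher are ours; HELM itself builds on a TFHE-style homomorphic library): a Boolean netlist is evaluated gate by gate over encrypted wires.

    # Sketch of "gate mode": evaluate a Boolean netlist gate-by-gate over
    # encrypted bits. The one-bit XOR "cipher" below is a placeholder, NOT
    # a secure scheme; a real backend would be a TFHE-style library.
    import secrets

    class Ct:                                   # "ciphertext" wrapper
        def __init__(self, m): self.m = m

    KEY = secrets.randbits(1)
    def enc(bit): return Ct(bit ^ KEY)          # placeholder encryption
    def dec(ct):  return ct.m ^ KEY

    # homomorphic gates (placeholders with the right interfaces; h_and
    # "cheats" by using KEY, which a real scheme of course must not do)
    def h_and(a, b): return Ct(((a.m ^ KEY) & (b.m ^ KEY)) ^ KEY)
    def h_xor(a, b): return Ct(a.m ^ b.m ^ KEY)

    GATES = {"AND": h_and, "XOR": h_xor}

    def eval_netlist(netlist, inputs):
        """netlist: list of (out_wire, gate, in1, in2) in topological order."""
        wires = dict(inputs)                     # wire name -> ciphertext
        for out, gate, a, b in netlist:
            wires[out] = GATES[gate](wires[a], wires[b])
        return wires

    # half adder: s = x XOR y, c = x AND y
    netlist = [("s", "XOR", "x", "y"), ("c", "AND", "x", "y")]
    w = eval_netlist(netlist, {"x": enc(1), "y": enc(1)})
    assert dec(w["s"]) == 0 and dec(w["c"]) == 1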
Jack Doerner, Yashvanth Kondi, Leah Namisa Rosenbloom
ePrint Report
We investigate the conditions under which straight-line extractable NIZKs in the random oracle model (i.e. without a CRS) permit multiparty realizations that are black-box in the same random oracle. We show that even in the semi-honest setting, any MPC protocol to compute such a NIZK cannot make black-box use of the random oracle or a hash function instantiating it if security against all-but-one corruptions is desired, unless the size of the NIZK grows with the number of parties. This presents a fundamental barrier to constructing efficient protocols to securely distribute the computation of NIZKs (and signatures) based on MPC-in-the-head, PCPs/IOPs, and sigma protocols compiled with transformations due to Fischlin, Pass, or Unruh.

When the adversary is restricted to corrupting only a constant fraction of parties, we give a positive result by means of a tailored construction, demonstrating that our impossibility does not extend to weaker corruption models in general.
Jiaxin Pan, Benedikt Wagner, Runzhi Zeng
ePrint Report
We give a tighter security proof for authenticated key exchange (AKE) protocols that are generically constructed from key encapsulation mechanisms (KEMs) in the quantum random oracle model (QROM). Previous works (Hövelmanns et al., PKC 2020) gave reductions for such a KEM-based AKE protocol in the QROM to the underlying primitives with square-root loss and a security loss in the number of users and total sessions. Our proof is much tighter and does not have square-root loss. Namely, it only loses a factor depending on the number of users, not on the number of sessions.

Our main enabler is a new variant of lossy encryption, which we call parameter lossy encryption. In this variant, there are not only lossy public keys but also lossy system parameters. This allows us to embed a computational assumption into the system parameters, while the lossy public keys are statistically close to normal public keys. Combining this with the Fujisaki-Okamoto transformation, we obtain the first tightly IND-CCA secure KEM in the QROM in a multi-user (without corruption), multi-challenge setting.

Finally, we show that a multi-user, multi-challenge KEM implies a square-root-tight and session-tight AKE protocol in the QROM. By implementing the parameter lossy encryption tightly from lattices, we obtain the first square-root-tight and session-tight AKE from lattices in the QROM.

15 September 2023

Peter Campbell
ePrint Report
The National Cyber Security Centre (NCSC) is the government organisation responsible for mitigating cyber security risks to the UK. Our work securing UK public- and private-sector networks involves (amongst many other security measures) research into cryptographic design, primarily to protect data requiring long-term security or data for which we have a particularly low tolerance of risk to its transmission and storage. Our algorithms prioritise robustness more highly, relative to other important considerations such as performance, than other designs do.

We present GLEVIAN and VIGORNIAN: two AEAD modes with proofs of beyond-birthday security, security against nonce misuse, and against the release of unverified plaintext – both of the latter in strong notions of these security properties. We discuss our hierarchy of requirements for AEAD modes, and the rationale for the design choices made.

GLEVIAN and VIGORNIAN demonstrate we can achieve significantly improved robustness over GCM for use cases where some performance degradation is acceptable. We are not aware of other designs offering exactly the security properties of GLEVIAN and VIGORNIAN, and are publishing our designs to support the research that will inform the recently announced effort by NIST to standardise new modes of operation. We believe our work could be of interest to those with use cases similar to ours, and we offer suggestions for future research that might build on the work in this paper.
Benny Applebaum, Oded Nir
ePrint Report
A major open problem in information-theoretic cryptography is to obtain a super-polynomial lower bound for the communication complexity of basic cryptographic tasks. This question is wide open even for very powerful non-interactive primitives such as private information retrieval (or locally-decodable codes), general secret sharing schemes, conditional disclosure of secrets, and fully-decomposable randomized encoding (or garbling schemes). In fact, for all these primitives we do not even have super-linear lower bounds. Furthermore, it is unknown how to relate these questions to each other or to other complexity-theoretic questions.

In this note, we relate all these questions to the classical topic of query/space trade-offs, lifted to the setting of interactive proof systems. Specifically, we consider the following Advisor-Verifier-Prover (AVP) game: First, a function $f$ is given to the advisor, who computes an advice string $a$. Next, an input $x$ is given to the verifier and to the prover, who claims that $f(x)=1$. The verifier should check this claim via a single round of interaction based on the private advice $a$ and without having any additional information on $f$. We focus on the case where the prover is laconic and communicates only a constant number of bits, and mostly restrict attention to the simplest, purely information-theoretic setting, where all parties are allowed to be computationally unbounded. The goal is to minimize the total communication complexity, which is dominated by the length of the advice plus the length of the verifier's query.
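As a toy baseline (our own illustration, not from the paper), the trivial AVP protocol takes the full truth table as advice and an empty verifier query, for total communication $2^n$; the question is whether a super-polynomially better trade-off exists.

    # Trivial AVP protocol: advice = full truth table of f, the verifier
    # queries nothing and checks f(x) = 1 itself. Total communication
    # (advice length + query length) is 2^n bits.
    from itertools import product

    n = 4
    f = lambda x: sum(x) % 2 == 1                 # some Boolean function

    def advisor(f):
        return [f(x) for x in product((0, 1), repeat=n)]   # 2^n-bit advice

    def index(x):                                 # position of x in the table
        return int("".join(map(str, x)), 2)

    def verifier(advice, x, prover_bit):
        # laconic prover: a single (here, ignored) claim bit
        return advice[index(x)]

    a = advisor(f)
    x = (1, 0, 1, 1)
    assert verifier(a, x, prover_bit=1) == f(x)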

As our main result, we show that a super-polynomial lower bound for AVPs implies a super-polynomial lower bound for a wide range of information-theoretic cryptographic tasks. In particular, we present a communication-efficient transformation from any of the above primitives into an AVP protocol. Interestingly, each primitive induces some additional property over the resulting protocol. Thus AVP games form a new common yardstick that highlights the differences between all the above primitives.

Equipped with this view, we revisit the existing (somewhat weak) lower bounds for the above primitives, and show that many of these lower bounds can be unified by proving a single counting-based lower bound on the communication of AVPs, whereas some techniques are inherently limited to specific domains. The latter is shown by proving the first polynomial separations between the complexity of secret-sharing schemes and conditional disclosure of secrets and between the complexity of randomized encodings and conditional disclosure of secrets.
Jan Lauinger, Jens Ernstberger, Andreas Finkenzeller, Sebastian Steinhorst
ePrint Report
TLS oracles guard the transition of web data from an authenticated session between a client and a server to a data representation that any third party can verify. Current TLS oracles resolve weak security assumptions with cryptographic algorithms that provide strong security guarantees (e.g., maliciously secure two-party computation). However, we notice that the conditions and characteristics of TLS 1.3 allow for reconsidering these security assumptions. Our work shows that the deployment of semi-honest two-party computation is feasible with a single exception, while retaining equivalent security properties. Further, we introduce a new parity checksum construction to decouple the integrity verification over AEAD stream ciphers into dedicated proof systems and improve end-to-end performance benchmarks. We achieve a selective and privacy-preserving data opening on 16 kB of TLS 1.3 data in 2.11 seconds and open $10\times$ more data compared to related approaches. Thus, our work sets new boundaries for privacy-preserving TLS 1.3 data proofs.
Nir Bitansky, Tomer Solomon
ePrint Report
Homomorphic encryption is a central object in modern cryptography, with far-reaching applications. Constructions supporting homomorphic evaluation of arbitrary Boolean circuits have been known for over a decade, based on standard lattice assumptions. However, these constructions are leveled, meaning that they only support circuits up to some a-priori bounded depth. These leveled constructions can be bootstrapped into fully homomorphic ones, but this requires additional circular security assumptions, which are construction-dependent, and where reductions to standard lattice assumptions are no longer known. Alternative constructions are known based on indistinguishability obfuscation, which has been recently constructed under standard assumptions. However, this alternative requires subexponential hardness of the underlying primitives.

We prove a new bootstrapping theorem relying on functional encryption, which is known based on standard polynomial hardness assumptions. As a result we obtain the first fully homomorphic encryption scheme that avoids both circular security assumptions and super-polynomial hardness assumptions. The construction is secure against uniform adversaries, and can be made non-uniformly secure, under a widely-believed worst-case complexity assumption (essentially that non-uniformity does not allow arbitrary polynomial speedup).

At the heart of the construction is a new proof technique based on cryptographic puzzles. Unlike most cryptographic reductions, our security reduction does not fully treat the adversary as a black box, but rather makes explicit use of its running time (or circuit size).
Karim M. Abdellatif, Olivier Hériveaux
ePrint Report
DeepCover is a secure authenticator circuit family developed by Analog Devices. It was designed to provide cryptographic functions, true random number generation, and secure EEPROM storage. DS28C36 is a member of the DeepCover family that is widely used in secure boot and secure download for IoT. It has recently been deployed in the Coldcard Mk4 hardware wallet as a second secure element to enhance its security. In this paper, we present, for the first time, a detailed evaluation of the DS28C36 secure EEPROM against Laser Fault Injection (LFI). In the context of a black-box approach, we prove by experimental results that the chip resists single-fault attacks. To overcome this, we present the use of leakage detection, such as Welch's t-test, to find the correct moments for injecting successful faults, which is not common in Fault Injection (FI), as this method has previously been used only for Side-Channel Attacks (SCAs). Using this knowledge, we found two moments for injecting laser pulses that extract the protected EEPROM user pages with a 99% success rate. The attack can be reproduced within a day. The presented attack negatively impacts the users of DS28C36 (including Coldcard Mk4).
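A sketch of Welch's t-test as used in leakage detection (TVLA-style), with numpy and synthetic traces; sample points where $|t|$ exceeds the customary threshold of 4.5 mark data-dependent moments, i.e., candidate targets for laser pulses:

    # Welch's t-test, TVLA-style: flag trace samples where the fixed-vs-
    # random trace sets differ significantly (|t| > 4.5 is customary).
    import numpy as np

    def welch_t(traces_a: np.ndarray, traces_b: np.ndarray) -> np.ndarray:
        ma, mb = traces_a.mean(axis=0), traces_b.mean(axis=0)
        va, vb = traces_a.var(axis=0, ddof=1), traces_b.var(axis=0, ddof=1)
        na, nb = len(traces_a), len(traces_b)
        return (ma - mb) / np.sqrt(va / na + vb / nb)

    rng = np.random.default_rng(1)
    fixed = rng.normal(0.0, 1.0, size=(5000, 1000))   # synthetic traces
    rnd   = rng.normal(0.0, 1.0, size=(5000, 1000))
    fixed[:, 420] += 0.3                # synthetic data-dependent leak

    t = welch_t(fixed, rnd)
    print("candidate injection points:", np.flatnonzero(np.abs(t) > 4.5))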
Koji Nuida, Tomoko Adachi
ePrint Report
Latin squares are a classical and well-studied topic in discrete mathematics, and recently Takeuti and Adachi (IACR ePrint, 2023) proposed (2,n)-threshold secret sharing based on mutually orthogonal Latin squares (MOLS). Hence, efficient constructions of sets of MOLS that are as large as possible are also important from a practical viewpoint. In this letter, we determine the maximum number of MOLS among a known class of Latin squares defined by weighted sums. We also mention some known properties of Latin squares interpreted via the relation to secret sharing, and a connection of Takeuti and Adachi's scheme to Shamir's secret sharing scheme.
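The standard prime-field construction conveys the flavor (a Python sketch; the weighted-sum class studied in the letter generalizes such constructions): for a prime $p$, the squares $L_a(i,j) = a \cdot i + j \bmod p$ for $a = 1, \dots, p-1$ are $p-1$ MOLS of order $p$.

    # For prime p, the squares L_a(i,j) = (a*i + j) mod p, a = 1..p-1,
    # form p-1 mutually orthogonal Latin squares (the maximum for order p).
    p = 5
    squares = {a: [[(a * i + j) % p for j in range(p)] for i in range(p)]
               for a in range(1, p)}

    def orthogonal(A, B):
        # superimposing A and B must yield every ordered pair exactly once
        pairs = {(A[i][j], B[i][j]) for i in range(p) for j in range(p)}
        return len(pairs) == p * p

    assert all(orthogonal(squares[a], squares[b])
               for a in range(1, p) for b in range(a + 1, p))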
Giuseppe Manzoni
ePrint Report
In the context of circuits leaking their internal state, there are various models to analyze what the adversary can see, such as the $p$-random probing model, in which the adversary sees the value of each wire with probability $p$. In this model, for a fixed $p$, it is possible to reach arbitrary security by 'expanding' a stateless circuit via iterated compilation, achieving security $2^{-\kappa}$ with size polynomial in $\kappa$.
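Concretely, the $p$-random probing leakage of one evaluation can be sampled as follows (a Python sketch; the wire names and circuit representation are ours):

    # p-random probing leakage: after one evaluation, each wire's value
    # is revealed to the adversary independently with probability p.
    import random

    def leak(wire_values: dict, p: float) -> dict:
        return {w: v for w, v in wire_values.items() if random.random() < p}

    # e.g. a 2-share masked AND has internal wires like these:
    wires = {"a0": 1, "a1": 0, "b0": 1, "b1": 1, "r": 0,
             "a0b0": 1, "a0b1": 1, "a1b0": 0, "a1b1": 0}
    print(leak(wires, p=0.05))   # the adversary's view of this evaluation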

The existing proofs of the expansion work by first compiling the gadgets multiple times, and then by compiling the circuit with the resulting gadgets while assuming the worst from the original circuit. Instead, we reframe the expansion as a security reduction from the compiled circuit to the original one. Additionally, we extend it to support a broader range of encodings, and arbitrary probabilistic gates with an arbitrary number of inputs and outputs.

This allows us to obtain two concrete results: (i) At the cost of an additional size factor $\mathcal{O}(\log(d)^3)$, any $d$-probing secure compiler can be used to produce stateless circuits with security $2^{-d}$ against any adversary that sees all wires with a constant SD-noise of $2^{-7.41}/p$, where $p$ is the characteristic of the circuit's field. (ii) Any $n$-share compiler with $(t,f)$-RPE gadgets needs only $t+1$ (in practice, $\lceil\frac{n}{2}\rceil$) random values in the random gadget instead of $n$.
Gideon Samid
ePrint Report
This article evaluates the innovation landscape facing the challenge of generating fresh shared randomness for cryptographic key exchange and various cyber security protocols. It discusses the main innovation thrust today, focused on quantum entanglement and on efficient engineering solutions to BB84 and its related alternatives. This innovation outlook highlights non-quantum solutions and describes NEPSAR, a mechanical-complexity-based solution that is applicable to any number of key-sharing parties. Short-lived secret keys are also mentioned, as well as far-fetched innovation routes.
Minki Hhan, Aaram Yun
ePrint Report
In Crypto 2019, Zhandry showed how to define compressed oracles, which record quantum superposition queries to the quantum random oracle. In this paper, we extend Zhandry's compressed oracle technique to non-uniformly distributed functions with independently sampled outputs. We define two quantum oracles $\mathsf{CStO}_D$ and $\mathsf{CPhsO}_D$, which are indistinguishable to the non-uniform quantum random oracle where quantum access is given to a random function $H$ whose images $H(x)$ are sampled from a probability distribution $D$ independently for each $x$. We show that these compressed oracles record the adversarial quantum superposition queries. Also, we re-prove the optimality of Grover search and the collision resistance of non-uniform random functions, using our extended compressed oracle technique.