IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

26 September 2022

David Naccache, Ofer Yifrach-Stav
ePrint Report
This note describes an observation discovered during a failed cryptanalysis attempt.

Let $P(x,y)$ be a bivariate polynomial with coefficients in $\mathbb{C}$. Form the $n\times n$ matrices $L(n)$ whose elements are defined by $P(i,j)$. Define the matrices $M(n)=L(n)-\mathrm{Id}_n$.

It appears that $\mu(n)=(-1)^n\det(M(n))$ is a polynomial in $n$ that we did not characterize.

We provide a numerical example.
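A minimal sketch of the setup above (this is not the paper's own numerical example; the polynomial $P(x,y)=xy+x+1$ is an arbitrary illustrative choice), tabulating $\mu(n)$ for small $n$ with sympy:

```python
# Illustrative sketch (not the paper's example): tabulate mu(n) = (-1)^n det(L(n) - Id_n)
# for an arbitrary bivariate polynomial P with integer coefficients.
import sympy as sp

x, y = sp.symbols('x y')
P = x*y + x + 1  # hypothetical choice of P, for illustration only

def mu(n):
    # L(n) has entries P(i, j) for 1 <= i, j <= n; M(n) = L(n) - Id_n.
    L = sp.Matrix(n, n, lambda i, j: P.subs({x: i + 1, y: j + 1}))
    M = L - sp.eye(n)
    return (-1)**n * M.det()

for n in range(1, 7):
    print(n, mu(n))
```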
Nir Bitansky, Arka Rai Choudhuri, Justin Holmgren, Chethan Kamath, Alex Lombardi, Omer Paneth, Ron D. Rothblum
ePrint Report
One of the most fundamental results in game theory is that every finite strategic game has a Nash equilibrium: an assignment of (randomized) strategies to players with the stability property that no individual player can benefit from deviating from the assigned strategy. It is not known how to efficiently compute such a Nash equilibrium; the computational complexity of this task is characterized by the class PPAD, but the relation of PPAD to other problems and well-known complexity classes is not precisely understood. In recent years there has been mounting evidence, based on cryptographic tools and techniques, for the hardness of PPAD.

We continue this line of research by showing that PPAD is as hard as learning with errors (LWE) and the iterated squaring (IS) problem, two standard problems in cryptography. Our work improves over prior hardness results, which relied either on (1) sub-exponential assumptions or (2) "obfustopia," which can currently be based on a particular combination of three assumptions. Our work additionally establishes public-coin hardness for PPAD (computational hardness for a publicly sampleable distribution of instances), which seems out of reach of the obfustopia approach.

Following the work of Choudhuri et al. (STOC 2019) and subsequent works, our hardness result is obtained by constructing an unambiguous and incrementally-updatable succinct non-interactive argument for IS, whose soundness relies on polynomial hardness of LWE. The result also implies a verifiable delay function with unique proofs, which may be of independent interest.
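For context, the iterated squaring (IS) problem referenced above asks, roughly, to compute $x^{2^T} \bmod N$ for a composite modulus $N$ of unknown factorization, which is conjectured to require $T$ sequential squarings. A minimal sketch of the computation itself (toy modulus, illustrative only):

```python
# Minimal sketch of the iterated squaring (time-lock) computation: y = x^(2^T) mod N.
# The modulus below is tiny and for illustration only; real instances use large RSA moduli.
def iterated_squaring(x: int, T: int, N: int) -> int:
    y = x % N
    for _ in range(T):          # T sequential modular squarings
        y = (y * y) % N
    return y

N = 3233                                 # 61 * 53, toy modulus
print(iterated_squaring(5, 10, N))       # 5^(2^10) mod 3233
```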
Bowen LIU, Qiang TANG
ePrint Report
Modern SVD computation dates back to work in the 1960s that laid the basis for the eigensystem package and linear algebra package routines. As a result of this long history of research, SVD is now widely applied in various scenarios, such as recommendation systems and principal component analysis. Furthermore, federated SVD has emerged as a prevalent privacy-preserving technique: the raw data are not exchanged among the different parties; instead, each party trains and processes locally and shares only intermediate results. In general, there are two main categories: SVD over horizontally partitioned data and SVD over vertically partitioned data. Imagine a dataset matrix M, where each row stands for a record from a data subject and the columns stand for the attributes/features of the records. In the horizontally partitioned setting, each party holds a disjoint subset of the rows of M, whereas in the vertically partitioned setting, each party holds a disjoint subset of the columns of M for all rows. In real-world applications, the horizontally partitioned setting is much more common than the vertically partitioned one. In this paper, we propose a privacy-preserving federated SVD scheme with secure aggregation. The proposed scheme can aggregate SVD results (eigenspaces) from different devices and synchronise the aggregation result with all devices while maintaining privacy protection.
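As a rough illustration of the horizontally partitioned setting (a generic textbook approach, not the paper's protocol; the plain numpy summation below merely stands in for the secure-aggregation step): each party can compute a local Gram matrix over its rows, and eigendecomposing the aggregated sum recovers the shared right-singular subspace of M.

```python
# Hedged sketch (not the paper's protocol): aggregate local Gram matrices M_k^T M_k
# over horizontally partitioned rows, then eigendecompose the sum to recover the
# right singular vectors / singular values of the full matrix M.
import numpy as np

rng = np.random.default_rng(0)
parties = [rng.standard_normal((50, 8)) for _ in range(3)]  # disjoint row blocks of M

gram = sum(Mk.T @ Mk for Mk in parties)        # would be computed under secure aggregation
eigvals, V = np.linalg.eigh(gram)              # columns of V: right singular vectors of M
singular_values = np.sqrt(np.clip(eigvals, 0, None))

M = np.vstack(parties)
print(np.allclose(sorted(np.linalg.svd(M, compute_uv=False)),
                  sorted(singular_values)))    # True: aggregation matches centralized SVD
```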
Basavesh Ammanaghatta Shivakumar, Gilles Barthe, Benjamin Grégoire, Vincent Laporte, Tiago Oliveira, Swarn Priya, Peter Schwabe, Lucas Tabary-Maujean
ePrint Report
The current gold standard of cryptographic software is to write efficient libraries with systematic protections against timing attacks. In order to meet this goal, cryptographic engineers increasingly use high-assurance cryptography tools. These tools guide programmers and provide rigorous guarantees that can be verified independently by library users. However, high-assurance tools reason about overly simple execution models that elide micro-architectural leakage. Thus, implementations validated by high-assurance cryptography tools remain potentially vulnerable to micro-architectural attacks such as Spectre or Meltdown. Moreover, proposed countermeasures are not used in practice due to performance overhead.

We propose, analyze, implement and evaluate an approach for writing efficient cryptographic implementations that are protected against Spectre v1 attacks. Our approach ensures speculative constant-time, an information flow property which guarantees that programs are protected against Spectre v1. Speculative constant-time is enforced by means of a (value-dependent) information flow type system. The type system tracks security levels depending on whether execution is misspeculating. We implement our approach in the Jasmin framework for high assurance cryptography, and use it for protecting all implementations of an experimental cryptographic library that includes highly optimized implementations of symmetric primitives, of elliptic-curve cryptography, and of Kyber, a lattice-based KEM recently selected by NIST for standardization. The performance impact of our protections is very low; for example, less than 1% for Kyber and essentially zero for X25519.
Prabhanjan Ananth, Kai-Min Chung, Xiong Fan, Luowen Qian
ePrint Report
In recent years, functional encryption (FE) has established itself as one of the fundamental primitives in cryptography. The choice of the model of computation used to represent the functions associated with the functional keys plays a critical role in the complexity of the algorithms of an FE scheme. Historically, the functions have been represented as circuits. However, this results in the decryption time of the FE scheme growing proportionally to not only the worst-case running time of the function but also the size of the input, which in many applications can be quite large.

In this work, we present the first construction of a public-key collusion-resistant FE scheme where the functions associated with the keys are represented as random access machines (RAMs). We base the security of our construction on the existence of: (i) public-key collusion-resistant FE for circuits and (ii) public-key doubly-efficient private-information retrieval [Boyle et al., Canetti et al., TCC 2017]. Our scheme enjoys many nice efficiency properties, including input-specific decryption time.

We also show how to achieve FE for RAMs in the bounded-key setting with weaker efficiency guarantees from laconic oblivious transfer, which can be based on standard cryptographic assumptions. En route to achieving our result, we present conceptually simpler constructions of succinct garbling for RAMs [Canetti et al., Chen et al., ITCS 2016] from weaker assumptions.
Bin Liu, Antonis Michalas, Bogdan Warinschi
ePrint Report
In this paper, we follow the existing line of study on cryptographic enforcement of Role-Based Access Control (RBAC). Inspired by the study of the relations between the existing security definitions for such systems, we identify two types of attacks which are not captured by those definitions. We therefore propose two new security definitions towards the goal of appropriately modelling cryptographic enforcement of Role-Based Access Control policies, and we study the relation between our new definitions and the existing ones. In addition, we show that the cost of supporting dynamic policy updates is inherently high by presenting two lower bounds for such systems that guarantee correctness and secure access.
Long Nie, ShaoWen Yao, Jing Liu
ePrint Report
In most homomorphic encryption schemes based on RLWE, the native plaintexts are represented as polynomials in a ring $\mathbb{Z}_t[x]/(x^N+1)$, where $t$ is the plaintext modulus and $x^N+1$ is a cyclotomic polynomial whose degree $N$ is a power of two. An encoding scheme is used to transform natural data types (such as integers and rational numbers) into polynomials in the ring. After a homomorphic computation on the polynomial is finished, the decoding procedure is invoked to obtain the result. However, the conditions for correct decoding are rather strict. For example, overflow of the computation modulo either the plaintext modulus $t$ or the cyclotomic polynomial $x^N+1$ results in an unexpected decoding, because the part discarded by the modular reduction does not decode to 0. We combine the number-theoretic transform with Hensel codes to construct a scheme in which, intuitively, the discarded part decodes to 0, so these limitations are overcome naturally. Moreover, rational numbers can be handled with high precision, in parallel.
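A toy illustration of the overflow problem described above (illustrative parameters, not the paper's encoding): encoding an integer in the constant coefficient of $\mathbb{Z}_t[x]/(x^N+1)$ decodes correctly only while no reduction modulo $t$ or $x^N+1$ takes effect.

```python
# Illustrative only (toy parameters, not the paper's scheme): plaintexts live in
# Z_t[x]/(x^N + 1). Encoding an integer m in the constant coefficient decodes
# correctly only while every intermediate value stays below t; the part discarded
# by reduction mod t (and likewise mod x^N + 1) does not decode to 0, so the
# result is silently wrong after an overflow.
t, N = 257, 8

def encode(m):            # integer -> constant polynomial (list of N coefficients)
    return [m % t] + [0] * (N - 1)

def poly_mul(a, b):       # multiplication in Z_t[x]/(x^N + 1)
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % t
            else:                       # x^N = -1: wrap-around flips the sign
                res[k - N] = (res[k - N] - ai * bj) % t
    return res

def decode(p):
    return p[0]

print(decode(poly_mul(encode(10), encode(12))))   # 120: fits below t, correct
print(decode(poly_mul(encode(20), encode(20))))   # 400 mod 257 = 143: overflow, wrong
```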
Ittai Abraham, Gilad Asharov, Shravani Patil, Arpita Patra
ePrint Report
Broadcast is an essential primitive for secure computation. We focus in this paper on optimal resilience (i.e., when the number of corrupted parties $t$ is less than a third of the computing parties $n$), and with no setup or cryptographic assumptions.

While broadcast with worst case $t$ rounds is impossible, it has been shown [Feldman and Micali STOC'88, Katz and Koo CRYPTO'06] how to construct protocols with expected constant number of rounds in the private channel model. However, those constructions have large communication complexity, specifically $\mathcal{O}(n^2L+n^6\log n)$ expected number of bits transmitted for broadcasting a message of length $L$. This leads to a significant communication blowup in secure computation protocols in this setting.

In this paper, we substantially improve the communication complexity of broadcast in constant expected time. Specifically, the expected communication complexity of our protocol is $\mathcal{O}(nL+n^4\log n)$. For messages of length $L=\Omega(n^3 \log n)$, our broadcast has no asymptotic overhead (up to expectation), as each party has to send or receive $\mathcal{O}(n^3 \log n)$ bits. We also consider parallel broadcast, where $n$ parties wish to broadcast $L$ bit messages in parallel. Our protocol has no asymptotic overhead for $L=\Omega(n^2\log n)$, which is a common communication pattern in perfectly secure MPC protocols. For instance, it is common that all parties share their inputs simultaneously at the same round, and verifiable secret sharing protocols require the dealer to broadcast a total of $\mathcal{O}(n^2\log n)$ bits.
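Spelling out the per-party claim: dividing the total expected cost by the $n$ parties gives $\mathcal{O}(nL+n^4\log n)/n = \mathcal{O}(L+n^3\log n)$ bits per party, which is $\mathcal{O}(L)$ once $L=\Omega(n^3\log n)$, i.e., no asymptotic overhead beyond sending or receiving the message itself.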

Of independent interest, our broadcast is achieved via packed verifiable secret sharing, a new notion that we introduce. We show a protocol that verifies $\mathcal{O}(n)$ secrets simultaneously at the same cost as verifying just a single secret. This improves the state of the art by a factor of $n$.
Pedro Branco, Nico Döttling, Stella Wohnig
ePrint Report
Ring signatures allow a user to sign messages on behalf of an ad hoc set of users - a ring - while hiding her identity. The original motivation for ring signatures was whistleblowing [Rivest et al. ASIACRYPT'01]: a high government employee can anonymously leak sensitive information while certifying that it comes from a reliable source, namely by signing the leak. However, essentially all known ring signature schemes require the members of the ring to publish a structured verification key that is compatible with the scheme. This creates somewhat of a paradox since, if a user does not want to be framed for whistleblowing, they will stay clear of signature schemes that support ring signatures. In this work, we formalize the concept of universal ring signatures (URS). A URS enables a user to issue a ring signature with respect to a ring of users, independently of the signature schemes they are using. In particular, none of the verification keys in the ring need to come from the same scheme. Thus, in principle, URS presents an effective solution for whistleblowing.

The main goal of this work is to study the feasibility of URS, especially in the standard model (i.e. no random oracles or common reference strings). We present several constructions of URS, offering different trade-offs between assumptions required, the level of security achieved, and the size of signatures:

* Our first construction is based on superpolynomial hardness assumptions of standard primitives. It achieves compact signatures. That means the size of a signature depends only logarithmically on the size of the ring and on the number of signature schemes involved.
* We then proceed to study the feasibility of constructing URS from standard polynomially-hard assumptions only. We construct a non-compact URS from witness encryption and additional standard assumptions.
* Finally, we show how to modify the non-compact construction into a compact one by relying on indistinguishability obfuscation.
Brian Chen, Yevgeniy Dodis, Esha Ghosh, Eli Goldin, Balachandar Kesavan, Antonio Marcedone, Merry Ember Mou
ePrint Report
Key Transparency (KT) systems allow end-to-end encrypted service providers (messaging, calls, etc.) to maintain an auditable directory of their users’ public keys, producing proofs that all participants have a consistent view of those keys, and allowing each user to check updates to their own keys. KT has lately received a lot of attention, in particular its privacy preserving variants, which also ensure that users and auditors do not learn anything beyond what is necessary to use the service and keep the service provider accountable.

Abstractly, the problem of building such systems reduces to constructing so-called append-only Zero-Knowledge Sets (aZKS). Unfortunately, existing aZKS (and KT) solutions do not allow the privacy guarantees to be adequately restored after a server compromise (a form of Post-Compromise Security, PCS) while maintaining the auditability properties. In this work we address this concern through the formalization of an extension of aZKS called Rotatable ZKS (RZKS). In addition to providing PCS, our notion of RZKS has several other attractive features, such as a stronger (extractable) soundness notion, and the ability for a communication party with out-of-date data to efficiently “catch up” to the current epoch while ensuring that the server did not erase any of the past data.

Of independent interest, we also introduce a new primitive called a Rotatable Verifiable Random Function (VRF), and show how to build RZKS in a modular fashion from a rotatable VRF, ordered accumulator, and append-only vector commitment schemes.
Behzad Abdolmaleki, Nils Fleischhacker, Vipul Goyal, Abhishek Jain, Giulio Malavolta
ePrint Report
We revisit the well-studied problem of preventing steganographic communication in multi-party communications. While this is known to be a provably impossible task, we propose a new model that allows circumventing this impossibility. In our model, the parties first publish a single message during an honest non-interactive pre-processing phase and then later interact in an execution phase. We show that in this model, it is indeed possible to prevent any steganographic communication in zero-knowledge protocols. Our solutions rely on standard cryptographic assumptions.
Muhammad Haris Mughees, Ling Ren
ePrint Report
This paper studies Batch Private Information Retrieval (BatchPIR), a variant of private information retrieval (PIR) where the client wants to retrieve multiple entries from the server in one batch. BatchPIR matches the use case of many practical applications and holds the potential for substantial efficiency improvements over PIR in terms of amortized cost per query. Existing BatchPIR schemes have achieved decent computation efficiency but have not been able to improve communication efficiency at all. In this paper, we present the first BatchPIR protocol that is efficient in both computation and communication for a variety of database configurations. Specifically, to retrieve a batch of 256 entries from a database with one million entries of 256 bytes each, the communication cost of our scheme is 7.2$\sim$75x better than state-of-the-art solutions.
Dana Dachman-Soled, Julian Loss, Adam O'Neill, Nikki Sigurdson
ePrint Report
We investigate the relationship between the classical RSA and factoring problems when preprocessing is considered. In such a model, adversaries can use an unbounded amount of precomputation to produce an "advice" string to then use during the online phase, when a problem instance becomes known. Previous work (e.g., [Bernstein, Lange ASIACRYPT '13]) has shown that preprocessing attacks significantly improve the runtime of the best-known factoring algorithms. Due to these improvements, we ask whether the relationship between factoring and RSA fundamentally changes when preprocessing is allowed. Specifically, we investigate whether there is a superpolynomial gap between the runtime of the best attack on RSA with preprocessing and on factoring with preprocessing.

Our main result rules this out with respect to algorithms in a natural adaptation of the generic ring model to the preprocessing setting. In particular, in this setting we show the existence of a factoring algorithm (albeit in the random oracle model) with polynomially related parameters, for any setting of RSA parameters.
John Chan, Phillip Rogaway
ePrint Report
We provide a strong definition for committing authenticated-encryption (cAE), as well as a framework that encompasses earlier and weaker definitions. The framework attends not only to what is committed but also the extent to which the adversary knows or controls keys. We slot into our framework strengthened cAE-attacks on GCM and OCB. Our main result is a simple and efficient construction, CTX, that makes a nonce-based AE (nAE) scheme committing. The transformed scheme achieves the strongest security notion in our framework. Just the same, the added computational cost (on top of the nAE scheme's cost) is a single hash over a short string, a cost independent of the plaintext's length. And there is no increase in ciphertext length compared to the base nAE scheme. That such a thing is possible, let alone easy, upends the (incorrect) intuition that you can't commit to a plaintext or ciphertext without hashing one or the other. And it motivates a simple and practical tweak to AE-schemes to make them committing.
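A hedged sketch of the general idea of such a transform (this is not claimed to be the paper's exact CTX specification; AES-GCM and SHA-256 are illustrative stand-ins): replace the nAE tag with a short hash binding the key, nonce, associated data, and original tag, so the extra cost is one hash over a short string and the ciphertext length is unchanged.

```python
# Hedged sketch of the general idea (NOT claimed to be the paper's exact CTX spec):
# make a nonce-based AE scheme committing by hashing the key, nonce, associated
# data and the original tag into a replacement tag of the same length. Decryption
# (omitted) would recompute the inner tag from the ciphertext and check the hash.
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

TAG_LEN = 16

def committing_encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes) -> bytes:
    ct = AESGCM(key).encrypt(nonce, msg, ad)          # core ciphertext || 16-byte tag
    core, tag = ct[:-TAG_LEN], ct[-TAG_LEN:]
    new_tag = hashlib.sha256(key + nonce + ad + tag).digest()[:TAG_LEN]
    return core + new_tag                             # same length as the nAE ciphertext

key, nonce, ad = os.urandom(32), os.urandom(12), b"header"
print(committing_encrypt(key, nonce, ad, b"attack at dawn").hex())
```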
Wouter Castryck, Thomas Decru, Marc Houben, Frederik Vercauteren
ePrint Report
We address three main open problems concerning the use of radical isogenies, as presented by Castryck, Decru and Vercauteren at Asiacrypt 2020, in the computation of long chains of isogenies of fixed, small degree between elliptic curves over finite fields. Firstly, we present an interpolation method for finding radical isogeny formulae in a given degree $N$, which by-passes the need for factoring division polynomials over large function fields. Using this method, we are able to push the range for which we have formulae at our disposal from $N \leq 13$ to $N \leq 37$ (where in the range $18 \leq N \leq 37$ we have restricted our attention to prime powers). Secondly, using a combination of known techniques and ad-hoc manipulations, we derive optimized versions of these formulae for $N \leq 19$, with some instances performing more than twice as fast as their counterparts from 2020. Thirdly, we solve the problem of understanding the correct choice of radical when walking along the surface between supersingular elliptic curves over $\mathbb{F}_p$ with $p \equiv 7 \bmod 8$; this is non-trivial for even $N$ and was settled for $N = 2$ and $N = 4$ only, in the latter case by Onuki and Moriya at PKC 2022. We give a conjectural statement for all even $N$ and prove it for $N \leq 14$. The speed-ups obtained from these techniques are substantial: using $16$-isogenies, the computation of long chains of $2$-isogenies over $512$-bit prime fields can be accelerated by a factor $3$, and the previous implementation of CSIDH using radical isogenies can be sped up by about $12\%$.
Xiangyu Liu, Shengli Liu, Dawu Gu
ePrint Report
We define the security notion of (strong) collision resistance for chameleon hash functions in the multi-user setting ((S-)MU-CR security). We also present three constructions, CHF_dl, CHF_rsa and CHF_fac, and prove their tight S-MU-CR security based on the discrete logarithm, RSA and factoring assumptions, respectively. In applications, our tightly S-MU-CR secure chameleon hash functions help us to lift a signature scheme from (weak) unforgeability to strong unforgeability in the multi-user setting, and the security reduction is tightness preserving. Furthermore, they can also be used to construct tightly secure online/offline signatures, chameleon signatures and proxy signatures, etc., in the multi-user setting.
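For intuition, a textbook discrete-log chameleon hash looks as follows (a toy sketch under illustrative parameters, not necessarily the paper's exact CHF_dl, which targets tight multi-user security): $CH(m,r)=g^m y^r$ with trapdoor $x$ such that $y=g^x$, and the trapdoor holder can open the hash to any message.

```python
# Hedged sketch of a textbook discrete-log chameleon hash (not necessarily the
# paper's CHF_dl): CH(m, r) = g^m * y^r mod p with y = g^x; the trapdoor holder
# can find a collision for any target message.
import secrets
from sympy import isprime, nextprime

# Toy safe prime p = 2q + 1 (illustration only; real deployments use large primes).
q = nextprime(10**6)
while not isprime(2 * q + 1):
    q = nextprime(q)
p = 2 * q + 1
g = 4                              # a quadratic residue, hence of prime order q

x = 1 + secrets.randbelow(q - 1)   # trapdoor, invertible mod q
y = pow(g, x, p)                   # public hash key

def ch(m: int, r: int) -> int:
    return (pow(g, m, p) * pow(y, r, p)) % p

def collide(m: int, r: int, m_new: int) -> int:
    # Solve m + x*r = m_new + x*r_new (mod q) for r_new using the trapdoor x.
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 1234, secrets.randbelow(q)
r_new = collide(m, r, 9999)
print(ch(m, r) == ch(9999, r_new))   # True: collision computed with the trapdoor
```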
Harry Eldridge, Aarushi Goel, Matthew Green, Abhishek Jain, Maximilian Zinkus
ePrint Report
One-time programs, originally formulated by Goldwasser et al. [CRYPTO'08], are a powerful cryptographic primitive with compelling applications. Known solutions for one-time programs, however, require specialized secure hardware that is not widely available (or, alternatively, access to blockchains and very strong cryptographic tools).

In this work we investigate the possibility of realizing one-time programs from a recent and now more commonly available hardware functionality: the counter lockbox. A counter lockbox is a stateful functionality that protects an encryption key under a user-specified password and enforces a limit on the number of incorrect password guesses. Counter lockboxes have become widely available in consumer devices and cloud platforms.

We show that counter lockboxes can be used to realize one-time programs for general functionalities. We develop a number of techniques to reduce the number of counter lockboxes required for our constructions, which may be of independent interest.
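A minimal sketch of the counter-lockbox functionality as described above (illustrative only; real lockboxes live in secure hardware or cloud platforms, not in application code): the box releases the protected key only for the correct password and permanently wipes it after a bounded number of wrong guesses.

```python
# Minimal sketch of a counter lockbox (illustrative, not the paper's construction):
# a stateful box that releases a protected key only for the correct password and
# wipes the key after too many incorrect guesses.
import hmac, hashlib, os

class CounterLockbox:
    def __init__(self, password: bytes, key: bytes, max_guesses: int = 10):
        self._salt = os.urandom(16)
        self._tag = hashlib.pbkdf2_hmac("sha256", password, self._salt, 100_000)
        self._key = key
        self._remaining = max_guesses

    def open(self, password: bytes):
        if self._remaining == 0 or self._key is None:
            return None                                   # box is burned
        candidate = hashlib.pbkdf2_hmac("sha256", password, self._salt, 100_000)
        if hmac.compare_digest(candidate, self._tag):
            return self._key
        self._remaining -= 1
        if self._remaining == 0:
            self._key = None                              # wipe the protected key
        return None

box = CounterLockbox(b"hunter2", os.urandom(32), max_guesses=3)
print(box.open(b"wrong"), box.open(b"hunter2") is not None)   # None True
```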
Seonghak Kim, Minji Park, Jaehyung Kim, Taekyung Kim, Chohong Min
ePrint Report
Homomorphic encryption (HE) has opened up an entirely new world in the privacy-preserving use of sensitive data by enabling computation on encrypted data. Amongst the many HE schemes targeting computation in various contexts, the Cheon-Kim-Kim-Song (CKKS) scheme stands out because it allows computation on encrypted real-number data, which is of great importance in real-world applications.

The CKKS scheme is a levelled homomorphic encryption scheme, consuming one level for each homomorphic multiplication. When the levels run out, a special computational circuit called bootstrapping is required in order to conduct further multiplications. The algorithm proposed by Cheon et al. has been regarded as the standard way to do bootstrapping in the CKKS scheme; it consists of four steps: ModRaise, CoeffToSlot, EvalMod and SlotToCoeff. However, these steps consume a number of levels themselves, and optimizing this extra consumption has been a major focus of recent research.

Among the total levels consumed in the bootstrapping steps, about half are spent in the CoeffToSlot and SlotToCoeff steps to scale up the real-number components of the DFT matrices and round them to the nearest integers. Each scale-up factor is so large that rescaling it back down costs one level. The scale-up factors could be taken smaller to save levels, but the resulting rounding error would be transmitted to EvalMod and eventually corrupt the accuracy of bootstrapping.

EvalMod aims to remove the superfluous $qI$ term from a plaintext $pt + qI$ resulting from ModRaise, where $q$ is the bottom modulus and $I$ is a polynomial with small integer coefficients. We refer to its opposite, which obtains $qI$, as EvalRound. We introduce a novel bootstrapping algorithm consisting of ModRaise, CoeffToSlot, EvalRound and SlotToCoeff, which allows taking smaller scale-up factors without the damage from rounding errors.
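A toy scalar illustration of the EvalMod/EvalRound relation described above (plain integers stand in for polynomial coefficients; this is not the CKKS algorithm itself): given $pt + qI$ with $|pt| < q/2$, centered reduction modulo $q$ recovers $pt$, while rounding to the nearest multiple of $q$ recovers $qI$.

```python
# Toy numeric illustration (scalar stand-ins for polynomial coefficients): starting
# from pt + q*I, EvalMod recovers pt directly, while EvalRound recovers q*I, after
# which subtraction gives pt. Assumes |pt| < q/2 so the rounding is unambiguous.
q = 1 << 20
pt, I = 12345, 7            # plaintext component and a small integer I
raised = pt + q * I         # what ModRaise produces (per coefficient)

eval_mod   = ((raised + q // 2) % q) - q // 2      # centered reduction mod q -> pt
eval_round = q * round(raised / q)                 # nearest multiple of q -> q*I

print(eval_mod == pt, raised - eval_round == pt)   # True True
```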
Aayush Gupta, Kobi Gurkan
ePrint Report
ZK-SNARKs (Zero Knowledge Succinct Noninteractive ARguments of Knowledge) are one of the most promising new applied cryptography tools: proofs allow anyone to prove a property about some data without revealing that data. Largely spurred by the adoption of cryptographic primitives in blockchain systems, ZK-SNARKs are rapidly becoming computationally practical in real-world settings, as shown by, e.g., tornado.cash and rollups. These have enabled ideation for new identity applications based on anonymous proof-of-ownership. One of the primary technologies that would enable the jump from existing apps to such systems is the development of deterministic nullifiers.

Nullifiers are used as a public commitment to a specific anonymous account, to forbid actions like double spending or to allow a consistent identity between anonymous actions. We identify a new deterministic signature algorithm that both uniquely identifies the keypair and keeps the account identity secret. In this work, we define the full DDH-VRF construction and prove uniqueness, secrecy, and existential unforgeability. We also demonstrate a proof of concept of the nullifier.
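To make the nullifier idea concrete, here is a hedged sketch of the general DDH-VRF shape (not the paper's construction; a toy multiplicative group stands in for an elliptic curve, and the zero-knowledge proof of correct exponentiation is omitted): the nullifier of a message is the deterministic group element $H(m)^{sk}$.

```python
# Hedged sketch of the general DDH-VRF-style nullifier idea (not the paper's exact
# construction): the nullifier is the deterministic group element H(message)^sk,
# so the same key and message always yield the same value without revealing sk.
import hashlib, secrets
from sympy import isprime, nextprime

q = nextprime(10**9)
while not isprime(2 * q + 1):
    q = nextprime(q)
p = 2 * q + 1                      # toy safe prime, illustration only

def hash_to_group(msg: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % p
    return pow(h, 2, p)            # square into the prime-order-q subgroup

sk = 1 + secrets.randbelow(q - 1)  # secret key

def nullifier(msg: bytes) -> int:
    return pow(hash_to_group(msg), sk, p)

print(nullifier(b"vote:42") == nullifier(b"vote:42"))   # deterministic: True
print(nullifier(b"vote:42") == nullifier(b"vote:43"))   # message-bound: False
```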
Estuardo Alpirez Bock, Lukasz Chmielewski, Konstantina Miteloudi
ePrint Report
The Montgomery Ladder is widely used for implementing the scalar multiplication in elliptic curve cryptographic designs. This algorithm is efficient and provides a natural robustness against (simple) side-channel attacks. Previous works, however, showed that implementations of the Montgomery Ladder using Lopez-Dahab projective coordinates easily leak the value of the most significant bits of the secret scalar, which led to a full key recovery in an attack known as LadderLeak. In light of such leakage, we analyse further popular methods for implementing the Montgomery Ladder. We first consider open-source software implementations of the X25519 protocol which implement the Montgomery Ladder based on the ladderstep algorithm from Düll et al. [15]. We confirm via power measurements that these implementations also easily leak the most significant scalar bits, even when implementing Z-coordinate randomisations. We thus propose simple modifications of the algorithm and its handling of the most significant bits, and we show the effectiveness of our modifications via experimental results. In particular, our re-designs of the algorithm do not incur significant efficiency penalties. As a second case study, we consider open-source hardware implementations of the Montgomery Ladder based on the complete addition formulas for prime-order elliptic curves, where we observe the exact same leakage. As we explain, the most significant bits in implementations of the complete addition formulas can be protected in an analogous way as we do for Curve25519 in our first case study.
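For reference, a generic Montgomery ladder with a branch-free conditional swap looks like the sketch below (modular exponentiation stands in for the elliptic-curve group operation, and Python is of course not constant time; this is not any of the audited implementations). The point relevant to the leakage above is that the scalar is processed over a fixed number of bits rather than starting at its first set bit, so the most significant bits receive no special handling.

```python
# Illustrative Montgomery ladder (modular exponentiation stands in for the
# elliptic-curve group operation). The scalar is processed over a fixed number
# of bits with an arithmetic conditional swap; special-casing the leading bits
# is exactly the kind of handling shown to leak.
def cswap(bit: int, a: int, b: int):
    mask = -bit                      # 0 or all-ones, no data-dependent branch
    diff = mask & (a ^ b)
    return a ^ diff, b ^ diff

def ladder(base: int, scalar: int, modulus: int, nbits: int) -> int:
    r0, r1 = 1, base % modulus
    for i in reversed(range(nbits)):            # always nbits iterations
        bit = (scalar >> i) & 1
        r0, r1 = cswap(bit, r0, r1)
        r0, r1 = (r0 * r0) % modulus, (r0 * r1) % modulus
        r0, r1 = cswap(bit, r0, r1)
    return r0

print(ladder(7, 0b1011, 1009, 8) == pow(7, 0b1011, 1009))   # True
```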