International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

16 February 2024

Hyeonbum Lee, Jae Hong Seo
ePrint Report
Nova is a new type of recursive proof system that uses a folding scheme as its core building block. This brilliant idea of folding relations can significantly reduce recursion overhead. In this paper, we study some issues related to Nova's soundness proof, which relies on the soundness of the folding scheme in a recursive manner.

First, its proof strategy, due to its recursive nature, inevitably expands the running time of the recursive extractor polynomially with each additional recursive step. This constrains Nova's soundness model to logarithmically bounded recursive steps, so the soundness proof in this limited model does not guarantee soundness for a number of rounds linear in the security parameter, e.g., 128 rounds for 128-bit security. On the other hand, there are no known attacks on arbitrary-depth recursion of Nova, leaving a gap between theoretical security guarantees and real-world attacks. We aim to bridge this gap in two opposite directions. In a negative direction, we present a recursive proof system that is unforgeable in the log-round model but forgeable if used in linear rounds. This shows that a soundness proof in the log-round model may say nothing about real-world uses that require linearly many rounds. In a positive direction, we show that when Nova uses a specific group-based folding scheme, its knowledge soundness over polynomially many rounds can be proven in the algebraic group model with our refinements. To the best of our knowledge, this is the first result to show Nova's polynomial-round soundness.

Second, the folding scheme is made non-interactive via the Fiat-Shamir transformation and then arithmetized into R1CS. Therefore, the soundness of Nova using the non-interactive folding scheme essentially relies on a heuristic random oracle instantiation in the standard model. In our new soundness proof for Nova in the algebraic group model, we replace this heuristic with a cryptographic hash function with a special property. We view this hash function as an independent object of interest and expect it to help in further understanding the soundness of Nova.
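To make the folding idea concrete, the sketch below folds two committed vectors into a single claim via a random linear combination, using a Pedersen-like homomorphic commitment over a toy group. This shows only the core mechanism: real Nova folds full relaxed R1CS instances and carries a cross term, both omitted here, and the group and generators are illustrative assumptions, not Nova's parameters.

```python
# Folding two committed vectors into one via a random linear combination.
import secrets

P, Q = 2039, 1019                  # toy group: order-Q subgroup of Z_P^*
GENS = [4, 16, 25]                 # Pedersen-like generators (squares mod P)

def commit(vec):
    c = 1
    for g, v in zip(GENS, vec):
        c = (c * pow(g, v, P)) % P
    return c

def fold(v1, v2, r):
    # The verifier's random challenge r collapses two claims into one.
    return [(a + r * b) % Q for a, b in zip(v1, v2)]

w1, w2 = [1, 2, 3], [4, 5, 6]
r = secrets.randbelow(Q)
# Homomorphism: the folded commitment is computable from the two originals,
# so checking the folded claim stands in for checking both.
assert commit(fold(w1, w2, r)) == (commit(w1) * pow(commit(w2), r, P)) % P
```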
Eugene Frimpong, Alexandros Bakas, Camille Foucault, Antonis Michalas
ePrint Report
Functional Encryption (FE) is a cutting-edge cryptographic technique that enables a user with a specific functional decryption key to determine a certain function of encrypted data without gaining access to the underlying data. Given its potential and the fact that FE is still a relatively new field, we set out to investigate how it could be applied to resource-constrained environments. This work presents what we believe to be the first lightweight FE scheme explicitly designed for resource-constrained devices. We also propose a use case protocol that demonstrates how our scheme can secure an Internet of Things (IoT) architecture where relevant devices collect data and securely deliver them to a storage server, where an analyst can request access to the encrypted data. Finally, we conduct thorough experiments on two commercially available resource-constrained devices to provide compelling evidence of our approach's practicality and efficiency. Although the results of our evaluations show that there is room for improvement in the proposed scheme, this work represents one of the first attempts to apply FE to the IoT setting that can directly impact people's daily lives and the everyday operations of organizations.
Seongtaek Chee, Kyung Chul Jeong, Tanja Lange, Nari Lee, Alex Pellegrini, Hansol Ryu
ePrint Report
We analyse Layered ROLLO-I, a code-based cryptosystem submitted to the Korean post-quantum cryptography competition, of which four versions have been proposed. We show that the first two versions do not provide the claimed security against rank decoding attacks and give reductions to small instances of ROLLO-I, for which such attacks are even more effective. Finally, we provide two efficient message recovery attacks, affecting every security level of the first three versions of Layered ROLLO-I and security levels 128 and 192 of the fourth version.
Changrui Mu, Shafik Nassar, Ron D. Rothblum, Prashant Nalini Vasudevan
ePrint Report
A zero-knowledge proof enables a prover to convince a verifier that $x \in S$, without revealing anything beyond this fact. By running a zero-knowledge proof $k$ times, it is possible to prove (still in zero-knowledge) that $k$ separate instances $x_1,\dots,x_k$ are all in $S$. However, this increases the communication by a factor of $k$. Can one do better? In other words, is (non-trivial) zero-knowledge batch verification for $S$ possible?

Recent works by Kaslasi et al. (TCC 2020, Eurocrypt 2021) show that any problem possessing a non-interactive statistical zero-knowledge proof (NISZK) has a non-trivial statistical zero-knowledge batch verification protocol. Their results had two major limitations: (1) to batch verify $k$ inputs of size $n$ each, the communication in their batch protocol is roughly $\textrm{poly}(n,\log{k})+O(k)$, which is better than the naive cost of $k \cdot \textrm{poly}(n)$ but still scales linearly with $k$, and, (2) the batch protocol requires $\Omega(k)$ rounds of interaction.

In this work we remove both of these limitations by showing that any problem in $\mathsf{NISZK}$ has a non-interactive statistical zero-knowledge batch verification protocol with communication $\textrm{poly}(n,\log{k})$.
Ren Taguchi, Atsushi Takayasu
ePrint Report
Thus far, several papers have estimated the concrete quantum resources of Shor's algorithm for solving a binary elliptic curve discrete logarithm problem. In particular, the complexity of computing quantum inversions over a binary field $\mathbb{F}_{2^n}$ is dominant when running the algorithm, where $n$ is the degree of the binary elliptic curve. There are two major methods for quantum inversion, i.e., quantum GCD-based inversion and quantum FLT-based inversion. The latter method is known to require more qubits; however, it is valuable since it requires far fewer Toffoli gates and less depth. When $n = 571$, Kim-Hong's quantum GCD-based inversion algorithm (Quantum Information Processing 2023) and Taguchi-Takayasu's quantum FLT-based inversion algorithm (CT-RSA 2023) require 3,473 qubits and 8,566 qubits, respectively. In contrast, for the same $n = 571$, the latter algorithm requires only 2.3% of the Toffoli gates and 84% of the depth of the former. In this paper, we modify Taguchi-Takayasu's quantum FLT-based inversion algorithm to reduce the required qubits. While Taguchi-Takayasu's FLT-based inversion algorithm takes an addition chain for $n-1$ as input and computes a sequence of values whose number equals the length of the chain, our proposed algorithm employs an uncomputation step and stores a shorter sequence. As a result, our proposed algorithm requires only 3,998 qubits for $n = 571$, which is only 15% more than Kim-Hong's GCD-based inversion algorithm. Furthermore, our proposed algorithm preserves the advantage of FLT-based inversion since it requires only 3.7% of the Toffoli gates and 77% of the depth of Kim-Hong's GCD-based inversion algorithm for $n = 571$.
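For intuition about the addition-chain structure, here is a purely classical Itoh-Tsujii-style FLT inversion over a toy field $\mathbb{F}_{2^8}$: each chain step produces one value $\beta_k = a^{2^k-1}$, and this is the kind of sequence whose quantum storage the paper optimizes. The field, reduction polynomial, and chain are illustrative assumptions; the paper's contribution is the quantum circuit realization with uncomputation, not this classical loop.

```python
# Classical FLT-based (Itoh-Tsujii) inversion in GF(2^8), driven by an
# addition chain for n-1 = 7. Toy field with the AES polynomial.
IRRED, N = 0x11B, 8   # x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):
            a ^= IRRED
        b >>= 1
    return r

def gf_sqr(a):
    return gf_mul(a, a)

def flt_inverse(a):
    # Addition chain for 7: 1 -> 2 -> 3 -> 6 -> 7, as (i, j) steps with
    # beta_{i+j} = beta_i^(2^j) * beta_j, where beta_k = a^(2^k - 1).
    beta = {1: a}
    for i, j in [(1, 1), (2, 1), (3, 3), (6, 1)]:
        t = beta[i]
        for _ in range(j):
            t = gf_sqr(t)
        beta[i + j] = gf_mul(t, beta[j])
    # a^{-1} = a^(2^n - 2) = (a^(2^(n-1) - 1))^2
    return gf_sqr(beta[N - 1])

assert gf_mul(flt_inverse(0x53), 0x53) == 1   # inverse of 0x53 is 0xCA
```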
Surya Mathialagan, Spencer Peters, Vinod Vaikuntanathan
ePrint Report
We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm. Our main results are as follows.

(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.

(2) A generic transformation that lifts any ``Sahai-Waters-like'' (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.

(3) A generic transformation that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.

Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (sub-exponentially) falsifiable assumptions.
Keita Emura, Shingo Sato, Atsushi Takayasu
ePrint Report
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption where only users who have a homomorphic evaluation key can perform homomorphic evaluation. KHPKE satisfies CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under standard Diffie-Hellman-type assumptions, and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices. As a natural extension, there is an identity-based variant of KHPKE; however, its security is based on a $q$-type assumption, and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes, due to the complex design of the known KFHE schemes. In this paper, we obtain two results on constructing attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since its design is conceptually much simpler than that of known KFHE schemes, we simply replace its building blocks with attribute-based ones and obtain the proposed ABKFHE scheme. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefits of PES, we obtain various ABKHE schemes, including the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Yilei Chen, Xinyu Mao
ePrint Report
Universal computational extractors (UCEs), introduced by Bellare, Hoang, and Keelveedhi [BHK13], can securely replace random oracles in various applications, including KDM-secure encryption, deterministic encryption, RSA-OAEP, etc. Despite their usefulness, constructing UCEs in the standard model is challenging. The only known positive result is due to Brzuska and Mittelbach [BM14], who construct UCE for strongly computationally unpredictable one-query sources assuming indistinguishability obfuscation (iO) and the existence of point obfuscators with auxiliary input (AIPO); they also construct UCE for $q$-query sources assuming iO and composable AIPO. On the other hand, Brzuska, Farshim, and Mittelbach [BFM14] show that the most potent version of UCE does not exist, assuming the existence of iO.

In this paper, we construct UCE for strongly computationally unpredictable one-query sources from lattice assumptions based on the GGH15 encodings [GGH15], without using iO. Security is proven under the following assumptions: (1) LWE with subexponential hardness; (2) evasive LWE, a new assumption proposed by Wee [Wee22]; (3) the existence of AIPO in NC1. Our UCE directly implies a universal hardcore function that outputs a polynomial number of bits, giving the first lattice-based universal hardcore function without iO. We also put forth a new primitive called an obliviously programmable function as an intermediate abstraction; it makes our analysis more modular and could be of independent interest.
Nir Bitansky, Nathan Geier
ePrint Report
In an $(\alpha,\beta)$-weak non-interactive zero knowledge (NIZK), the soundness error is at most $\alpha$ and the zero-knowledge error is at most $\beta$. Goyal, Jain, and Sahai (CRYPTO 2019) show that if $\alpha+\beta<1$ for some constants $\alpha,\beta$, then $(\alpha,\beta)$-weak NIZK can be turned into fully-secure NIZK, assuming sub-exponentially-secure public-key encryption. We revisit the problem of NIZK amplification:

- We amplify NIZK arguments assuming only polynomially-secure public-key encryption, for any constants $\alpha+\beta<1$.

- We amplify NIZK proofs assuming only one-way functions, for any constants $\alpha+\beta<1$.

- When the soundness error $\alpha$ is negligible to begin with, we can also amplify NIZK arguments assuming only one-way functions.

Our results are based on the hidden-bits paradigm, and can be viewed as a reduction from NIZK amplification to the better-understood problem of pseudorandomness amplification.
Sri Aravinda Krishnan Thyagarajan, Ke Wu, Pratik Soni
ePrint Report
Cleve's celebrated result (STOC'86) showed that a strongly fair multi-party coin-toss is impossible in the presence of majority-sized coalitions. Recently, however, a fascinating line of work studied a relaxed fairness notion called \emph{game-theoretic fairness}, which guarantees that no coalition should be incentivized to deviate from the prescribed protocol. A sequence of works has explored the feasibility of game-theoretic fairness for \emph{two-sided} coin-toss, and indeed demonstrated feasibility in the dishonest majority setting under standard cryptographic assumptions. In fact, the recent work of Wu, Asharov, and Shi (EUROCRYPT'22) completely characterized the regime where game-theoretic fairness is feasible. However, this line of work is largely restricted to two-sided coin-toss, and more precisely to a \emph{uniform} coin-toss (i.e., Bernoulli with parameter $1/2$). The only exceptions are the works on game-theoretically fair leader election, which can be viewed as a special case of uniform $n$-sided coin-toss where $n$ is the number of parties.

In this work, we \emph{initiate} the comprehensive study of game-theoretic fairness for multi-party \emph{sampling from general distributions}. In particular, for the case of $m$-sided \emph{uniform} coin-toss we give a nearly complete characterization of the regime in which game-theoretic fairness is feasible. Interestingly, contrary to standard fairness notions in cryptography, the composition of game-theoretically fair two-sided coin-toss protocols does not necessarily yield game-theoretically fair multi-sided coins. To circumvent this, we introduce new techniques compatible with game-theoretic fairness. In particular, we give the following results:

- We give a protocol from standard cryptographic assumptions that achieves game-theoretic fairness for uniform $m$-sided coin-toss against half- or more-sized adversarial coalitions.

- To complement our protocol, we give a general impossibility result that establishes the optimality of our protocol for a broad range of parameters modulo an additive constant. Even in the worst case, the gap between our protocol and our impossibility result is only a small constant multiplicative factor.

- We also present a game-theoretically fair protocol for \emph{any} efficiently sampleable $m$-outcome distribution in the dishonest majority setting. For instance, even for the case of $m=2$ (i.e., two-sided coin-toss), our result implies a game-theoretically fair protocol for an \emph{arbitrary} Bernoulli coin. In contrast, the work of Wu, Asharov, and Shi only focused on a Bernoulli coin with parameter $1/2$.
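As a baseline for what is being strengthened, the sketch below shows the folklore commit-and-reveal $m$-sided coin toss, which produces a uniform outcome if even one party is honest but offers no fairness guarantee against aborts; the hash-based commitment and all parameters are illustrative stand-ins, not the paper's protocol.

```python
# Folklore commit-and-reveal m-sided coin toss (honest execution only).
import hashlib, secrets

def commit(value: int):
    opening = secrets.token_bytes(32)
    return hashlib.sha256(opening + value.to_bytes(8, "big")).digest(), opening

def verify(com, opening, value):
    return com == hashlib.sha256(opening + value.to_bytes(8, "big")).digest()

def coin_toss(num_parties: int, m: int) -> int:
    shares = [secrets.randbelow(m) for _ in range(num_parties)]
    comms = [commit(s) for s in shares]        # round 1: all parties commit
    for (com, op), s in zip(comms, shares):    # round 2: all parties open
        assert verify(com, op, s)
    return sum(shares) % m                     # uniform if anyone is honest

print(coin_toss(5, 6))  # a 6-sided toss among 5 parties
```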
Clémence Chevignard, Pierre-Alain Fouque, André Schrottenloher
ePrint Report
This paper focuses on the optimization of the number of logical qubits in Shor's quantum factoring algorithm. As in previous works, we target the implementation of the modular exponentiation, which is the most costly component of the algorithm, both in qubits and operations.

In this paper, we show that using only $o(n)$ work qubits, one can obtain the first bit of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to obtain a quantum factoring algorithm requiring only $n/2 + o(n)$ qubits in the case of an $n$-bit RSA modulus, while current envisioned implementations require about $2n$ qubits.

Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. Since this computation is completely classical, we have implemented and tested it. Among possible trade-offs, we can reach a gate count of $\mathcal{O}(n^3)$ for a depth of $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). Preliminary logical resource estimates suggest that this circuit could be engineered to use less than 1700 qubits and $2^{36}$ Toffoli gates, and require 60 independent runs to factor an RSA-2048 instance.
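To illustrate the representation involved, here is a classical sketch of Residue Number System arithmetic: a value is carried as residues modulo small pairwise-coprime moduli, multiplication is carry-free and component-wise, and the result is recovered via the Chinese Remainder Theorem. The moduli are toy values chosen for illustration, not the paper's parameters.

```python
# RNS arithmetic: component-wise modular multiplication plus CRT recovery.
from math import prod

MODULI = [13, 17, 19, 23]          # pairwise coprime, hypothetical choice
M = prod(MODULI)

def to_rns(x):
    return [x % m for m in MODULI]

def rns_mul(a_res, b_res):
    # No carries propagate between limbs, which keeps circuits shallow.
    return [(a * b) % m for a, b, m in zip(a_res, b_res, MODULI)]

def from_rns(res):
    # Chinese Remainder Theorem reconstruction.
    return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(res, MODULI)) % M

a, b = 1234, 5678
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```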
Dimitris Mouris, Christopher Patton, Hannah Davis, Pratik Sarkar, Nektarios Georgios Tsoutsos
ePrint Report
Insight into user experience and behavior is critical to the success of large software systems and web services. Yet gaining such insights, while preserving user privacy, is a significant challenge. Recent advancements in multi-party computation have made it practical to compute verifiable aggregates over secret shared data. One important use case for these protocols is heavy hitters, where the servers compute the most popular inputs held by the users without learning the inputs themselves. The Poplar protocol (IEEE S&P 2021) focuses on this use case, but cannot support other aggregation tasks. Another such protocol, Prio (NSDI 2017), supports a wider variety of statistics but is unsuitable for heavy hitters.

We introduce Mastic, a flexible protocol for private and verifiable general-purpose statistics based on function secret sharing and zero-knowledge proofs on secret-shared data. Mastic is the first to solve the more general problem of weighted heavy hitters, enabling new use cases not supported by Prio or Poplar. In addition, Mastic allows grouping general-purpose metrics by user attributes, such as geographic location or browser version, without sacrificing privacy or incurring high performance costs, which is a major improvement over Prio. We demonstrate Mastic's benefits with two real-world applications, private network error logging and browser telemetry, and compare our protocol with Prio and Poplar on a wide area network. Overall, we report over an order of magnitude performance improvement over Poplar for heavy hitters and a $1.5$-$2\times$ improvement over Prio for attribute-based metrics.
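The sketch below shows the core mechanism shared by Prio, Poplar, and Mastic, stripped of the verification layer: each client additively secret-shares its measurement between two servers, and each server aggregates locally, so only the sum is ever reconstructed. The field and parameters are illustrative assumptions.

```python
# Two-server additive secret sharing for private aggregation (no proofs).
import secrets

P = 2**61 - 1   # prime field, hypothetical choice

def share(value: int):
    r = secrets.randbelow(P)
    return r, (value - r) % P            # one share per server

def aggregate(shares0, shares1):
    # Each server sums its own shares; individual inputs stay hidden.
    return (sum(shares0) + sum(shares1)) % P

measurements = [3, 1, 4, 1, 5]
pairs = [share(v) for v in measurements]
assert aggregate([a for a, _ in pairs], [b for _, b in pairs]) == sum(measurements) % P
```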
John Preuß Mattsson
ePrint Report
One-way key chains or ratchets play a vital role in numerous important security protocols such as TLS 1.3, QUIC, Signal, MLS, EDHOC, and OSCORE. Despite the crucial role they play, very little is known about their security properties. This paper categorizes and examines different key chain constructions, offering a comprehensive overview of their security. Our analysis reveals notable distinctions among the number of collisions occurring within chains, between chains, and between a chain and a random set. Notably, the type of key chain used in protocols such as TLS 1.3 and Signal exhibits a significant number of weak keys, an unexpectedly high rate of key collisions surpassing birthday attack expectations, and a predictably shrinking key space susceptible to novel Time-Memory Trade-Off (TMTO) attacks with complexity $\le N^{1/3}$, which is well within the capabilities of current supercomputers and distributed systems. Consequently, the security level provided by, e.g., TLS 1.3 is significantly lower than anticipated based on key sizes. To address these concerns, we analyze the aforementioned protocols and provide numerous concrete recommendations for enhancing their security, as well as guidance for future security protocol design.
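For reference, the construction in question is essentially the iterated one-way chain sketched below; because each step applies a non-bijective function on a finite set, the set of reachable keys contracts as the chain grows, which is the structural fact behind the collision counts and TMTO attacks discussed above. SHA-256 with a fixed label is a stand-in for the protocols' KDF steps, not any protocol's exact derivation.

```python
# A minimal one-way key chain: k_{i+1} = H(label || k_i).
import hashlib, secrets

def next_key(k: bytes) -> bytes:
    # Stand-in for an HKDF-style derivation step.
    return hashlib.sha256(b"chain-step" + k).digest()

k = secrets.token_bytes(32)
chain = [k]
for _ in range(4):
    chain.append(next_key(chain[-1]))
# Iterating a random-looking function shrinks the reachable key space
# step by step, which is what the paper's TMTO attacks exploit.
```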
Pierre Pébereau
ePrint Report
In this work, we study the singular locus of the varieties defined by the public keys of UOV and VOX, two multivariate quadratic signature schemes submitted to the additional NIST call for signature schemes. Singular points do not exist for generic quadratic systems, which enables us to introduce two new algebraic attacks against UOV-based schemes. We show that they can be seen as an algebraic variant of the Kipnis-Shamir attack, which can be obtained in our framework as an enumerative approach to solving a bihomogeneous modeling of the computation of singular points. This allows us to highlight some heuristics implicitly relied on by the Kipnis-Shamir attack.

We give new attacks for UOV$^{\hat +}$ and VOX targeting singular points of the public key equations. Our attacks lower the security of the schemes, both asymptotically and in the number of gates, showing in particular that the parameter sets proposed for these schemes do not meet the NIST security requirements. More precisely, we show that the security of UOV$^{\hat +}$ was overestimated by factors of $2^{22}, 2^{36}, 2^{59}$ for security levels $I, III, V$ respectively. We conclude the attack on VOX by showing that an attacker can perform a full key recovery from one vector obtained in the previous attacks.
Mustafa Khairallah, Srinivasan Yadhunathan, Shivam Bhasin
ePrint Report
In this paper, we propose a leakage-resilient pseudo-random number generator (PRNG) design that leverages the rekeying techniques of the PSV-Enc encryption scheme and the superposition property of the Superposition-Tweak-Key (STK) framework. The random seed of the PRNG is divided into two parts: one part is used as an ephemeral key that changes every two calls to a tweakable block cipher (TBC), and the other part is used as a static long-term key. Using the superposition property, we show that it is possible to eliminate observable leakage by masking only the static key. Thus, our proposal itself can be seen as a superposition of masking and rekeying. We show that our observations can be used to design an unpredictable-with-leakage PRNG as long as the static key is protected and the ephemeral key cannot be attacked with 2 traces. Our construction enjoys better theoretical security arguments than PSV-Enc: a better time-data trade-off and milder leakage assumptions, using the recently popularized notion of unpredictability with leakage. We verify our proposal by performing Test Vector Leakage Assessment (TVLA) on an STK-based TBC (Deoxys) operated with a fixed key and a dynamic random tweak. Our results show that while protecting the static key is non-trivial, it only requires $\approx 10\%$ overhead for first-order protection in the most conservative setting, unlike traditional masking, which may require significant overheads of $300\%$ or more.
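A structural sketch of this pattern follows: the static key stays fixed (and would be masked in hardware), while the ephemeral part enters through the tweak and is refreshed after every two primitive calls, so each ephemeral value is exposed to at most two traces. SHA-256 stands in for the STK-based TBC (the paper uses Deoxys); the exact key/tweak composition here is an illustrative assumption, not the paper's specification.

```python
# Superposition of masking and rekeying: static key + rolling ephemeral tweak.
import hashlib, secrets

def tbc(key: bytes, tweak: bytes, block: bytes) -> bytes:
    # Stand-in for a tweakable block cipher E_K^T(X).
    return hashlib.sha256(key + tweak + block).digest()[:16]

class RekeyingPRNG:
    def __init__(self, seed: bytes):
        self.static_key = seed[:16]    # long-term, side-channel protected
        self.ephemeral = seed[16:32]   # sees at most 2 traces before refresh
        self.calls = 0

    def next_block(self) -> bytes:
        tweak = self.ephemeral + self.calls.to_bytes(8, "big")
        out = tbc(self.static_key, tweak, b"\x00" * 16)
        self.calls += 1
        if self.calls % 2 == 0:        # rekey after every two TBC calls
            self.ephemeral = tbc(self.static_key, self.ephemeral + b"rk", b"\x01" * 16)
        return out

prng = RekeyingPRNG(secrets.token_bytes(32))
block = prng.next_block()
```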
David Du Pont, Jonas Bertels, Furkan Turan, Michiel Van Beirendonck, Ingrid Verbauwhede
ePrint Report
Fully Homomorphic Encryption (FHE) enables computation on encrypted data, holding immense potential for enhancing data privacy and security in various applications. Presently, FHE adoption is hindered by slow computation times, caused by data being encrypted into large polynomials. Optimized FHE libraries and hardware acceleration are emerging to tackle this performance bottleneck. Often, these libraries implement the Number Theoretic Transform (NTT) algorithm for efficient polynomial multiplication. Existing implementations mostly focus on the case where the polynomials are defined over a power-of-two cyclotomic ring, allowing use of the simpler Cooley-Tukey NTT. However, generalized cyclotomics have several benefits in the BGV FHE scheme, including more SIMD plaintext slots and a simpler bootstrapping algorithm.

We present a hardware architecture for the NTT targeting generalized cyclotomics within the context of the BGV FHE scheme. We explore different non-power-of-two NTT algorithms, including the Prime-Factor, Rader, and Bluestein NTTs. Our most efficient architecture targets the 21845-th cyclotomic polynomial --- a practical parameter for BGV --- with ideal properties for use with a combination of the Prime-Factor and Rader algorithms. The design achieves high throughput with optimized resource utilization, by leveraging parallel processing, pipelining, and reusing processing elements. Compared to Wu et al.'s VLSI architecture of the Bluestein NTT, our approach showcases 2$\times$ to 5$\times$ improved throughput and area efficiency. Simulation and implementation results on an AMD Alveo U250 FPGA demonstrate the feasibility of the proposed hardware design for FHE.
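To fix the object being accelerated, the sketch below is a direct quadratic-time NTT and its inverse over $\mathbb{Z}_q$ at a non-power-of-two length; the paper's architecture computes the same transform but via the Prime-Factor and Rader decompositions in hardware. The toy parameters ($q = 97$, $n = 6$) are illustrative only.

```python
# Direct NTT/INTT over Z_q at a non-power-of-two length n.
Q, N = 97, 6
G = pow(5, (Q - 1) // N, Q)   # 5 is a primitive root mod 97, so G has order 6

def ntt(a):
    return [sum(a[j] * pow(G, i * j, Q) for j in range(N)) % Q for i in range(N)]

def intt(A):
    g_inv, n_inv = pow(G, -1, Q), pow(N, -1, Q)
    return [n_inv * sum(A[i] * pow(g_inv, i * j, Q) for i in range(N)) % Q
            for j in range(N)]

a = [3, 1, 4, 1, 5, 9]
assert intt(ntt(a)) == a      # round-trip check
```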
Pedro Branco, Nico Döttling, Akshayaram Srinivasan, Riccardo Zanotto
ePrint Report
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest.

Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that the $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$; in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE).

In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Alexander Bienstock, Sarvar Patel, Joon Young Seo, Kevin Yeo
ePrint Report
In this paper, we study two problems: oblivious compression and decompression of ciphertexts. In oblivious compression, a server holds a set of ciphertexts with a subset of encryptions of zeroes whose positions are known only to the client. The goal is for the server to effectively compress the ciphertexts obliviously, preserving the non-zero plaintexts without learning the plaintext values. For oblivious decompression, the client instead succinctly encodes a sequence of plaintexts such that the server may decode encryptions of all plaintext values, where the zeroes may be replaced with arbitrary values. We present solutions to both problems that construct lossless compressions only 5% larger than the optimal minimum, using only additive homomorphism. The crux of both algorithms involves embedding ciphertexts as random linear systems that are efficiently solvable.

Using our compression schemes, we obtain state-of-the-art schemes for batch private information retrieval (PIR) where a client wishes to privately retrieve multiple entries from a server-held database in one query. We show that our compression schemes may be used to reduce communication by up to 30% for batch PIR in both the single- and two-server settings.

Additionally, we study labeled private set intersection (PSI) in the unbalanced setting, where one party's set is significantly smaller than the other party's and each entry has associated data. By utilizing our novel compression algorithm, we present a protocol with a 65-88% reduction in communication and comparable computation compared to prior works.
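The random-linear-system idea behind the compression schemes can be seen in plaintext form below: a vector that is zero outside a hidden support is compressed into as many random linear combinations as there are non-zero entries, and the client, who knows the support, solves the resulting square system. Additive homomorphism would let the server evaluate these combinations directly on ciphertexts; the field size and dimensions here are illustrative assumptions.

```python
# Compress a sparse vector to |support| random linear combinations,
# then recover it by solving a square linear system mod P.
import random

P = 2**31 - 1                        # toy prime field
n, support = 8, [2, 5, 6]            # support is known only to the client
x = [0] * n
x[2], x[5], x[6] = 11, 22, 33

# Server: emit len(support) random combinations (it needs only the count).
A = [[random.randrange(P) for _ in range(n)] for _ in range(len(support))]
y = [sum(aij * xj for aij, xj in zip(row, x)) % P for row in A]

def solve_mod_p(B, rhs):
    # Gauss-Jordan elimination mod P; a random square matrix over a large
    # prime field is invertible with overwhelming probability.
    m = len(B)
    M = [row[:] + [r] for row, r in zip(B, rhs)]
    for c in range(m):
        piv = next(r for r in range(c, m) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, P)
        M[c] = [v * inv % P for v in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                M[r] = [(vr - M[r][c] * vc) % P for vr, vc in zip(M[r], M[c])]
    return [row[m] for row in M]

# Client: restrict columns to the known support and solve.
B = [[row[j] for j in support] for row in A]
assert solve_mod_p(B, y) == [11, 22, 33]
```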
Michele Battagliola, Andrea Flamini
ePrint Report
The recent surge of distributed technologies has caused increasing interest in threshold signature protocols, which peaked with the recent NIST First Call for Multi-Party Threshold Schemes.

Since its introduction, the Fiat-Shamir Transform has been the most popular way to design standard digital signature schemes. In this work, we translate the Fiat-Shamir Transform into a multi-party setting, building a framework that seeks to be an alternative, easier way to design threshold digital signatures. We do so by introducing the concepts of threshold identification schemes and threshold sigma protocols, and showing necessary and sufficient conditions to prove the security of the threshold signature schemes derived from them.

Lastly, we show a practical application of our framework providing an alternative security proof for Sparkle, a recent threshold Schnorr signature. In particular, we consider the threshold identification scheme underlying Sparkle and prove the security of the signature derived from it.

We show that using our framework, the effort required to prove the security of threshold signatures can be drastically lowered. Instead of explicitly reducing the security of a scheme to the security of a hard problem, it is enough to prove certain properties of the underlying threshold sigma protocol and threshold identification scheme; the results we prove in this paper then guarantee that the derived threshold signature is secure.
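For readers unfamiliar with the single-party template being lifted, the sketch below applies the Fiat-Shamir Transform to Schnorr identification, replacing the verifier's random challenge with a hash; the threshold setting distributes the secret $x$ and the computation of the commitment and response across parties. The toy group parameters are illustrative assumptions.

```python
# Fiat-Shamir applied to Schnorr identification (single-party template).
import hashlib, secrets

P, Q, G = 2039, 1019, 4   # toy group: G generates the order-Q subgroup of Z_P^*

def challenge(R: int, msg: bytes) -> int:
    # Fiat-Shamir: the challenge is a hash of the commitment and message.
    return int.from_bytes(hashlib.sha256(R.to_bytes(2, "big") + msg).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q)
    return x, pow(G, x, P)            # secret key x, public key X = G^x

def sign(x, msg: bytes):
    r = secrets.randbelow(Q)          # commitment randomness
    R = pow(G, r, P)
    z = (r + challenge(R, msg) * x) % Q
    return R, z

def verify(X, msg, sig) -> bool:
    R, z = sig
    return pow(G, z, P) == (R * pow(X, challenge(R, msg), P)) % P

x, X = keygen()
assert verify(X, b"hello", sign(x, b"hello"))
```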
Charlotte Lefevre
ePrint Report
This note examines a nuance in the methods employed for counting the adversarial online complexity in the security proofs of duplex-based modes, with a focus on authenticated encryption. A recent study by Gilbert et al. reveals an attack on a broad class of duplex-based authenticated encryption modes. In particular, their approach to quantifying the adversarial online complexity, which captures realistic attack scenarios, counts certain queries that the security proofs do not. This note analyzes these differences and concludes that the attack of Gilbert et al., for certain parameter choices, matches the security bound.