IACR News
Here you can see all recent updates to the IACR webpage.
13 September 2023
Fuchun Lin, Chaoping Xing, Yizhou Yao, Chen Yuan
ePrint Report
Reversed multiplication friendly embedding (RMFE) amortization has been playing an active role in state-of-the-art constructions of MPC protocols over rings (in particular, the ring $\mathbb{Z}_{2^k}$). As far as we know, this powerful technique has not yet found applications in the crown jewel of two-party computation, non-interactive secure computation (NISC), where the requirement that the protocol be non-interactive constitutes a formidable technical bottleneck. We initiate such a study, focusing on statistical NISC protocols in the VOLE-hybrid model. Our study begins with making the {\em decomposable affine randomized encoding (DARE)}-based semi-honest NISC protocol compatible with RMFE techniques, which, together with known techniques for forcing a malicious sender Sam to honestly follow DARE, already yields a secure amortized protocol, assuming both parties follow the RMFE encoding. Achieving statistical security in the fully malicious setting is much more challenging, as applying known techniques for enforcing compliance with RMFE incurs interaction. To solve this problem, we put forward a new notion dubbed non-malleable RMFE (NM-RMFE): a randomized RMFE such that, once one party deviates from the encoding specification, the randomness injected by the other party randomizes the output, preventing information from being leaked. NM-RMFE simultaneously forces both parties to follow the RMFE encoding, offering the desired {\em non-interactive} solution to amortizing NISC. We believe that NM-RMFE is on its own an important primitive with applications in secure computation and beyond, interactive and non-interactive alike. With an asymptotically good instantiation of our NM-RMFE, we obtain the first {\em statistical} reusable NISC protocols in the VOLE-hybrid model with {\em constant communication overhead} for arithmetic branching programs over $\mathbb{Z}_{2^k}$.
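For context, the conventional (non-randomized) RMFE on which the paper's NM-RMFE builds is standard: a $(k,m)_q$-RMFE is a pair of $\mathbb{F}_q$-linear maps $\phi: \mathbb{F}_q^k \to \mathbb{F}_{q^m}$ and $\psi: \mathbb{F}_{q^m} \to \mathbb{F}_q^k$ such that $\psi(\phi(\mathbf{x}) \cdot \phi(\mathbf{y})) = \mathbf{x} * \mathbf{y}$ for all $\mathbf{x}, \mathbf{y} \in \mathbb{F}_q^k$, where $*$ denotes the coordinate-wise product. A single extension-field multiplication thus amortizes $k$ parallel base-field multiplications; the $\mathbb{Z}_{2^k}$ setting uses the analogous notion over Galois rings.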
As side contributions, we consider computational security and present two concretely efficient NISC constructions in the random oracle model from conventional RMFEs.
David Fifield
ePrint Report
This article presents three retrospective case studies of cryptography-related flaws in censorship circumvention protocols: a decryption oracle in Shadowsocks “stream cipher” methods, non-uniform Elligator public key representatives in obfs4, and a replay-based active distinguishing attack exploiting malleability in VMess. These three protocols come from the family of “fully encrypted” circumvention protocols: their traffic in both directions is indistinguishable from a uniformly random stream of bytes (or at least, is supposed to be). Some of the flaws are fixable implementation errors; others are rooted in more fundamental design errors. Their consequences range from enabling passive probabilistic detection to complete loss of confidentiality. All have been fixed, mitigated, or superseded since their discovery.
My primary purpose is to provide an introduction to circumvention threat models for specialists in cryptography, and to make the point that while cryptography is a necessary tool in circumvention, it is not the sole or even most important consideration. Secondarily, I want to furnish a few instructive examples of cryptographic design and implementation errors in uncontrived, deployed protocols. While the flaws I discuss affected systems of significant social importance with millions of collective users, they are not well-known outside a small circle of specialists in circumvention.
Amit Singh Bhati, Erik Pohle, Aysajan Abidin, Elena Andreeva, Bart Preneel
ePrint Report
IoT devices collect privacy-sensitive data, e.g., in smart grids or in medical devices, and send this data to cloud servers for further processing. In order to ensure confidentiality as well as authenticity of the sensor data in the untrusted cloud environment, we consider a transciphering scenario between embedded IoT devices and multiple cloud servers that perform secure multi-party computation (MPC). Concretely, the IoT devices encrypt their data with a lightweight symmetric cipher and send the ciphertext to the cloud servers. To obtain the secret shares of the cleartext message for further processing, the cloud servers engage in an MPC protocol to decrypt the ciphertext in a distributed manner. This way, the plaintext is never exposed to the individual servers.
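A minimal sketch of this flow, assuming a toy 2-out-of-2 XOR sharing with a one-time pad standing in for the forkcipher-based AEAD (in the actual setting, decryption is a joint MPC evaluation of the cipher, not the local XORs shown here):

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Setup: the IoT device holds a symmetric key k; the two cloud
    # servers hold it only as XOR shares k = k0 ^ k1.
    k = secrets.token_bytes(16)
    k0 = secrets.token_bytes(16)
    k1 = xor(k, k0)

    # IoT device: encrypt the sensor reading with the lightweight
    # cipher (toy stand-in: one-time pad) and send c to the cloud.
    m = b"sensor:23.5C...."
    c = xor(m, k)

    # Cloud servers: "decrypt" into shares so neither server sees m.
    # With this linear toy cipher the step is local: server 0 computes
    # c ^ k0, server 1 keeps k1, and the XOR of the shares equals m.
    m0 = xor(c, k0)   # server 0's share of the cleartext
    m1 = k1           # server 1's share of the cleartext
    assert xor(m0, m1) == m   # shares recombine to the plaintext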
As an important building block in this scenario, we propose a new, provably secure family of lightweight modes for authenticated encryption with associated data (AEAD), called Eevee. The Eevee family has fully parallel decryption, making it suitable for MPC protocols for which the round complexity depends on the complexity of the function they compute. Further, our modes use the lightweight forkcipher primitive that offers fixed-length output expansion and a compact yet parallelizable internal structure.
All Eevee members improve substantially over the few available state-of-the-art (SotA) MPC-friendly modes and other standard solutions. We benchmark the Eevee family on a microcontroller and in MPC. Our proposed mode Jolteon (when instantiated with ForkSkinny) provides a 1.85x to 3.64x speedup in IoT-encryption time and a 3x to 4.5x speedup in both MPC-decryption time and data for very short queries of 8 bytes, and speedups of 1.55x to 3.04x and 1.23x to 2.43x, respectively, in MPC-decryption time and data for queries up to 500 bytes, when compared against SotA MPC-friendly modes instantiated with SKINNY. We also provide two advanced modes, Umbreon and Espeon, that show a favorable performance-security trade-off with stronger security guarantees such as nonce-misuse security. Additionally, all Eevee members have full $n$-bit security (where $n$ is the block size of the underlying primitive), use a single primitive, and require a smaller state and HW area compared with the SotA modes under their original security settings.
Gijs van Dam
ePrint Report
Bitcoin has a low throughput of around 7 transactions per second. The Lightning Network (LN) is a solution meant to improve that throughput while also improving privacy. LN is a Payment Channel Network (PCN) that runs as a peer-to-peer network on top of Bitcoin and improves scalability by keeping most transactions off-chain without sacrificing the trustless character of Bitcoin. Prior work showed that LN is susceptible to the Balance Discovery Attack (BDA), which allows individual channel balances to be revealed, threatening users' privacy.
In this work we introduce Payment Splitting and Switching (PSS), a way of splitting up payments in LN at intermediary hops along the payment path. PSS drastically reduces the information an attacker can obtain through a BDA. Using real-world data in an LN simulator, we demonstrate that the information gain for the attacker drops by up to 62% when PSS is deployed. Apart from its potential as a mitigation against BDAs, PSS also shows promise for increasing LN throughput and as a mitigation against jamming attacks.
Qingliang Hou, Xiaoyang Dong, Lingyue Qin, Guoyan Zhang, Xiaoyun Wang
ePrint Report
Feistel networks and their generalizations (GFN) are important building blocks for constructing hash functions, e.g., Simpira v2, Areion, and the ISO standard Lesamnta-LW. The Meet-in-the-Middle (MitM) attack is a general paradigm for building preimage and collision attacks on hash functions, and it has been automated in several papers. However, those automatic tools mostly focus on hash functions built from Substitution-Permutation networks (SPN); only one, by Schrottenloher and Stevens (CRYPTO 2022), targets Feistel networks.
In this paper, we introduce a new automatic model for MitM attacks on Feistel networks by generalizing the traditional direct and indirect partial matching strategies as well as Sasaki's multi-round matching strategy. We also find that equivalent transformations of Feistel and GFN constructions significantly simplify the MILP model. Based on our automatic model, we improve the preimage attacks on Feistel-SP-MMO, Simpira-2/-4-DM, and Areion-256/-512-DM by 1-2 rounds or significantly reduce their complexities. Furthermore, we fill the gap left by Schrottenloher and Stevens at CRYPTO 2022 on attacking Simpira-b with large branch numbers (b > 4) and propose the first 11-round attack on Simpira-6. Finally, we significantly improve the collision attack on the ISO standard hash Lesamnta-LW, increasing the number of attacked rounds from the previous 11 to 17.
Weijie Wang, Yujie Lu, Charalampos Papamanthou, Fan Zhang
ePrint Report
Motivated by the extended deployment of authenticated data structures (e.g., Merkle Patricia Tries) for verifying massive amounts of data in blockchain systems, we begin a systematic study of the I/O efficiency of such systems. We first explore the fundamental limitations of memory checking, a previously-proposed abstraction for verifiable storage, in terms of its locality---a complexity measure that we introduce for the first time and is defined as the number of non-contiguous memory regions a checker must query to verifiably answer a read or a write query. Our central result is an $\Omega(\log n /\log \log n)$ lower bound for the locality of any memory checker. Then we turn our attention to (dense and sparse) Merkle trees, one of the most celebrated memory checkers, and provide stronger lower bounds for their locality. For example, we show that any dense Merkle tree layout will have average locality at least $\frac 13 \log n$. Furthermore, if we allow node duplication, we show that if any write operation has at most polylog complexity, then the read locality cannot be less than $\log n/\log \log n$. Our lower bounds help us construct two new locality-optimized authenticated data structures (DupTree and PrefixTree) which we implement and evaluate on random operations and real workloads, and which are shown to outperform traditional Merkle trees, especially as the number of leaves increases.
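To see where locality costs arise, consider the textbook implicit-array (heap) layout of a dense Merkle tree (not the paper's optimized structures): verifying one leaf reads the sibling node at every level, and those array indices lie in far-apart memory regions. A small illustration:

    # Nodes of a dense Merkle tree over 2**height leaves in one array:
    # root at index 1, children of node i at 2*i and 2*i+1 (heap layout).
    def auth_path_indices(leaf: int, height: int) -> list[int]:
        """Array indices read to verify one leaf: the sibling per level."""
        i = (1 << height) + leaf      # index of the leaf node
        path = []
        while i > 1:
            path.append(i ^ 1)        # sibling of the current node
            i //= 2                   # move up to the parent
        return path

    # For n = 2**20 leaves, the ~20 sibling indices span wildly
    # different regions of the array: one non-contiguous access per
    # level, i.e., Theta(log n) locality for this naive layout.
    print(auth_path_indices(leaf=5, height=20))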
Koustabh Ghosh, Parisa Amiri Eliasi, Joan Daemen
ePrint Report
In this paper we introduce a new keyed hash function based on 32-bit integer multiplication, which we call Multimixer-128. We follow the key-then-hash parallel paradigm: we first add a variable-length input message to a secret key and split the result into blocks. A fixed-length public function based on integer multiplication is then applied to each block, and the results are added to form the digest. We prove an upper bound of $2^{-127}$ on the universality of Multimixer-128 by means of the differential probability and image probability of the underlying public function.
Many CPUs provide vector instructions for fast 32-bit integer multiplication, and on such platforms Multimixer-128 is very efficient. We compare our implementation of Multimixer-128 with the NH hash function family, which offers similar levels of security, and with the two fastest NIST LWC candidates. To the best of our knowledge, NH is the fastest keyed hash function in software, and Multimixer-128 outperforms NH while providing the same level of security.
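A schematic of the key-then-hash paradigm in Python; the per-block function F and the XOR key addition below are illustrative placeholders, not Multimixer-128's actual multiplication-based public function or its key addition:

    import secrets

    BLOCK = 32  # bytes per block (illustrative size)

    def F(block: bytes) -> int:
        # Placeholder for the fixed-length public function; the real
        # Multimixer-128 builds this from 32-bit integer multiplications.
        return int.from_bytes(block, "little") * 0x9E3779B97F4A7C15 % 2**128

    def key_then_hash(key: bytes, msg: bytes) -> int:
        # Pad, add the (equally long) key to the message, split into
        # blocks, apply F to each block, and sum the results.
        msg = msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % BLOCK)
        assert len(key) >= len(msg), "key-then-hash keys are message-length"
        digest = 0
        for i in range(0, len(msg), BLOCK):   # blocks are independent,
            block = bytes(m ^ k for m, k in    # hence fully parallel
                          zip(msg[i:i+BLOCK], key[i:i+BLOCK]))
            digest = (digest + F(block)) % 2**128
        return digest

    msg = b"example sensor payload"
    key = secrets.token_bytes(len(msg) + BLOCK)
    print(hex(key_then_hash(key, msg)))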
George Teseleanu, Paul Cotan
ePrint Report
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. Elkamchouchi, Elshenawy and Shaban presented in 2002 an interesting RSA-like cryptosystem that uses the key equation $ed - k (p^2-1)(q^2-1) = 1$, instead of the classical RSA key equation $ed - k (p-1)(q-1) = 1$. The authors claimed that their scheme is more secure than RSA. Unfortunately, the common attacks developed against RSA can be adapted for Elkamchouchi \emph{et al.}'s scheme. In this paper, we introduce a family of RSA-like encryption schemes that uses the key equation $ed - k (p^n-1)(q^n-1) = 1$, where $n>0$ is an integer. Then, we show that regardless of the choice of $n$, there exists an attack based on continued fractions that recovers the secret exponent.
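A toy instantiation of the generalized key equation for $n = 2$ (parameters far too small to be secure, purely to illustrate the relation):

    from math import gcd

    p, q, n = 1009, 1013, 2              # toy balanced primes
    N = p * q
    phi_n = (p**n - 1) * (q**n - 1)      # replaces (p-1)(q-1) of plain RSA

    e = 65537
    assert gcd(e, phi_n) == 1
    d = pow(e, -1, phi_n)                # modular inverse (Python 3.8+)

    k = (e * d - 1) // phi_n
    assert e * d - k * phi_n == 1        # the key equation ed - k(p^n-1)(q^n-1) = 1

The continued-fraction attack in the paper applies when $d$ is small relative to $N$, in the spirit of Wiener's classical attack on RSA.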
Liqing Yu, Yusai Wu, Yu Yu, Zhenfu Cao, Xiaolei Dong
ePrint Report
This work studies the key-alternating ciphers (KACs) whose round permutations are not necessarily independent. We revisit existing security proofs for key-alternating ciphers with a single permutation (KACSPs), and extend their method to an arbitrary number of rounds. In particular, we propose new techniques that can significantly simplify the proofs, and also remove two unnatural restrictions in the known security bound of 3-round KACSP (Wu et al., Asiacrypt 2020). With these techniques, we prove the first tight security bound for t-round KACSP, which was an open problem. We stress that our techniques apply to all variants of KACs with non-independent round permutations, as well as to the standard KACs.
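For reference, the $t$-round key-alternating cipher with round keys $k_0, \dots, k_t$ and public permutations $P_1, \dots, P_t$ is $\mathrm{KAC}(x) = k_t \oplus P_t(\cdots P_2(k_1 \oplus P_1(k_0 \oplus x)) \cdots)$; a KACSP is the single-permutation variant $P_1 = \cdots = P_t = P$, i.e., the case where the round permutations are maximally non-independent.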
11 September 2023
Adi Akavia, Ben Galili, Hayim Shaul, Mor Weiss, Zohar Yakhini
ePrint Report
Privacy-Preserving Machine Learning (PPML) provides protocols for learning and statistical analysis of data that may be distributed amongst multiple data owners (e.g., hospitals that own proprietary healthcare data), while preserving data privacy. The PPML literature includes protocols for various learning methods, including ridge regression. Ridge regression controls the $L_2$ norm of the model, but does not aim to strictly reduce the number of non-zero coefficients, namely the $L_0$ norm of the model. Reducing the number of non-zero coefficients (a form of feature selection) is important for avoiding overfitting, and for reducing the cost of using learnt models in practice.
In this work, we develop a first privacy-preserving protocol for sparse linear regression under $L_0$ constraints. The protocol addresses data contributed by several data owners (e.g., hospitals). Our protocol outsources the bulk of the computation to two non-colluding servers, using homomorphic encryption as a central tool. We provide a rigorous security proof for our protocol, where security is against semi-honest adversaries controlling any number of data owners and at most one server. We implemented our protocol, and evaluated performance with nearly a million samples and up to 40 features.
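Concretely (standard formulations, not notation from the paper): ridge regression solves $\min_{\mathbf{w}} \|X\mathbf{w} - \mathbf{y}\|_2^2 + \lambda \|\mathbf{w}\|_2^2$, whereas the sparse regression targeted here solves $\min_{\mathbf{w}} \|X\mathbf{w} - \mathbf{y}\|_2^2$ subject to $\|\mathbf{w}\|_0 \le s$, where $\|\mathbf{w}\|_0$ counts the non-zero coefficients and $s$ bounds the number of selected features.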
Huiqin Chen, Yongqiang Li, Xichao Hu, Zhengbin Liu, Lin Jiao, Mingsheng Wang
ePrint Report
The design and analysis of dedicated tweakable block ciphers constitute a dynamic and relatively recent research field in symmetric cryptanalysis. The assessment of security in the related-tweakey model is of utmost importance owing to the existence of a public tweak. This paper proposes an automatic search model for identifying related-tweakey impossible differentials based on the propagation of states under specific constraints, inspired by the work of Hu et al. at ASIACRYPT 2020. Our model is universally applicable to block ciphers, but its search efficiency may be limited in some cases. To address this issue, we introduce the Locality Constraint Analysis (LCA) technique to impossible differential cryptanalysis and propose a generalized automatic search model. Technically, we transform our models into Satisfiability Modulo Theories (SMT) problems and solve them using the STP solver. We have applied our tools to several tweakable block ciphers, such as Joltik-BC, SKINNY, QARMA, and CRAFT, to evaluate their effectiveness and practicality. Specifically, we discover, for the first time, 7-round related-tweakey impossible differentials for Joltik-BC-192, as well as 12-round related-tweak and 15-round related-tweakey impossible differentials for CRAFT. Based on the search results, we demonstrate that the LCA technique can be effectively applied when searching for and determining the contradictory positions of distinguishers with long trails or ciphers with large sizes in impossible differential cryptanalysis.
Emanuele Bellini, Juan Grados, Mohamed Rachidi, Nitin Satpute, Joan Daemen, Solane Elhirch
ePrint Report
In this work, we tackle the problem of estimating the security of iterated symmetric ciphers in an efficient manner, with tests that do not require a deep analysis of the internal structure of the cipher. This is particularly useful during the design phase of these ciphers, especially for quickly testing several combinations of possible parameters defining several cipher design variants.
We consider a popular statistical test that allows us to determine the probability of flipping each cipher output bit, given a small variation in the input of the cipher. From these probabilities, one can compute three measurable metrics related to the well-known full diffusion, avalanche and strict avalanche criteria.
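A single-threaded reference for the statistic in question, with a hypothetical stand-in for the cipher variant under test (the paper's contribution is running exactly this kind of counting at scale on GPUs):

    import secrets

    BITS = 64  # toy block size

    def cipher(x: int) -> int:
        # Hypothetical stand-in for one cipher design variant under test.
        x = x * 0x9E3779B97F4A7C15 % 2**BITS
        return x ^ (x >> 29)

    def flip_probabilities(samples: int = 1_000) -> list[list[float]]:
        # prob[i][j] ~ Pr[output bit j flips | input bit i is flipped].
        counts = [[0] * BITS for _ in range(BITS)]
        for _ in range(samples):
            x = secrets.randbelow(2**BITS)
            y = cipher(x)
            for i in range(BITS):
                diff = y ^ cipher(x ^ (1 << i))   # output difference
                for j in range(BITS):
                    counts[i][j] += (diff >> j) & 1
        return [[c / samples for c in row] for row in counts]

    # Full diffusion: every entry nonzero; avalanche: each row sums to
    # about BITS/2; strict avalanche: every entry close to 1/2.
    prob = flip_probabilities()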
This highly parallelizable testing process scales linearly with the number of samples (i.e., cipher inputs) to be evaluated and with the number of design variants to be tested. However, the number of design variants might grow exponentially with respect to some parameters, and the high cost of CPUs makes them a poor candidate for this kind of parallelization. As our main contribution, we propose a framework, ACE-HoT, that parallelizes the testing process across multiple GPUs. Our implementation does not perform any intermediate CPU-GPU data transfers.
The diffusion and avalanche criteria can be seen as an application of discrete first-order derivatives. As a secondary contribution, we generalize these criteria to their high-order version. Our generalization requires an exponentially larger number of samples, in order to compute sufficiently accurate probabilities.
As a case study, we apply ACE-HoT on most of the finalists of the NIST lightweight standardization process, with a special focus on the winner ASCON.
Khoa Nguyen, Partha Sarathi Roy, Willy Susilo, Yanhong Xu
ePrint Report
This paper introduces Bicameral and Auditably Private Signatures (BAPS) -- a new privacy-preserving signature system with several novel features. In a BAPS system, given a certified attribute $\mathbf{x}$ and a certified policy $P$, a signer can issue a publicly verifiable signature $\Sigma$ on a message $m$ as long as $(m, \mathbf{x})$ satisfies $P$. A noteworthy characteristic of BAPS is that both attribute $\mathbf{x}$ and policy $P$ are kept hidden from the verifier, yet the latter is convinced that
these objects were certified by an attribute-issuing authority and a policy-issuing authority, respectively. By considering bicameral certification authorities and requiring privacy for both attributes and policies, BAPS generalizes the spirit of existing advanced signature primitives with fine-grained controls on signing capabilities (e.g., attribute-based signatures, predicate signatures, policy-based signatures). Furthermore, BAPS provides an appealing feature named auditable privacy, allowing the signer of $\Sigma$ to verifiably disclose various pieces of partial information about $P$ and $\mathbf{x}$ when asked by auditor(s)/court(s) at later times. Auditable privacy is intrinsically different from and can be complementary to the notion of accountable privacy traditionally incorporated in traceable anonymous systems such as group signatures. Equipped with these distinguished features, BAPS can potentially address interesting application scenarios for which existing primitives do not offer a direct solution.
We provide rigorous security definitions for BAPS, following a ``sim-ext'' approach. We then demonstrate a generic construction based on commonly used cryptographic building blocks, which employs a sign-then-commit-then-prove design. Finally, we present a concrete instantiation of BAPS that is proven secure in the random oracle model under lattice assumptions. The scheme can handle arbitrary policies represented by polynomial-size Boolean circuits and can address quadratic disclosing functions. In the construction process, we develop a new technical building block that could be of independent interest: a zero-knowledge argument system that allows proving the satisfiability of a certified-and-hidden Boolean circuit on certified-and-committed inputs.
Atsuki Momose, Sourav Das, Ling Ren
ePrint Report
The constant-sized polynomial commitment scheme by Kate, Zaverucha, and Goldberg (Asiacrypt 2010), also known as the KZG commitment, is an essential component in designing bandwidth-efficient verifiable secret-sharing (VSS) protocols. We point out, however, that the KZG commitment is missing two important properties that are crucial for VSS protocols.
First, the KZG commitment has not been proven to be degree binding in the standard adversary model without idealized group assumptions. In other words, the committed polynomial is not guaranteed to have the claimed degree, which is supposed to be the reconstruction threshold of VSS. Without this property, shareholders in VSS may end up reconstructing different secrets depending on which shares are used.
Second, the KZG commitment does not support polynomials with different degrees at once with a single setup. If the reconstruction threshold of the underlying VSS protocol changes, the protocol must redo the setup, which involves an expensive multi-party computation known as the powers of tau setup.
In this work, we augment the KZG commitment to address both of these limitations. Our scheme is degree-binding in the standard model under the strong Diffie-Hellman (SDH) assumption. It supports any degree $0 < d \le m$ under a powers-of-tau common reference string with $m+1$ group elements generated by a one-time setup.
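To recall the scheme being augmented: with a powers-of-tau CRS $(g, g^{\tau}, \dots, g^{\tau^m})$, KZG commits to $f(X) = \sum_i f_i X^i$ as $C = \prod_i (g^{\tau^i})^{f_i} = g^{f(\tau)}$, and an evaluation $f(z) = v$ is opened with the witness $w = g^{q(\tau)}$ for $q(X) = (f(X) - v)/(X - z)$, verified via the pairing check $e(C \cdot g^{-v}, g) = e(w, g^{\tau} \cdot g^{-z})$. Note that nothing in this check constrains the degree of $f$, which is exactly the first gap described above.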
Mi-Ying (Miryam) Huang, Xinyu Mao, Guangxu Yang, Jiapeng Zhang
ePrint Report
Constructing key-agreement protocols in the random oracle model (ROM) is a viable method to assess the feasibility of developing public-key cryptography within Minicrypt. Unfortunately, as shown by Impagliazzo and Rudich (STOC 1989) and Barak and Mahmoody (Crypto 2009), such protocols can only guarantee limited security: any $\ell$-query protocol can be attacked by an $O(\ell^2)$-query adversary. This quadratic gap matches the key-agreement protocol proposed by Merkle (CACM 78), known as Merkle's Puzzles.
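For readers who have not seen it, a toy sketch of Merkle's Puzzles (the hash-based puzzle encoding and all parameters are illustrative; formal treatments count random-oracle queries rather than this brute-force loop):

    import hashlib, secrets

    L = 2**8  # number of puzzles (toy); eavesdropper work ~ L**2

    def pad(w: bytes, n: int) -> bytes:
        """Expand a weak secret w into an n-byte keystream."""
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(w + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Alice: each puzzle hides (tag, id, key) under a weak secret drawn
    # from a space of size L, so solving ONE puzzle costs O(L) hashes.
    table, puzzles = {}, []
    for _ in range(L):
        w = secrets.randbelow(L).to_bytes(4, "big")
        pid, key = secrets.token_bytes(8), secrets.token_bytes(16)
        table[pid] = key
        puzzles.append(xor(b"TAG!" + pid + key, pad(w, 28)))

    # Bob: pick one puzzle, brute-force its weak secret, announce the id.
    puzzle = secrets.choice(puzzles)
    for guess in range(L):
        m = xor(puzzle, pad(guess.to_bytes(4, "big"), 28))
        if m.startswith(b"TAG!"):
            pid, key = m[4:12], m[12:]
            break

    assert table[pid] == key  # Alice looks up the shared key from pid
    # Eve sees only pid and the L puzzles: she expects to solve ~L/2
    # puzzles, i.e., Theta(L^2) work, before hitting the one Bob chose.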
Besides query complexity, the communication complexity of key-agreement protocols in the ROM is also an interesting question in the realm of fine-grained cryptography, even though only limited security is achievable. Haitner et al. (ITCS 2019) first observed that in Merkle's Puzzles, to obtain secrecy against an eavesdropper with $O(\ell^2)$ queries, the honest parties must exchange $\Omega(\ell)$ bits. Therefore, they conjectured that high communication complexity is unavoidable, i.e., any $\ell$-query protocol with $c$ bits of communication can be attacked by an $O(c\cdot \ell)$-query adversary. This, if true, would suggest that Merkle's Puzzles is also optimal regarding communication complexity. Building upon techniques from communication complexity, Haitner et al. (ITCS 2019) confirmed this conjecture for two types of key-agreement protocols with certain natural properties.
This work affirms the above conjecture for all non-adaptive protocols with perfect completeness. Our proof uses a novel idea called the density increment argument. This method could be of independent interest, as it differs from previous communication lower bound techniques (and bypasses some technical barriers).
Renas Bacho, Julian Loss
ePrint Report
Publicly Verifiable Secret Sharing (PVSS) is a fundamental primitive that allows sharing a secret $S$ among $n$ parties via a publicly verifiable transcript $T$. Existing (efficient) PVSS schemes are only proven secure against static adversaries, who must choose whom to corrupt ahead of a protocol execution. As a result, any protocol (e.g., a distributed randomness beacon) that builds on top of such a PVSS scheme inherits this limitation. To overcome this barrier, we revisit the security of PVSS under adaptive corruptions and show that, surprisingly, many protocols from the literature already achieve it in a meaningful way:
- We propose a new security definition for aggregatable PVSS, i.e., schemes that allow homomorphically combining multiple transcripts into one compact aggregate transcript $AT$ that shares the sum of their individual secrets. Our notion captures that if the secret shared by $AT$ contains at least one contribution from an honestly generated transcript, it should not be predictable. We then prove that several existing schemes satisfy this notion against adaptive corruptions in the algebraic group model.
- To motivate our new notion, we show that it implies the adaptive security of two recent random beacon protocols, SPURT (S&P '22) and OptRand (NDSS '23), which build on top of aggregatable PVSS schemes satisfying our notion of unpredictability. For a security parameter $\lambda$, our result improves the communication complexity of the best known adaptively secure random beacon protocols to $O(\lambda n^2)$ for synchronous networks with $t
Aydin Abadi, Steven J. Murdoch
ePrint Report
Repeated modular squaring plays a crucial role in various time-based cryptographic primitives, such as Time-Lock Puzzles and Verifiable Delay Functions. At ACM CCS 2021, Thyagarajan et al. introduced “OpenSquare”, a decentralised protocol that lets a client delegate the computation of repeated modular squaring to third-party servers while ensuring that these servers are compensated only if they deliver valid results. In this work, we unveil a significant vulnerability in OpenSquare, which enables servers to receive payments without fulfilling the delegated task. To tackle this issue, we present a series of mitigation measures.
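The delegated task at the core of OpenSquare is the classic sequential computation $x^{2^T} \bmod N$ from time-lock puzzles, performed as $T$ successive squarings. A toy run (small primes for illustration only; real deployments use an RSA modulus of unknown factorization):

    # T sequential modular squarings: y = x^(2^T) mod N. Without the
    # factorization of N, no known shortcut beats performing the T
    # squarings one after another, which is what makes the task a
    # measure of elapsed sequential work.
    def repeated_squaring(x: int, T: int, N: int) -> int:
        y = x % N
        for _ in range(T):
            y = y * y % N
        return y

    p, q = 1000003, 1000033   # toy primes; real puzzles use ~2048-bit N
    N = p * q
    print(repeated_squaring(x=7, T=100_000, N=N))

    # Knowing p and q, one can shortcut by reducing the exponent --
    # exactly the trapdoor unavailable to the delegated servers:
    assert repeated_squaring(7, 100_000, N) == \
        pow(7, pow(2, 100_000, (p - 1) * (q - 1)), N)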
Christophe Hauser, Shirin Nilizadeh, Yan Shoshitaishvili, Ni Trieu, Srivatsan Ravi, Christopher Kruegel, Giovanni Vigna
ePrint Report
Over the last decade, online reputation has become a central aspect of our digital lives. Most online services and communities assign a reputation score to users, based on feedback from other users about various criteria such as how reliable, helpful, or knowledgeable a person is. While many online services compute reputation based on the same set of such criteria, users currently do not have the ability to use their reputation scores across services. As a result, users face trouble establishing themselves on new services or trusting each other on services that do not support reputation tracking. Existing systems that aggregate reputation scores, unfortunately, provide no guarantee in terms of user privacy, and their use makes user accounts linkable. Such a lack of privacy may result in embarrassment, or worse, place users in danger.
In this paper, we present StreetRep, a practical system for aggregating user reputation scores in a privacy-preserving manner. StreetRep makes it possible for users to provide their aggregated scores over multiple services without revealing their respective identities on each service. We present a novel approach for tamper-proof, privacy-preserving score aggregation from multiple sources, combining existing techniques such as blind signatures, homomorphic signatures, and private information retrieval. We discuss its practicality and resilience against different types of attacks. We also built a prototype implementation of StreetRep. Our evaluation demonstrates that StreetRep (a) performs efficiently and (b) scales practically to a large user base.
Sanjam Garg, Aarushi Goel, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Guru-Vamsi Policharla, Mingyuan Wang
ePrint Report
How can a model owner prove they trained their model according to the correct specification? More importantly, how can they do so while preserving the privacy of the underlying dataset and the final model? We study this problem and formulate the notion of zero-knowledge proof of training (zkPoT), which formalizes rigorous security guarantees that should be achieved by a privacy-preserving proof of training.
While it is theoretically possible to design zkPoT for any model using generic zero-knowledge proof systems, this approach results in impractical proof generation times. Towards designing a practical solution, we propose combining techniques from the MPC-in-the-head and zkSNARK literature to strike an appropriate trade-off between proof size and proof computation time. We instantiate this idea and propose a concretely efficient, novel zkPoT protocol for logistic regression.
Crucially, our protocol is streaming-friendly and does not require RAM proportional to the size of the circuit being trained and, hence, can be adapted to the requirements of available hardware. We expect the techniques developed in this paper to also generally be useful for designing efficient zkPoT protocols for other relatively more sophisticated ML models.
We implemented and benchmarked prover/verifier runtimes and proof sizes for training a logistic regression model using mini-batch gradient descent on a 4~GB dataset of 262,144 records with 1024 features. We divide our protocol into three phases: (1) a data-independent offline phase, (2) a data-dependent phase that is independent of the model, and (3) an online phase that depends on both the data and the model. The total proof size (across all three phases) is less than $10\%$ of the dataset size ($<350$~MB). In the online phase, the prover and verifier times are under 10 minutes and half a minute, respectively, whereas in the data-dependent phase they are close to one hour and a few seconds, respectively.
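For reference, the computation whose correct execution the zkPoT attests to is plain mini-batch logistic regression; a NumPy sketch of that training loop (illustrative only, not the paper's prover):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_logreg(X, y, batch=256, epochs=3, lr=0.1):
        """Mini-batch gradient descent -- each update below is one step
        the proof of training must cover."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            idx = np.random.permutation(len(X))
            for s in range(0, len(X), batch):
                B = idx[s:s + batch]
                g = X[B].T @ (sigmoid(X[B] @ w) - y[B]) / len(B)
                w -= lr * g
        return w

    # Toy data standing in for the paper's 262,144 x 1024 dataset.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((4096, 32))
    y = (X @ rng.standard_normal(32) > 0).astype(float)
    w = train_logreg(X, y)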
Fabrice Benhamouda, Erica Blum, Jonathan Katz, Derek Leung, Julian Loss, Tal Rabin
ePrint Report
The Algorand consensus protocol is interesting both in theory and in practice. On the theoretical side, to achieve adaptive security, it introduces the novel idea of player replaceability, where each step of the protocol is executed by a different randomly selected committee whose members remain secret until they send their first and only message. The protocol provides consistency under arbitrary network conditions and liveness under intermittent network partitions. On the practical side, the protocol is used to secure the Algorand cryptocurrency, whose total value is approximately $850M at the time of writing.
The Algorand protocol in use differs substantially from the protocols described in the published literature on Algorand. Despite its significance, it lacks a formal analysis. In this work, we describe and analyze the Algorand consensus protocol as deployed today in Algorand’s ecosystem. We show that the overall protocol framework is sound by characterizing network conditions and parameter settings under which the protocol can be proven secure.