IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
03 July 2023
Collin Zhang, Zachary DeStefano, Arasu Arun, Joseph Bonneau, Paul Grubbs, Michael Walfish
Zero-knowledge middleboxes (ZKMBs) are a recent paradigm in which clients get privacy while middleboxes enforce policy: clients prove in zero knowledge that the plaintext underlying their encrypted traffic complies with network policies, such as DNS filtering. However, prior work had impractically poor performance and was limited in functionality.
This work presents Zombie, the first system built using the ZKMB paradigm. Zombie introduces techniques that push ZKMBs to the verge of practicality: preprocessing (to move the bulk of proof generation to idle times between requests), asynchrony (to remove proving and verifying costs from the critical path), and batching (to amortize some of the verification work). Zombie’s choices, together with these techniques, provide a factor of 3.5$\times$ speedup in total computation done by client and middlebox, lowering the critical path overhead for a DNS filtering application to less than 300ms (on commodity hardware) or (in the asynchronous configuration) to 0.
As an additional contribution that is likely of independent interest, Zombie introduces a portfolio of techniques to efficiently encode regular expressions in probabilistic (and zero knowledge) proofs; these techniques offer significant asymptotic and constant factor improvements in performance over a standard baseline. Zombie builds on this portfolio to support policies based on regular expressions, such as data loss prevention.
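The table-lookup flavor of such regex encodings can be illustrated outside any proof system: fix a DFA for the regex, let the prover supply the state trace, and enforce one transition-table membership check per input character. The toy regex, DFA, and function names below are purely illustrative sketches, not Zombie's actual techniques.

```python
# Illustrative sketch (plain Python, no proof system): a DFA for the toy regex
# "ab*c"; the "prover" supplies the state trace, the "verifier" checks one
# table-membership constraint per step plus initial/final state conditions.
DFA = {
    (0, "a"): 1,
    (1, "b"): 1,
    (1, "c"): 2,
}
ACCEPT = {2}

def trace_for(s):
    """Prover side: compute the state trace for input s (None if no transition)."""
    states = [0]
    for ch in s:
        nxt = DFA.get((states[-1], ch))
        if nxt is None:
            return None
        states.append(nxt)
    return states

def check_trace(s, states):
    """Verifier side: per-step table-membership checks plus final-state check."""
    if states is None or len(states) != len(s) + 1 or states[0] != 0:
        return False
    for i, ch in enumerate(s):
        if DFA.get((states[i], ch)) != states[i + 1]:
            return False
    return states[-1] in ACCEPT
```

In a proof system, each per-step check becomes a constraint over the (committed) trace; the asymptotic gains in the paper come from more refined encodings than this naive per-row lookup.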
Logan Allen, Brian Klatt, Philip Quirk, Yaseen Shaikh
Succinct Non-interactive Arguments of Knowledge (SNARKs) enable a party to cryptographically prove a statement regarding a computation to another party that has constrained resources. Practical use of SNARKs often involves a Zero-Knowledge Virtual Machine (zkVM) that receives an input program and input data, then generates a SNARK proof of the correct execution of the input program. Most zkVMs emulate the von Neumann architecture and must prove relations between a program's execution and its use of Random Access Memory. However, there are conceptually simpler models of computation that are naturally modeled in a SNARK yet are still practical for use. Nock is a minimal, homoiconic combinator function, a Turing-complete instruction set that is practical for general computation, and is notable for its use in Urbit.
We introduce Eden, an Efficient Dyck Encoding of Nock that serves as a practical, SNARK-friendly combinator function and instruction set architecture. We describe arithmetization techniques and polynomial equations used to represent the Eden ISA in an Interactive Oracle Proof. Eden provides the ability to prove statements regarding the execution of any program that compiles down to the Eden ISA. We present the Eden zkVM, a particular instantiation of Eden as a zk-STARK.
Daphné Trama, Pierre-Emmanuel Clet, Aymen Boudguiga, Renaud Sirdey
Since the pioneering work of Gentry, Halevi, and Smart in 2012, the state of the art on transciphering has moved away from work on AES to focus on new symmetric algorithms that are better suited for homomorphic execution. Yet, with recent advances in homomorphic cryptosystems, the question arises as to where we stand today, especially since AES execution is the application that may be chosen by NIST in the FHE part of its future call for threshold encryption.
In this paper, we propose an AES implementation using TFHE programmable bootstrapping which runs in less than a minute on an average laptop. We detail the transformations carried out on the original AES code to lead to a more efficient homomorphic evaluation and we also give several execution times on different machines, depending on the type of execution (sequential or parallelized). These times vary from 4.5 minutes (resp. 54 secs) for sequential (resp. parallel) execution on a standard laptop down to 28 seconds for a parallelized execution over 16 threads on a multi-core workstation.
Victor Shoup
Recently, a number of highly optimized threshold signing protocols for Schnorr signatures have been proposed. A key feature of these protocols is that they produce so-called "presignatures" in an "offline" phase (using a relatively heavyweight, high-latency subprotocol), which are then consumed in an "online" phase to generate signatures (using a relatively lightweight, low-latency subprotocol). The idea is to build up a large cache of presignatures in periods of low demand, so as to be able to quickly respond to bursts of signing requests in periods of high demand. Unfortunately, it is well known that using such presignatures naively leads to subexponential attacks. Thus, any protocols based on presignatures must mitigate against these attacks.
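The offline/online split can be sketched with ordinary (non-threshold) Schnorr signing, where the expensive exponentiation $R = g^k$ moves offline and the online step is a single field equation $s = k + c \cdot x$. The toy group parameters and helper names below are illustrative only; this is not a secure or threshold implementation.

```python
# Toy sketch of the presignature idea (NOT a secure implementation): the
# heavyweight step of Schnorr signing, computing the nonce commitment R = g^k,
# happens offline; the online step consumes one presignature per signature.
import hashlib
import secrets

p = 2**127 - 1          # illustrative Mersenne prime modulus (not a safe choice)
q = p - 1               # toy exponent modulus
g = 3

def H(R, msg):
    """Fiat-Shamir challenge (illustrative encoding)."""
    return int.from_bytes(hashlib.sha256(f"{R}|{msg}".encode()).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)
    return x, pow(g, x, p)

def presign():
    """Offline phase: sample a nonce and compute its commitment R = g^k."""
    k = secrets.randbelow(q)
    return k, pow(g, k, p)

def sign_online(x, presig, msg):
    """Online phase: only cheap field arithmetic, s = k + c*x."""
    k, R = presig
    c = H(R, msg)
    return R, (k + c * x) % q

def verify(pk, msg, sig):
    R, s = sig
    c = H(R, msg)
    return pow(g, s, p) == (R * pow(pk, c, p)) % p
```

The attacks mentioned above exploit an adversary seeing many such commitments $R$ before choosing which messages to sign; the protocols discussed differ in how they block that leverage.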
One such notable protocol is FROST, which provides security even with an unlimited number of presignatures; moreover, assuming unused presignatures are available, signing requests can be processed concurrently with minimal latency. Unfortunately, FROST is not a robust protocol, at least in the asynchronous communication model (arguably the most realistic model for such a protocol). Indeed, a single corrupt party can prevent any signatures from being produced. Recently, a protocol called ROAST was developed to remedy this situation. Unfortunately, ROAST is significantly less efficient than FROST (each signing request runs many instances of FROST concurrently).
A more recent protocol is SPRINT, which provides robustness without synchrony assumptions, and actually provides better throughput than FROST. Unfortunately, SPRINT is only secure in very restricted modes of operation. Specifically, to avoid a subexponential attack, only a limited number of presignatures may be produced in advance of signing requests, which somewhat defeats the purpose of presignatures.
Our main new result is to show how to securely combine the techniques used in FROST and SPRINT, allowing one to build a threshold Schnorr signing protocol that (i) is secure and robust without synchrony assumptions (like SPRINT), (ii) provides security even with an unlimited number of presignatures, and (assuming unused presignatures are available) signing requests can be processed concurrently with minimal latency (like FROST), (iii) achieves high throughput (like SPRINT), and (iv) achieves optimal resilience.
Besides achieving this particular technical result, one of our main goals in this paper is to provide a unifying framework in order to better understand the techniques used in various protocols. To that end, we attempt to isolate and abstract the main ideas of each protocol, stripping away superfluous details, so that these ideas can be more readily combined and implemented in different ways. More specifically, to the extent possible, we try to avoid talking about distributed protocols at all, and rather, we examine the security of the ordinary, non-threshold Schnorr scheme in "enhanced" attack modes that correspond to attacks on various types of threshold signing protocols.
Another one of our goals is to carry out a security analysis of these enhanced attack modes in the Generic Group Model (GGM), sometimes in conjunction with the Random Oracle Model (ROM). Despite the limitations of these models, we feel that giving security proofs in the GGM or GGM+ROM provides useful insight into the concrete security of the various enhanced attack modes we consider.
Amit Jana, Anup Kumar Kundu, Goutam Paul
At Asiacrypt 2021, Baksi et al. proposed DEFAULT, the first block cipher which provides differential fault attack (DFA) resistance at the algorithm level, with 64-bit DFA security. Initially, the cipher employed a simple key schedule where a single key was XORed throughout the rounds, and the key schedule was updated by incorporating round-independent keys in a rotating fashion. However, at Eurocrypt 2022, Nageler et al. presented a DFA that compromised the claimed DFA security of DEFAULT, reducing it by up to 20 bits for the simple key schedule and allowing for unique key recovery in the case of rotating keys. In this work, we present an enhanced differential fault attack (DFA) on the DEFAULT cipher, showcasing its effectiveness in uniquely recovering the encryption key. We commence by determining the deterministic computation of differential trails for up to five rounds. Leveraging these computed trails, we apply the DFA to the simple key schedule, injecting faults at different rounds and estimating the minimum number of faults required for successful key retrieval. Our attack achieves key recovery with minimal faults compared to previous approaches. Additionally, we extend the DFA to rotating keys, first recovering equivalent keys with fewer faults in the DEFAULT-LAYER, and subsequently applying the DFA separately to the DEFAULT-CORE. Furthermore, we propose a generic DFA approach for round-independent keys in the DEFAULT cipher. Lastly, we introduce a new paradigm of fault attack that combines statistical fault attacks (SFA) and DFA for any linearly structured S-box-based cipher, enabling more efficient key recovery in the presence of both rotating and round-independent key configurations. We call this technique the Statistical-Differential Fault Attack (SDFA). Our results shed light on the vulnerabilities of the DEFAULT cipher and highlight the challenges in achieving robust DFA protection for linearly structured S-box-based ciphers.
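The generic DFA principle can be illustrated on a single 4-bit S-box, using PRESENT's S-box as a stand-in (this is not the DEFAULT cipher or the paper's attack): a fault flips known bits at the S-box input, and only keys consistent with the observed output difference survive.

```python
# Toy DFA sketch: one-round "cipher" c = S[p ^ k]. Injecting a known input
# fault delta yields a faulty output; candidate keys must reproduce the
# observed output difference, and fresh faults shrink the candidate set.
# PRESENT's 4-bit S-box is used purely as an illustrative stand-in.
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def enc(p, k, delta=0):
    """One-round toy cipher; a nonzero delta models an injected input fault."""
    return S[(p ^ k ^ delta) & 0xF]

def surviving_keys(p, c, c_faulty, delta, keys=range(16)):
    """Keys whose predicted output difference matches the observed one."""
    return {k for k in keys if enc(p, k) ^ enc(p, k, delta) == c ^ c_faulty}

def dfa_recover(k_true, faults=((0x0, 0x1), (0x0, 0x2), (0x5, 0x1))):
    """Simulate fault injections against k_true and intersect candidate sets."""
    cands = set(range(16))
    for p, delta in faults:
        observed = (enc(p, k_true), enc(p, k_true, delta))
        cands = surviving_keys(p, observed[0], observed[1], delta, cands)
    return cands
```

Because a well-designed S-box has low differential uniformity, each fault leaves only a handful of surviving keys; DEFAULT's linearly structured S-box behaves differently, which is what the attacks above exploit.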
Charlotte Hoffmann, Mark Simkin
Threshold secret sharing allows a dealer to split a secret $s$ into $n$ shares, such that any $t$ shares allow for reconstructing $s$, but no $t-1$ shares reveal any information about $s$. Leakage-resilient secret sharing requires that the secret remains hidden, even when an adversary additionally obtains a limited amount of leakage from every share.
Benhamouda et al. (CRYPTO'18) proved that Shamir's secret sharing scheme is one bit leakage-resilient for reconstruction threshold $t\geq0.85n$ and conjectured that the same holds for $t=c\cdot n$ for any constant $0\leq c\leq1$. Nielsen and Simkin (EUROCRYPT'20) showed that this is the best one can hope for by proving that Shamir's scheme is not secure against one-bit leakage when $t=c\cdot n/\log(n)$.
In this work, we strengthen the lower bound of Nielsen and Simkin. We consider noisy leakage-resilience, where a random subset of leakages is replaced by uniformly random noise. We prove a lower bound for Shamir's secret sharing, similar to that of Nielsen and Simkin, which holds even when a constant fraction of leakages is replaced by random noise. To this end, we first prove a lower bound on the share size of any noisy-leakage-resilient sharing scheme. We then use this lower bound to show that there exist universal constants $c_1,c_2$, such that for infinitely many $n$, it holds that Shamir's secret sharing scheme is not noisy-leakage-resilient for $t\leq c_1\cdot n/\log(n)$, even when a $c_2$ fraction of leakages are replaced by random noise.
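For reference, the scheme under study can be sketched in a few lines (illustrative small-field parameters, not a hardened implementation): the dealer hides the secret as $f(0)$ of a random degree-$(t-1)$ polynomial, and any $t$ shares recover it via Lagrange interpolation.

```python
# Minimal Shamir secret sharing sketch over a small prime field.
# Parameters are illustrative; any t points determine f and hence f(0),
# while t-1 points are information-theoretically independent of the secret.
import secrets

P = 2**61 - 1  # illustrative prime field

def share(secret, t, n):
    """Dealer: sample a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from t (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Leakage-resilience asks a stronger question than this sketch answers: whether a few bits leaked from *every* share (here, from every `(i, f(i))` pair) can still reveal information about $f(0)$.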
Omid Mir, Balthazar Bauer, Scott Griffy, Anna Lysyanskaya, Daniel Slamanig
Anonymous credentials (AC) have emerged as a promising privacy-preserving solution for user-centric identity management. They allow users to authenticate in an anonymous and unlinkable way such that only required information (i.e., attributes) from their credentials is revealed. With the increasing push towards decentralized systems and identity, e.g., self-sovereign identity (SSI) and the concept of verifiable credentials, suitable AC systems become a necessity. For instance, when relying on existing AC systems, obtaining credentials from different issuers requires the presentation of independent credentials, which can become cumbersome. Consequently, it is desirable for AC systems to support the so-called multi-authority (MA) feature, which allows a compact and efficient showing of multiple credentials from different issuers. Another important property is called issuer hiding (IH): showing a set of credentials does not reveal which issuer issued which credential, but only whether a verifier-defined policy on the acceptable set of issuers is satisfied. This issue becomes particularly acute in the context of MA, where a user could be uniquely identified by the combination of issuers in their showing. Unfortunately, there are no AC schemes that satisfy both these properties simultaneously. To close this gap, we introduce the concept of Issuer-Hiding Multi-Authority Anonymous Credentials (IhMA). Our proposed solution involves the development of two new signature primitives with versatile randomization features, which are of independent interest: 1) Aggregate Signatures with Randomizable Tags and Public Keys (AtoSa) and 2) Aggregate Mercurial Signatures (ATMS), which extend the functionality of AtoSa to additionally support the randomization of messages and yield the first instance of an aggregate (equivalence-class) structure-preserving signature.
These primitives can be elegantly used to obtain IhMA with different trade-offs, but also have applications beyond it. We formalize all notions and provide rigorous security definitions for our proposed primitives. We present provably secure and efficient instantiations of the two primitives as well as corresponding IhMA systems. Finally, we provide benchmarks based on an implementation to demonstrate the practical efficiency of our constructions.
Binbin Tu, Xiangling Zhang, Yujie Bai, Yu Chen
Private computation on (labeled) set intersection (PCSI/PCLSI) is a secure computation protocol that allows two parties to compute fine-grained functions on the set intersection, including cardinality, cardinality-sum, secret-shared intersection, and arbitrary functions. Recently, some computationally efficient PCSI protocols have emerged, but a limitation of these protocols is their communication complexity, which scales (super-)linearly with the size of the larger set. This is of particular concern when performing PCSI in the unbalanced case, where one party is a constrained device with a small set, and the other is a service provider holding a large set.
In this work, we first formalize a new ideal functionality called shared characteristic and its labeled variant called shared characteristic with labels, from which we propose frameworks for PCSI/PCLSI protocols. By instantiating our frameworks, we obtain a series of efficient PCSI/PCLSI protocols whose communication complexity is linear in the size of the small set and logarithmic in the size of the large set.
We demonstrate the practicality of our protocols with implementations. Experimental results show that our protocols outperform previous ones, and the larger the difference between the sizes of the two sets, the better our protocols perform. For input set sizes $2^{10}$ and $2^{22}$ with items of length $128$ bits, our PCSI requires only $4.62$MB of communication to compute the cardinality and $4.71$MB to compute the cardinality-sum. Compared with the state-of-the-art PCSI proposed by Chen et al., these are $58\times$ and $77\times$ reductions in the communication cost of computing cardinality and cardinality-sum, respectively.
Sahar Mazloom, Benjamin E. Diamond, Antigoni Polychroniadou, Tucker Balch
We introduce a new data-independent priority queue which supports amortized polylogarithmic-time insertions and constant-time deletions, and crucially, (non-amortized) constant-time \textit{read-front} operations, in contrast with a prior construction of Toft (PODC'11). Moreover, we reduce the number of required comparisons. Data-independent data structures - first identified explicitly by Toft, and further elaborated by Mitchell and Zimmerman (STACS'14) - facilitate computation on encrypted data without branching, which is prohibitively expensive in secure computation. Using our efficient data-independent priority queue, we introduce a new privacy-preserving dark pool application, which significantly improves upon prior constructions which were based on costly sorting operations.
Dark pools are securities-trading venues which attain ad-hoc order privacy, by matching orders outside of publicly visible exchanges. In this paper, we describe an efficient and secure dark pool (implementing a full continuous double auction), building upon our priority queue construction. Our dark pool's security guarantees are cryptographic - based on secure multiparty computation (MPC) - and do not require that the dark pool operators be trusted. Our approach improves upon the asymptotic and concrete efficiency attained by previous efforts. Existing cryptographic dark pools process new orders in time which grows linearly in the size of the standing order book; ours does so in polylogarithmic time. We describe a concrete implementation of our protocol, with malicious security in the honest majority setting. We also report benchmarks of our implementation, and compare these to prior works. Our protocol reduces the total running time by several orders of magnitude, compared to prior secure dark pool solutions.
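The branch-free style that data-independence requires can be sketched with a toy fixed-capacity queue built from oblivious compare-and-swaps: every operation touches the same memory locations in the same order regardless of the data, and read-front is a constant-time peek. This toy costs linear time per insert, unlike the paper's polylogarithmic construction; all names are illustrative.

```python
# Toy data-independent priority queue: a sorted fixed-capacity array padded
# with a sentinel, maintained via arithmetic (branch-free) selection.
INF = 10**18  # sentinel padding; integers keep the arithmetic selection exact

def cond_swap(a, b):
    """Branch-free ordering: return (min, max) via arithmetic selection."""
    bit = int(a > b)  # 1 iff out of order; used as a selector, not a branch
    return a * (1 - bit) + b * bit, b * (1 - bit) + a * bit

def new_queue(capacity):
    return [INF] * capacity

def insert(q, x):
    """Bubble x through the array; the same positions are touched for every input."""
    carry = x
    for i in range(len(q)):
        q[i], carry = cond_swap(q[i], carry)

def read_front(q):
    return q[0]  # constant-time, branch-free peek of the minimum

def delete_min(q):
    """Fixed-shape shift; again no data-dependent branching."""
    for i in range(len(q) - 1):
        q[i] = q[i + 1]
    q[-1] = INF
```

Under MPC, the comparison bit in `cond_swap` stays secret-shared, so neither party learns the ordering; the paper's contribution is achieving this access pattern with far better asymptotics than this linear-pass toy.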
Anasuya Acharya, Carmit Hazay, Oxana Poburinnaya, Muthuramakrishnan Venkitasubramaniam
This work introduces a new notion of secure multiparty computation: MPC with fall-back security. Fall-back security for an $n$-party protocol is defined with respect to an adversary structure $\mathcal{Z}$, wherein security is guaranteed in the presence of both a computationally unbounded adversary with adversary structure $\mathcal{Z}$ and a computationally bounded adversary corrupting an arbitrarily large subset of the parties. This notion was considered by Chaum (CRYPTO'89) via the Spymaster's double-agent problem, where he showed a semi-honest secure protocol for the honest-majority adversary structure.
Our first main result is a compiler that can transform any $n$-party protocol that is semi-honestly secure with statistical security tolerating an adversary structure $\mathcal{Z}$ into one that (additionally) provides semi-honest fall-back security w.r.t. $\mathcal{Z}$. The resulting protocol has optimal round complexity, up to a constant factor, and is optimal in assumptions and the adversary structure. Our second result fully characterizes when malicious fall-back security is feasible. More precisely, we show that a malicious fall-back secure protocol w.r.t. $\mathcal{Z}$ exists if and only if $\mathcal{Z}$ admits unconditional MPC against a semi-honest adversary (namely, iff $\mathcal{Z} \in \mathcal{Q}^2$).
Dan Boneh, Elette Boyle, Henry Corrigan-Gibbs, Niv Gilboa, Yuval Ishai
This paper introduces arithmetic sketching, an abstraction of a primitive that several previous works use to achieve lightweight, low-communication zero-knowledge verification of secret-shared vectors. An arithmetic sketching scheme for a language $\mathcal{L} \subseteq \mathbb{F}^n$ consists of (1) a randomized linear function compressing a long input x to a short “sketch,” and (2) a small arithmetic circuit that accepts the sketch if and only if $x \in \mathcal{L}$, up to some small error. If the language $\mathcal{L}$ has an arithmetic sketching scheme with short sketches, then it is possible to test membership in $\mathcal{L}$ using an arithmetic circuit with few multiplication gates. Since multiplications are the dominant cost in protocols for computation on secret-shared, encrypted, and committed data, arithmetic sketching schemes give rise to lightweight protocols in each of these settings.
Beyond the formalization of arithmetic sketching, our contributions are: – A general framework for constructing arithmetic sketching schemes from algebraic varieties. This framework unifies schemes from prior work and gives rise to schemes for useful new languages and with improved soundness error. – The first arithmetic sketching schemes for languages of sparse vectors: vectors with bounded Hamming weight, bounded $L_1$ norm, and vectors whose few non-zero values satisfy a given predicate. – A method for “compiling” any arithmetic sketching scheme for a language $\mathcal{L}$ into a low-communication malicious-secure multi-server protocol for securely testing that a client-provided secret-shared vector is in $\mathcal{L}$.
We also prove the first nontrivial lower bounds showing limits on the sketch size for certain languages (e.g., vectors of Hamming-weight one) and proving the non-existence of arithmetic sketching schemes for others (e.g., the language of all vectors that contain a specific value).
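A classic concrete instance (stated here in plain Python, without secret sharing) is a sketch for 0/1 vectors of Hamming weight at most one: two linear measurements $z_1 = \langle r, x\rangle$ and $z_2 = \langle r \circ r, x\rangle$, and a single multiplication to test $z_1^2 = z_2$. The field size below is illustrative.

```python
# Illustrative arithmetic sketch: accepts x that is all-zero or a standard
# basis vector (one entry equal to 1); any other x fails except with
# probability O(1/|F|) over the verifier's random r. If x = e_j, then
# z1 = r_j and z2 = r_j^2, so z1^2 - z2 = 0 identically.
import secrets

P = 2**61 - 1  # illustrative prime field

def sketch_weight_one(x):
    n = len(x)
    r = [secrets.randbelow(P) for _ in range(n)]
    z1 = sum(ri * xi for ri, xi in zip(r, x)) % P        # linear: <r, x>
    z2 = sum(ri * ri * xi for ri, xi in zip(r, x)) % P   # linear: <r*r, x>
    return (z1 * z1 - z2) % P == 0                       # one multiplication
```

Both measurements are linear in $x$, so they can be taken directly on additive shares of $x$; only the final $z_1^2$ needs a (cheap, sketch-sized) secure multiplication, which is exactly the cost profile the paper's framework generalizes.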
Pedro Branco, Nico Döttling, Akshayaram Srinivasan
Statistical sender privacy (SSP) is the strongest achievable security notion for two-message oblivious transfer (OT) in the standard model, providing statistical security against malicious receivers and computational security against semi-honest senders. In this work we provide a novel construction of SSP OT from the Decisional Diffie-Hellman (DDH) and the Learning Parity with Noise (LPN) assumptions achieving (asymptotically) optimal amortized communication complexity, i.e. it achieves rate 1. Concretely, the total communication complexity for $k$ OT instances is $2k(1+o(1))$, which (asymptotically) approaches the information-theoretic lower bound. Previously, it was only known how to realize this primitive using heavy rate-1 FHE techniques [Brakerski et al., Gentry and Halevi TCC'19].
At the heart of our construction is a primitive called statistical co-PIR, essentially a public-key encryption scheme which statistically erases bits of the message in a few hidden locations. Our scheme achieves nearly optimal ciphertext size and provides statistical security against malicious receivers. Computational security against semi-honest senders holds under the DDH assumption.
Gauri Gupta, Krithika Ramesh, Anwesh Bhattacharya, Divya Gupta, Rahul Sharma, Nishanth Chandran, Rijurekha Sen
Privacy-preserving machine learning (PPML) promises to train machine learning (ML) models by combining data spread across multiple data silos. Theoretically, secure multiparty computation (MPC) allows multiple data owners to train models on their joint data without revealing the data to each other. However, prior implementations of secure training using MPC have three limitations: they have only been evaluated on CNNs, and LSTMs have been ignored; fixed-point approximations have affected training accuracies compared to training in floating point; and due to significant latency overheads of secure training via MPC, its relevance for practical tasks with streaming data remains unclear.
The motivation of this work is to report our experience of addressing the practical problem of secure training and inference of models for urban sensing problems, e.g., traffic congestion estimation or air pollution monitoring in large cities, where data can be contributed by rival fleet companies while balancing the privacy-accuracy trade-offs using MPC-based techniques.
Our first contribution is to design a custom ML model for this task that can be efficiently trained with MPC within a desirable latency. In particular, we design a GCN-LSTM and securely train it on time-series sensor data for accurate forecasting, within 7 minutes per epoch. As our second contribution, we build an end-to-end system of private training and inference that provably matches the training accuracy of cleartext ML training. This work is the first to securely train a model with LSTM cells. Third, this trained model is kept secret-shared between the fleet companies and allows clients to make sensitive queries to this model while carefully handling potentially invalid queries. Our custom protocols allow clients to query predictions from privately trained models in milliseconds, all the while maintaining accuracy and cryptographic security.
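Keeping the trained model secret-shared between the fleet companies, as described above, rests on additive secret sharing, the basic primitive of such MPC systems. The following is a minimal illustrative sketch only, not the paper's protocol (which additionally handles fixed-point arithmetic, GCN-LSTM layers, and invalid queries); the modulus choice and helper names are ours:

```python
import secrets

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(x):
    """Split x into two additive shares that sum to x mod P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine the two shares."""
    return (s0 + s1) % P

# Each party holds one share of each input; linear operations
# (addition, public-scalar multiplication) are purely local.
a0, a1 = share(123)
b0, b1 = share(456)
c0, c1 = (a0 + b0) % P, (a1 + b1) % P   # shares of a + b
assert reconstruct(c0, c1) == 579
```

Linear operations are local on shares; secure multiplication additionally requires correlated randomness (e.g., Beaver triples), which is where the bulk of the cost of secure training lies.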
29 June 2023
Yongha Son, Jinhyuck Jeong
Circuit-based Private Set Intersection (circuit-PSI) refers to cryptographic protocols that let two parties with input sets $X$ and $Y$ compute a function $f$ over the intersection $X \cap Y$, without revealing any other information. Research efforts on circuit-PSI have so far mainly focused on the case where the input set sizes $|X|$ and $|Y|$ are similar, and existing protocols scale poorly for extremely unbalanced set sizes $|X| \gg |Y|$. Recently, Lepoint \textit{et al.} (ASIACRYPT'21) proposed the first dedicated solutions for this problem, whose online cost is linear only in the small set size $|Y|$. However, they require an expensive setup phase with huge storage of about $O(|X|)$ on the small set holder's side, which can be problematic in applications where the small set holder is assumed to have restricted equipment.
In this work, we present new efficient proposals for circuit-PSI tailored to unbalanced inputs, which feature {\emph{zero}} storage on the small set holder's side and online-phase performance comparable to the previous work. At the technical core, we use the homomorphic encryption (HE) based {\emph{plain}} PSI protocols of Cong \textit{et al.} (CCS'21), with several technically non-trivial arguments on the algorithm and its security.
We demonstrate the superiority of our proposals for several input set sizes via an implementation. As a representative example, for input sets of size $2^{24}$ and $2^{12}$, our proposals require {\emph{zero}} storage on the small set holder's side, whereas Lepoint \textit{et al.} require over $7$GB. The online phase remains comparable: over a LAN network, ours takes $7.5$ (or $20.9$) seconds with $45$MB (or $11.7$MB) of communication, while Lepoint \textit{et al.} require $4.2$ seconds with $117$MB of communication.
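HE-based plain PSI protocols in the style of Cong \textit{et al.} rest on a polynomial membership test: for each receiver item $y$, the sender evaluates (under encryption) a randomized product over its set that vanishes exactly when $y \in X$. Below is a toy version over plain integers, without any encryption and hence without privacy, purely to illustrate the arithmetic; the prime and function names are ours:

```python
import random

def membership_poly_eval(X, y, p):
    """Evaluate f(y) = r * prod_{x in X} (y - x) mod p.
    f(y) == 0 iff y is in X; otherwise the random mask r makes the
    value a uniformly random nonzero residue, leaking nothing about X
    beyond non-membership (p prime, all elements < p)."""
    r = random.randrange(1, p)
    acc = r
    for x in X:
        acc = acc * (y - x) % p
    return acc

p = (1 << 13) - 1  # 8191, a small illustrative prime
X = {17, 42, 1000}
assert membership_poly_eval(X, 42, p) == 0   # 42 is in X
assert membership_poly_eval(X, 7, p) != 0    # 7 is not
```

In the actual protocol, the receiver's $y$ arrives encrypted, the sender evaluates this polynomial homomorphically, and the receiver decrypts only a zero/nonzero indicator; the unbalanced setting wins because the polynomial over the large set $X$ stays on the sender's side.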
Pierre Briaud, Pierre Loidreau
In this work, we introduce a new attack on the Loidreau scheme [PQCrypto 2017] and its more recent variant LowMS. The attack is based on a constrained linear system for which we provide two solving approaches:
- The first is an enumeration algorithm inspired by combinatorial attacks on the Rank Decoding (RD) problem. While the attack technique remains very simple, it allows us to obtain the best known structural attack on the parameters of these two schemes.
- The second is to rewrite the system as a bilinear system over $\mathbb{F}_q$. Even though Gröbner basis techniques on this second system seem infeasible, we provide a detailed analysis of the first degree-fall polynomials which arise when applying such algorithms.
Estuardo Alpirez Bock, Chris Brzuska, Russell W. F. Lai
Incompressibility is a popular security notion for white-box cryptography and captures that a large encryption program cannot be compressed without losing functionality. Fouque, Karpman, Kirchner and Minaud (FKKM) defined strong incompressibility, where a compressed program should not even help to distinguish encryptions of two messages of equal length. Equivalently, the notion can be phrased as indistinguishability under chosen-plaintext attacks and key-leakage (LK-IND-CPA), where the leakage rate is high.
In this paper, we show that LK-IND-CPA security with superlogarithmic-length leakage, and thus strong incompressibility, cannot be proven under standard (i.e. single-stage) assumptions, if the encryption scheme is key-fixing, i.e. a polynomial number of message-ciphertext pairs uniquely determine the key with high probability.
Our impossibility result refutes a claim by FKKM that their big-key generation mechanism achieves strong incompressibility when combined with any PRG or any conventional encryption scheme, since the claim is not true for encryption schemes which are key-fixing (or for PRGs which are injective). In particular, we prove that the cipher block chaining (CBC) block cipher mode is key-fixing when modelling the cipher as a truly random permutation for each key. Subsequent to and inspired by our work, FKKM prove that their original big-key generation mechanism can be combined with a random oracle into an LK-IND-CPA-secure encryption scheme, circumventing the impossibility result by the use of an idealised model.
Along the way, our work also helps clarify the relations between incompressible white-box cryptography, big-key symmetric encryption, and general leakage-resilient cryptography, as well as their limitations.
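To make the key-fixing condition concrete: a scheme is key-fixing if a polynomial number of message-ciphertext pairs uniquely determines the key with high probability. The toy sketch below (a hypothetical stream-cipher-style scheme with a deliberately tiny keyspace, not CBC itself) shows exhaustive search pinning down the unique consistent key from two known pairs:

```python
import hashlib

KEYBITS = 12  # tiny keyspace so exhaustive key search is feasible

def prf(key, i):
    """Hypothetical PRF: SHA-256 of key and counter, truncated to 32 bits."""
    d = hashlib.sha256(f"{key}|{i}".encode()).digest()
    return int.from_bytes(d[:4], "big")

def toy_enc(key, msgs):
    """Toy stream-cipher-style encryption: XOR each message with a pad."""
    return [m ^ prf(key, i) for i, m in enumerate(msgs)]

# Two message-ciphertext pairs pin down the key: exhaustive search
# over the whole keyspace finds exactly one consistent candidate,
# so this toy scheme is key-fixing.
secret_key = 0x5A3
msgs = [0x1234, 0xBEEF]
cts = toy_enc(secret_key, msgs)
candidates = [k for k in range(2 ** KEYBITS) if toy_enc(k, msgs) == cts]
assert candidates == [secret_key]
```

The impossibility result above applies precisely to schemes with this property; an encryption scheme where many keys remain consistent with any polynomial number of pairs falls outside its scope.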
Vipul Goyal, Akshayaram Srinivasan, Mingyuan Wang
Consider the standard setting of two-party computation where the sender has a secret function $f$ and the receiver has a secret input $x$ and the output $f(x)$ is delivered to the receiver at the end of the protocol. Let us consider the unidirectional message model where only one party speaks in each round. In this setting, Katz and Ostrovsky (Crypto 2004) showed that at least four rounds of interaction between the parties are needed in the plain model (i.e., no trusted setup) if the simulator uses the adversary in a black-box way (a.k.a. black-box simulation). Suppose the sender and the receiver would like to run multiple sequential iterations of the secure computation protocol on possibly different inputs. For each of these iterations, do the parties need to start the protocol from scratch and exchange four messages?
In this work, we explore the possibility of \textit{amortizing} the round complexity or in other words, \textit{reusing} a certain number of rounds of the secure computation protocol in the plain model. We obtain the following results.
1. Under standard cryptographic assumptions, we construct a four-round two-party computation protocol where (i) the first three rounds of the protocol could be reused an unbounded number of times if the receiver input remains the same and only the sender input changes, and (ii) the first two rounds of the protocol could be reused an unbounded number of times if the receiver input needs to change as well. In other words, the sender sends a single additional message if only its input changes, and in the other case, we need one message each from the receiver and the sender. The number of additional messages needed in each of the above two modes is optimal and, additionally, our protocol allows arbitrary interleaving of these two modes. 2. We also extend these results to the multiparty setting (in the simultaneous message exchange model) and give round-optimal protocols such that (i) the first two rounds could be reused an unbounded number of times if the inputs of the parties need to change and (ii) the first three rounds could be reused an unbounded number of times if the inputs remain the same but the functionality to be computed changes. As in the two-party setting, we allow arbitrary interleaving of the above two modes of operation.
Yuting Zuo, Li Xu, Yuexin Zhang, Chenbin Zhao, Zhaozhe Kang
Vehicular Social Networks (VSNs) rely on data shared by users to provide convenient services. In VSNs, data is outsourced to the cloud server and to distributed roadside units. However, roadside units have limited resources, so the data-sharing process is inefficient and vulnerable to security threats such as illegal access, tampering attacks, and collusion attacks. In this article, to overcome these security shortcomings, we define a chain-tolerance semi-trusted model that describes the credibility of a distributed group based on the anti-tampering feature of blockchain. We further propose a Blockchain-based Lightweight Access Control scheme for VSNs, called BLAC, that resists tampering and collusion attacks. To overcome the efficiency shortcomings, we design a ciphertext-piece storage algorithm and a recovery algorithm to achieve lightweight storage cost. For the decryption operation, we separate out an outsourcing-based pre-decryption algorithm to achieve lightweight decryption computation cost on the user side. Finally, we present formal security analyses and simulation experiments for BLAC, and compare the experimental results with existing relevant schemes. The security analyses show that our scheme is secure, and the experimental results show that our scheme is lightweight and practical.
Willow Barkan-Vered, Franklin Harding, Jonathan Keller, Jiayu Xu
ECVRF is a verifiable random function (VRF) scheme used in multiple cryptocurrency systems. It has recently been proven to satisfy the notion of non-malleability, which is useful in applications to blockchains (Peikert and Xu, CT-RSA 2023); however, the existing proof uses the rewinding technique and incurs a quadratic security loss. In this work, we re-analyze the non-malleability of ECVRF in the algebraic group model (AGM) and give a tight proof. We also compare our proof with the unforgeability proof for the Schnorr signature scheme in the AGM (Fuchsbauer, Plouviez and Seurin, EUROCRYPT 2020).
Ran Cohen, Pouyan Forghani, Juan Garay, Rutvik Patel, Vassilis Zikas
It is well known that without randomization, Byzantine agreement (BA) requires a linear number of rounds in the synchronous setting, while it is flat-out impossible in the asynchronous setting. The primitive that allows bypassing this limitation is known as the oblivious common coin (OCC). It allows parties to agree with constant probability on a random coin, where agreement is oblivious, i.e., parties are not aware of whether or not agreement has been achieved.
The starting point of our work is the observation that no known protocol exists for information-theoretic multi-valued OCC---i.e., OCC where the coin might take a value from a domain of cardinality larger than 2---with optimal resiliency in the asynchronous (with eventual message delivery) setting. This apparent hole in the literature is particularly problematic, as multi-valued OCC is implicitly or explicitly used in several constructions. (In fact, it is often falsely attributed to the asynchronous BA result by Canetti and Rabin [STOC ’93], which, however, only achieves binary OCC and does not translate to a multi-valued OCC protocol.)
In this paper, we present the first information-theoretic multi-valued OCC protocol in the asynchronous setting with optimal resiliency, i.e., tolerating $t < n/3$ corrupted parties.
We then turn to the problem of round-preserving parallel composition of asynchronous BA. A protocol for this task was proposed by Ben-Or and El-Yaniv [Distributed Computing ’03]. Their construction, however, is flawed in several ways: For starters, it relies on multi-valued OCC instantiated by Canetti and Rabin's result (which, as mentioned above, only provides binary OCC). This shortcoming can be repaired by plugging in our above multi-valued OCC construction. However, as we show, even with this fix it remains unclear whether the protocol of Ben-Or and El-Yaniv achieves its goal of expected-constant-round parallel asynchronous BA, as the proof is incorrect. Thus, as a second contribution, we provide a simpler, more modular protocol for the above task. Finally, and as a contribution of independent interest, we provide proofs in Canetti's Universal Composability framework; this makes our work the first one offering composability guarantees, which are important as BA is a core building block of secure multi-party computation protocols.