## IACR News

Here you can see all recent updates to the IACR webpage.

#### 08 October 2019

###### Chao Liu, Zhongxiang Zheng, Keting Jia, Limin Tao

ePrint Report
Authenticated encryption (AE) is well suited to resource-constrained environments, since it incurs low computational cost, and it has become one of the important technologies of modern communication security. Identity concealment is a focus of research in the design and analysis of current secure transport protocols (such as TLS 1.3 and Google's QUIC). In this paper, we present a provably secure identity-concealed authenticated encryption scheme in the public-key setting over ideal lattices, referred to as RLWE-ICAE. Our scheme can be regarded as a parallel extension of the higncryption scheme proposed by Zhao (CCS 2016), but in the lattice-based setting. RLWE-ICAE can be viewed as a monolithic integration of public-key encryption, key agreement over ideal lattices, identity concealment, and digital signature. The security of RLWE-ICAE relies directly on the Ring Learning with Errors (RLWE) assumption. Two concrete choices of parameters are provided in the end.
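
As a hedged illustration of the underlying assumption (not the RLWE-ICAE scheme itself), the following sketch generates a toy RLWE sample $(a, b = a \cdot s + e)$ in the ring $Z_q[x]/(x^n+1)$; the parameters are hypothetical and far too small to be secure.

```python
import numpy as np

# Toy parameters for illustration only (far too small for any security).
n, q = 8, 257          # ring dimension and modulus

rng = np.random.default_rng(0)

def ring_mul(a, b):
    """Multiply two polynomials in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    full = np.convolve(a, b)                   # product of degree up to 2n-2
    res = full[:n].copy()
    res[: len(full) - n] -= full[n:]           # reduce using x^n = -1
    return res % q

# An RLWE sample: under the RLWE assumption, b is pseudorandom given a.
s = rng.integers(-1, 2, n)                     # small secret polynomial
e = rng.integers(-2, 3, n)                     # small error polynomial
a = rng.integers(0, q, n)                      # uniform public polynomial
b = (ring_mul(a, s) + e) % q
```

The hardness of recovering `s` (or distinguishing `b` from uniform) at cryptographic ring dimensions is exactly the assumption the scheme's security rests on.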

###### Marc Fyrbiak, Sebastian Wallat, Jonathan Déchelotte, Nils Albartus, Sinan Böcker, Russell Tessier, Christof Paar

ePrint Report
In today’s Integrated Circuit (IC) production chains, a designer’s valuable Intellectual Property (IP) is transparent to diverse stakeholders and thus inevitably prone to piracy. To protect against this threat, numerous defenses based on the obfuscation of a circuit’s control path, i.e., its Finite State Machine (FSM), have been proposed and are commonly believed to be secure. However, the security of these sequential obfuscation schemes is doubtful, since realistic capabilities of reverse engineering and subsequent manipulation are commonly neglected in the security analysis. The contribution of our work is threefold: First, we demonstrate how high-level control path information can be automatically extracted from third-party, gate-level netlists. To this end, we extend state-of-the-art reverse engineering algorithms to deal with Field Programmable Gate Array (FPGA) gate-level netlists equipped with FSM obfuscation. Second, on the basis of realistic reverse engineering capabilities, we carefully review the security of state-of-the-art FSM obfuscation schemes. We reveal several generic strategies that bypass allegedly secure FSM obfuscation schemes, and we practically demonstrate our attacks for several hardware designs, including cryptographic IP cores. Third, we present the design and implementation of Hardware Nanomites, a novel obfuscation scheme based on partial dynamic reconfiguration that generically mitigates existing algorithmic reverse engineering.

#### 07 October 2019

###### Karim Baghery

ePrint Report
In ASIACRYPT 2016, Bellare, Fuchsbauer, and Scafuro studied the security of non-interactive zero-knowledge (NIZK) arguments in the face of parameter subversion. They showed that achieving subversion soundness (soundness without trusting the third party) and standard zero-knowledge at the same time is impossible. On the positive side, in the best case, they showed that one can achieve subversion zero-knowledge (zero-knowledge without trusting the third party) and soundness at the same time. In this paper, we show that one can amplify their best positive result and construct NIZK arguments that achieve subversion zero-knowledge and $\textit{simulation}$ (knowledge) soundness at the same time. Simulation (knowledge) soundness is a stronger notion than (knowledge) soundness, as it also guarantees non-malleability of proofs. Such a stronger security guarantee is a must in practical systems. To prove the result, we show that given a NIZK argument that achieves Sub-ZK and (knowledge) soundness, one can use an OR-based construction to define a new language and build a NIZK argument that guarantees Sub-ZK and $\textit{simulation}$ (knowledge) soundness at the same time. We instantiate the construction with the state-of-the-art zk-SNARK proposed by Groth [Eurocrypt 2016] and obtain an efficient SNARK that guarantees Sub-ZK and simulation knowledge soundness.

###### Martin R. Albrecht, Vlad Gheorghiu, Eamonn W. Postlethwaite, John M. Schanck

ePrint Report
Quantum variants of lattice sieve algorithms are often used to assess the security of lattice based cryptographic constructions. In this work we provide a heuristic, non-asymptotic, analysis of the cost of several algorithms for near neighbour search on high dimensional spheres. These algorithms are used in lattice sieves. We design quantum circuits for near neighbour algorithms and provide software that numerically optimises algorithm parameters according to various cost metrics. Using this software we estimate the cost of classical and quantum near neighbour search on spheres. We find that quantum search may provide a small speedup in dimensions of cryptanalytic interest, but only under exceedingly optimistic physical and algorithmic assumptions.

###### Morten Øygarden, Patrick Felke, Håvard Raddum, Carlos Cid

ePrint Report
EFLASH is a multivariate public-key encryption scheme proposed by Cartor and Smith-Tone at SAC 2018. In this paper we investigate the hardness of solving the particular equation systems arising from EFLASH, and show that the solving degree for these types of systems is much lower than estimated by the authors. We show that a Gröbner basis algorithm will produce degree fall polynomials at a low degree for EFLASH systems. In particular we are able to accurately predict the number of these polynomials occurring at step degrees 3 and 4 in our attacks. We performed several experiments using the computer algebra system MAGMA, which indicate that the solving degree is at most one higher than the one where degree fall polynomials occur; moreover, our experiments show that whenever the predicted number of degree fall polynomials is positive, it is exact. Our conclusion is that EFLASH does not offer the level of security claimed by the designers. In particular, we estimate that the EFLASH version with 80-bit security parameters offers at most 69 bits of security.

###### Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, Peter Rindal, Peter Scholl

ePrint Report
We consider the problem of securely generating useful instances of two-party correlations, such as many independent copies of a random oblivious transfer (OT) correlation, using a small amount of communication. This problem is motivated by the goal of secure computation with silent preprocessing, where a low-communication input-independent setup, followed by local ("silent") computation, enables a lightweight "non-cryptographic" online phase once the inputs are known.

Recent works of Boyle et al. (CCS 2018, Crypto 2019) achieve this goal with good concrete efficiency for useful kinds of two-party correlations, including OT correlations, under different variants of the Learning Parity with Noise (LPN) assumption, and using a small number of "base" oblivious transfers. The protocols of Boyle et al. have several limitations. First, they require a large number of communication rounds. Second, they are only secure against semi-honest parties. Finally, their concrete efficiency estimates are not backed by an actual implementation. In this work we address these limitations, making three main contributions:

- Eliminating interaction. Under the same assumption, we obtain the first concretely efficient 2-round protocols for generating useful correlations, including OT correlations, in the semi-honest security model. This implies the first efficient 2-round OT extension protocol of any kind and, more generally, protocols for non-interactive secure computation (NISC) that are concretely efficient and have the silent preprocessing feature.

- Malicious security. We provide security against malicious parties (in the random oracle model) without additional interaction and with only a modest concrete overhead; prior to our work, no similar protocols were known with any number of rounds.

- Implementation. Finally, we implemented, optimized, and benchmarked our 2-round OT extension protocol, demonstrating that it offers a more attractive alternative to the OT extension protocol of Ishai et al. (Crypto 2003) in many realistic settings.
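
For intuition, the target object (a random OT correlation) can be written down directly by a trusted dealer, as in the illustrative sketch below; the point of the protocols above is to produce many such tuples *without* a dealer, using only a small amount of communication. This is not the paper's construction, just the correlation it generates.

```python
import secrets

def deal_random_ot():
    """Trusted-dealer sketch of one random OT correlation instance."""
    m0 = secrets.token_bytes(16)       # sender's first random message
    m1 = secrets.token_bytes(16)       # sender's second random message
    b = secrets.randbits(1)            # receiver's random choice bit
    sender = (m0, m1)                  # sender learns both messages
    receiver = (b, (m0, m1)[b])        # receiver learns b and m_b only
    return sender, receiver

sender, receiver = deal_random_ot()
b, mb = receiver
assert mb == sender[b]
```

Many independent copies of this correlation, consumed in the online phase, are what make the subsequent secure computation "non-cryptographic" and lightweight.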

###### Payman Mohassel, Mike Rosulek, Ni Trieu

ePrint Report
Clustering is a common technique for data analysis, which aims to partition data into similar groups. When the data comes from different sources, it is highly desirable to maintain the privacy of each database. In this work, we study a popular clustering algorithm (K-means) and adapt it to the privacy-preserving context.

Our main contributions are two proposed protocols: i) a communication-efficient protocol for secure two-party multiplication, and ii) a protocol for batched Euclidean squared distance in the amortized setting, where one needs to compute the distances from the same point to many other points. These protocols are key building blocks in many real-world applications such as biometric identification. Furthermore, we construct a customized garbled circuit for computing the minimum value among shared values.

We implement and evaluate our protocols to demonstrate their practicality, and show that they can handle datasets much larger than those in previous work. For example, our scheme can partition a dataset of 100,000 points into 5 groups in under one hour. The numerical results also show that the proposed protocol achieves 91.68% accuracy compared to a plaintext K-means clustering algorithm.
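
The batching idea can be seen from the plaintext identity $\|x-y\|^2 = \|x\|^2 + \|y\|^2 - 2\langle x, y\rangle$: distances from one point to many others reuse $\|x\|^2$ and need only one inner product each, which is the quantity the protocols compute under secret sharing. A minimal plaintext sketch (no security, intuition only):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4)          # one query point
Y = rng.standard_normal((5, 4))     # five other points

# Batched squared distances via the decomposition above: the terms
# sum(x**2) and sum(Y**2, axis=1) are computed once, so the only
# per-pair work is the inner product Y @ x.
batched = np.sum(x**2) + np.sum(Y**2, axis=1) - 2 * (Y @ x)

# Sanity check against the naive per-pair computation.
naive = np.sum((Y - x) ** 2, axis=1)
assert np.allclose(batched, naive)
```

In the secure setting, only the inner products require interaction, which is what makes the amortized batched protocol communication-efficient.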

###### Srimanta Bhattacharya, Mridul Nandi

ePrint Report
Very recently (in CRYPTO 2017), Dai, Hoang, and Tessaro introduced the Chi-square method ($\chi^2$ method), which can be applied to obtain an upper bound on the statistical distance between two joint probability distributions. The authors applied this method to prove the pseudorandom function security (PRF-security) of the sum of two random permutations. In this work, we revisit their proof and find a non-trivial gap in it. We plug this gap for two specific cases and state the general case as an assumption whose proof is essential for the completeness of the proof by Dai et al. A complete, correct, and transparent proof of the full security of the sum-of-two-random-permutations construction is much desired, especially due to its importance and two-decades-old legacy. The proposed $\chi^2$ method seems to have potential for application to similar problems, where a similar gap may creep into a proof. These considerations motivate us to communicate our observation in a formal way.

On the positive side, we provide a very simple proof of the PRF-security of the truncated random permutation construction (a method to construct a PRF from a random permutation) using the $\chi^2$ method. We note that a proof of the PRF-security due to Stam is already known for this construction in a purely statistical context. However, the use of the $\chi^2$ method makes the proof much simpler.
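
The truncated random permutation construction itself is simple to model: apply a random permutation on $n$-bit strings and keep only $m$ output bits. The toy sketch below uses a tiny domain (real constructions use a block cipher in place of an explicit shuffled table) and is illustrative only.

```python
import secrets

n, m = 16, 8                       # permutation width and truncated width

# Model a uniformly random permutation of {0,1}^n by shuffling the whole
# domain (feasible only for tiny n).
domain = list(range(2 ** n))
secrets.SystemRandom().shuffle(domain)

def truncated_perm(x: int) -> int:
    """The construction: top m bits of the permutation applied to x."""
    return domain[x] >> (n - m)

outputs = [truncated_perm(x) for x in range(2 ** n)]

# Because the underlying map is a bijection, every m-bit value has exactly
# 2^(n-m) preimages; unlike the untruncated permutation, collisions occur,
# which is what lets the construction behave like a random function.
assert len(set(outputs)) == 2 ** m
```

The PRF-security question the $\chi^2$ method answers is how close the distribution of these truncated outputs is to that of a truly random function.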

###### Divesh Aggarwal, Maciej Obremski, João Ribeiro, Luisa Siniscalchi, Ivan Visconti

ePrint Report
For more than 30 years, cryptographers have been looking for public sources of uniform randomness in order to use them as a set-up to run appealing cryptographic protocols without relying on trusted third parties. Unfortunately, it is nowadays fair to say that assuming the existence of physical phenomena producing public uniform randomness is far from realistic.

It is known that uniform randomness cannot be extracted from a single weak source. A well-studied way to overcome this is to consider several independent weak sources. However, this means we must trust the various sampling processes of weak randomness from physical processes.

Motivated by the above state of affairs, this work considers a set-up where players can access multiple potential sources of weak randomness, several of which may be jointly corrupted by a computationally unbounded adversary. We introduce SHELA (Somewhere Honest Entropic Look Ahead) sources to model this situation.

We show that there is no hope of extracting uniform randomness from a SHELA source. Instead, we focus on the task of Somewhere-Extraction (i.e., outputting several candidate strings, some of which are uniformly distributed -- yet we do not know which). We give explicit constructions of Somewhere-Extractors for SHELA sources with good parameters.

Then, we present applications of the above somewhere-extractor where the public uniform randomness can be replaced by the output of such extraction from corruptible sources, greatly outperforming trivial solutions. The output of somewhere-extraction is also useful in other settings, such as a suitable source of random coins for many randomized algorithms.

In another front, we comprehensively study the problem of Somewhere-Extraction from a weak source, resulting in a series of bounds. Our bounds highlight the fact that, in most regimes of parameters (including those relevant for applications), SHELA sources significantly outperform weak sources of comparable parameters both when it comes to the process of Somewhere-Extraction, or in the task of amplification of success probability in randomized algorithms. Moreover, the low quality of somewhere-extraction from weak sources excludes its use in various efficient applications.

###### José Bacelar Almeida, Cécile Baritel-Ruet, Manuel Barbosa, Gilles Barthe, François Dupressoir, Benjamin Grégoire, Vincent Laporte, Tiago Oliveira, Alley Stoughton, Pierre-Yves Strub

ePrint Report
We present a high-assurance and high-speed implementation of the SHA-3 hash function. Our implementation is written in the Jasmin programming language, and is formally verified for functional correctness, provable security, and timing-attack resistance in the EasyCrypt proof assistant. It is the first implementation to simultaneously achieve all four desirable properties (efficiency, correctness, provable security, and side-channel protection) for a non-trivial cryptographic primitive.
Concretely, our mechanized proofs show that: 1) the SHA-3 hash function is indifferentiable from a random oracle, and thus is resistant against collision, first and second preimage attacks; 2) the SHA-3 hash function is correctly implemented by a vectorized x86 implementation. Furthermore, the implementation is provably protected against timing attacks in an idealized model of timing leaks. The proofs include new EasyCrypt libraries of independent interest for programmable random oracles and modular indifferentiability proofs.
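
For reference, the primitive being verified is the standardized SHA-3; the snippet below merely exercises it through Python's standard library to show the expected interface and digest size, and is unrelated to the paper's Jasmin/x86 artifact.

```python
import hashlib

# SHA3-256 maps arbitrary-length input to a 256-bit digest.
digest = hashlib.sha3_256(b"IACR").hexdigest()
assert len(digest) == 64            # 256 bits, hex-encoded

# Determinism: the same input always yields the same digest.
assert digest == hashlib.sha3_256(b"IACR").hexdigest()
```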

###### Orr Dunkelman, Nathan Keller, Eyal Ronen, Adi Shamir

ePrint Report
Boomerang attacks are extensions of differential attacks that make it possible to combine two unrelated differential properties of the first and second parts of a cryptosystem, with probabilities $p$ and $q$, into a new differential-like property of the whole cryptosystem with probability $p^2q^2$ (since each one of the properties has to be satisfied twice). In this paper we describe a new version of boomerang attacks which uses the counterintuitive idea of throwing out most of the data (including potentially good cases) in order to force equalities between certain values on the ciphertext side. This creates a correlation between the four probabilistic events, which increases the probability of the combined property to $p^2q$ and increases the signal-to-noise ratio of the resultant distinguisher. We call this variant a retracing boomerang attack, since we make sure that the boomerang we throw follows the same path in its forward and backward directions.
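
The probability gain can be made concrete with hypothetical values of $p$ and $q$ (chosen here purely for illustration):

```python
# A classical boomerang needs four independent events, giving p^2 * q^2;
# the retracing variant correlates two of them, giving p^2 * q.
p, q = 2 ** -6, 2 ** -6

classical = p**2 * q**2     # 2^-24 for these toy values
retracing = p**2 * q        # 2^-18 for these toy values

# The improvement is exactly a factor of 1/q.
assert retracing / classical == 1 / q
```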

To demonstrate the power of the new technique, we apply it to the case of 5-round AES. This version of AES was repeatedly attacked by a large variety of techniques, but for twenty years its complexity had remained stuck at $2^{32}$. At Crypto'18 it was finally reduced to $2^{24}$ (for full key recovery), and with our new technique we can further reduce the complexity of full key recovery to the surprisingly low value of $2^{16.5}$ (i.e., only 90,000 encryption/decryption operations are required for a full key recovery on half the rounds of AES).

In addition to improving previous attacks, our new technique unveils a hidden relationship between boomerang attacks and two other cryptanalytic techniques, the yoyo game and the recently introduced mixture differentials.

###### Ivan Damgard, Helene Haagh, Rebekah Mercer, Anca Nitulescu, Claudio Orlandi, Sophia Yakoubov

ePrint Report
Off-the-Record (OTR) messaging is a protocol used to authenticate messages while also giving senders plausible deniability. Multi-Designated Verifier Signatures (MDVS) are a primitive that allows for OTR to be extended to handle group messaging. In group OTR, the sender wants the designated verifiers to be able to authenticate the messages (that is, the signature should be unforgeable), but even if some verifiers are corrupt and collude, they should not be able to prove authenticity to any outsiders (that is, the signature should be source-hiding). We additionally require consistency, meaning that if any one of the designated verifiers can verify an honestly produced signature, then all of them can.

The contributions of this paper are two-fold: stronger definitions, and new constructions meeting those definitions. Existing literature defines and builds limited notions of MDVS, where source-hiding only holds when all verifiers are corrupt. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency.

We give two constructions of our stronger notion of MDVS: one from functional encryption, and one from standard primitives such as pseudorandom functions, pseudorandom generators, key agreement and NIZKs. The second construction has somewhat larger signatures, but does not require a trusted setup.

###### Jonas Krautter, Dennis R.E. Gnad, Falk Schellenberg, Amir Moradi, Mehdi B. Tahoori

ePrint Report
Dynamic and partial reconfiguration, together with hardware parallelism, make FPGAs attractive as virtualized accelerators. However, it has recently been shown that multi-tenant FPGAs are vulnerable to remote side-channel attacks (SCA) from malicious users, allowing them to extract secret keys without a logical connection to the victim core. Typical mitigations against such attacks are hiding and masking schemes, which increase the attacker's effort in terms of side-channel measurements. However, they require significant effort and tailoring for a specific algorithm, hardware implementation, and mapping. In this paper, we show a hiding countermeasure against voltage-based SCA that can be integrated into any implementation, without requiring modifications or tailoring of the protected module. We place a properly mapped Active Fence of ring oscillators between the victim and attacker circuits, enabled by feedback from an FPGA-based sensor, leading to reduced side-channel leakage. Our experimental results, based on a Lattice ECP5 FPGA and an AES-128 module, show that two orders of magnitude more traces are needed for a successful key recovery, while no modifications to the underlying cryptographic module are necessary.

###### Yusuke Yoshida, Fuyuki Kitagawa, Keisuke Tanaka

ePrint Report
Non-committing encryption (NCE) was introduced by Canetti et al. (STOC '96). Informally, an encryption scheme is non-committing if it can generate a dummy ciphertext that is indistinguishable from a real one. The dummy ciphertext can be opened to any message later by producing a secret key and an encryption random coin which ``explain'' the ciphertext as an encryption of the message. Canetti et al. showed that NCE is a central tool for achieving multi-party computation protocols secure in the adaptive setting. An important measure of the efficiency of NCE is the ciphertext rate, that is, the ciphertext length divided by the message length, and previous works studying NCE have focused on constructing NCE schemes with better ciphertext rates.
We propose an NCE scheme satisfying the ciphertext rate $\mathcal{O}(\log \lambda)$ based on the decisional Diffie-Hellman (DDH) problem, where $\lambda$ is the security parameter. The proposed construction achieves the best ciphertext rate among existing constructions proposed in the plain model, that is, the model without common reference strings. Prior to our work, the NCE scheme with the best ciphertext rate based on the DDH problem was the one proposed by Choi et al.~(ASIACRYPT '09), which has ciphertext rate $\mathcal{O}(\lambda)$. Our construction of NCE is similar in spirit to the recent construction of the trapdoor function proposed by Garg and Hajiabadi (CRYPTO '18).
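
A back-of-the-envelope comparison of the two asymptotic rates (constants are ignored, so the numbers are only indicative of the gap, not exact ciphertext sizes):

```python
import math

lam = 128                       # security parameter

# Ciphertext rate = ciphertext length / message length.
rate_choi = lam                 # O(lambda): Choi et al. (ASIACRYPT '09)
rate_new = math.log2(lam)       # O(log lambda): the proposed scheme

# At lambda = 128 the asymptotic gap is 128 vs. 7, roughly an 18x
# reduction in ciphertext expansion (again, ignoring constants).
assert rate_new < rate_choi
```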

###### Alexandru Cojocaru, Juan Garay, Aggelos Kiayias, Fang Song, Petros Wallden

ePrint Report
Bitcoin and its underlying blockchain protocol have recently received significant attention in the context of building distributed systems, as well as from the perspective of the foundations of the consensus problem.
At the same time, the rapid development of quantum technologies brings the possibility of quantum computing devices from a theoretical concept to an emerging technology. Motivated by this, in this work we revisit the formal security of the core of the Bitcoin protocol, called the Bitcoin backbone, under the assumption that the adversary has access to a scalable quantum computer. We prove that the protocol's essential properties stand in the post-quantum setting, assuming a suitably bounded quantum adversary in the Quantum Random Oracle (QRO) model. Specifically, our results imply that security can be shown by bounding the quantum queries so that each quantum query is worth $O(p^{-1/2})$ classical ones, and that the wait time for safe settlement is expanded by a multiplicative factor of $O(p^{-1/6})$, where $p$ is the probability of success of a single classical query to the protocol's underlying hash function.

###### Cristina Pérez-Solà, Alejandro Ranchal-Pedrosa, Jordi Herrera-Joancomartí, Guillermo Navarro-Arribas, Joaquin Garcia-Alfaro

ePrint Report
The Lightning Network (LN) is a payment network running as a second layer on top of Bitcoin and other blockchains. This paper presents the possibility of performing a balance lockdown in the LN due to misbehaving nodes associated with a given channel. We formalize and introduce a practical attack that minimizes its own economic cost. We present results that validate our claims, and provide experimental results and potential countermeasures to handle the problem.

###### Benjamin R. Curtis, Rachel Player

ePrint Report
In November 2018, the HomomorphicEncryption.org consortium published the Homomorphic Encryption Security Standard.
The Standard recommends several sets of Learning with Errors (LWE) parameters that can be selected by application developers to achieve a target security level \( \lambda \in \{128,192,256\} \).
These parameter sets all involve a power-of-two dimension \( n \leq 2^{15} \), an error distribution of standard deviation \( \sigma \approx 3.19 \), and a secret whose coefficients are either chosen uniformly in \( Z_q \), chosen according to the error distribution, or chosen uniformly in \( \{ -1, 0, 1\} \).
These parameter sets do not necessarily reflect implementation choices in the most commonly used homomorphic encryption libraries.
For example, several libraries support dimensions that are not a power of two.
Moreover, all known implementations for bootstrapping for the CKKS, BFV and BGV schemes use a sparse secret and a large ring dimension such as \( n \in \{ 2^{16}, 2^{17} \} \), and advanced applications such as logistic regression have used equally large dimensions.
This motivates the community to consider widening the recommended parameter sets, and the purpose of this paper is to investigate such possible extensions.
We explore the security of possible sparse-secret LWE parameter sets, taking into account hybrid attacks, which are often the most competitive in the sparse-secret regime.
We present a conservative analysis of the hybrid decoding and hybrid dual attacks for parameter sets of varying sparsity, with the goal of balancing security requirements with bootstrapping efficiency.
We also show how the methodology in the Standard can be easily adapted to support parameter sets with power-of-two dimension \( n \geq 2^{16} \).
We conclude with a number of discussion points to motivate future improvements to the Standard.
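
As a minimal illustration of the kind of instance the parameter sets above describe, the sketch below generates a toy LWE sample with the Standard's error width and a ternary secret; the dimension is shrunk drastically, the Gaussian is only approximated by rounding, and nothing here is secure.

```python
import numpy as np

# Toy parameters: sigma and the ternary secret follow the Standard's
# description; n and q are illustrative only.
n, q, sigma = 16, 2 ** 13, 3.19
rng = np.random.default_rng(2)

s = rng.integers(-1, 2, n)                          # uniform in {-1, 0, 1}
A = rng.integers(0, q, (n, n))                      # uniform public matrix
e = np.rint(rng.normal(0, sigma, n)).astype(int)    # rounded Gaussian error

# The LWE instance is (A, b); recovering s from it is the hard problem.
b = (A @ s + e) % q
```

A sparse-secret variant, as used in bootstrapping, would additionally fix the number of nonzero entries of `s` to a small Hamming weight, which is precisely what makes hybrid attacks competitive.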

###### Steve Thakur

ePrint Report
In this short paper, we provide protocols to batch and aggregate multiple non-membership proofs into a single proof of constant size with bilinear accumulators. We subsequently use the accumulator to construct a bilinear Vector Commitment with constant-sized openings and a linear public parameter. We also provide ways to speed up the verification of membership and non-membership proofs, and to shift most of the computational burden from the Verifier to the Prover. Furthermore, we have designed the protocols so that the Verifier needs only a constant amount of storage for verification, despite the linear public parameter. Since all the protocols are public-coin, they can be made non-interactive with the Fiat-Shamir heuristic.

#### 06 October 2019

###### Warsaw, Poland, 16 January - 17 January 2020

Event Calendar
Event date: 16 January to 17 January 2020

Submission deadline: 31 October 2019

Notification: 30 November 2019

###### Shanghai, China, 13 December - 15 December 2019

Event Calendar
Event date: 13 December to 15 December 2019

Submission deadline: 30 October 2019

Notification: 10 November 2019
