IACR News
Here you can see all recent updates to the IACR webpage.
28 April 2020
Rohit Chatterjee, Xiao Liang, Omkant Pandey
ePrint Report
We close the gap between black-box and non-black-box constructions of $\mathit{composable}$ secure multiparty computation in the plain model under the $\mathit{minimal}$ assumption of semi-honest oblivious transfer. The notion of protocol composition we target is $\mathit{angel\text{-}based}$ security, or more precisely, security with super-polynomial helpers. In this notion, both the simulator and the adversary are given access to an oracle called an $\mathit{angel}$ that can perform some predefined super-polynomial time task. Angel-based security maintains the attractive properties of the universal composition framework while providing meaningful security guarantees in complex environments without having to trust anyone.
Angel-based security can be achieved using non-black-box constructions in $\max(R_{\mathsf{OT}},\widetilde{O}(\log n))$ rounds, where $R_{\mathsf{OT}}$ is the round complexity of the semi-honest oblivious transfer protocol. However, the best known $\mathit{black\text{-}box}$ constructions under the same assumption currently require $\max(R_{\mathsf{OT}},\widetilde{O}(\log^2 n))$ rounds. If $R_{\mathsf{OT}}$ is a constant, the gap between non-black-box and black-box constructions can be a multiplicative factor of $\log n$. We close this gap by presenting a $\max(R_{\mathsf{OT}},\widetilde{O}(\log n))$-round black-box construction. We achieve this result by constructing constant-round 1-1 CCA-secure commitments assuming only black-box access to one-way functions.
Gennaro Avitabile, Vincenzo Botta, Vincenzo Iovino, Ivan Visconti
ePrint Report
Mass surveillance is more easily achieved by leveraging the population's fear and its desire to feel protected during devastating events. In such cases, governments gain greater legitimacy to adopt exceptional measures that limit civil rights, usually receiving broad support from their citizens.
The COVID-19 pandemic is currently affecting the freedom and lifestyle of many citizens around the world. People are forced to stay home for weeks, unemployment rates rise quickly, and uncertainty and sadness generate a compelling desire to join any government effort to stop the spread of the virus as soon as possible.
Following the recommendations of epidemiologists, governments are proposing smartphone applications that enable automatic contact tracing of citizens. Such systems can be an effective way to defeat the spread of the SARS-CoV-2 virus, since they save time in identifying potentially infected persons who should therefore be quarantined. This raises the natural question of whether this form of automatic contact tracing can become a subtle weapon for governments to violate the privacy of their citizens as part of new and more sophisticated mass surveillance programs.
In order to preserve privacy while contributing to the containment of the pandemic, several research partnerships are proposing privacy-preserving contact-tracing systems in which pseudonyms are updated periodically to avoid linkability attacks. A core component of such systems is Bluetooth Low Energy (BLE, for short), a technology that allows two smartphones to detect that they are in close proximity. Among these systems, proposals such as DP-3T, PACT, and the Apple&Google exposure notification system follow a decentralized approach and guarantee better privacy properties than centralized approaches (e.g., PEPP-PT-NTK, PEPP-PT-ROBERT). On the other hand, advocates of centralized approaches claim that centralization gives epidemiologists more useful data, allowing them to take more effective action against the virus.
Motivated by Snowden's revelations about previous government attempts to realize mass surveillance programs, in this paper we first analyze mass surveillance attacks that leverage weaknesses of automatic contact tracing systems. We focus in particular on the DP-3T system (though our analysis is also significant for PACT and the Apple&Google system), which has been endorsed by Apple&Google. This endorsement means that the forthcoming updates of Android and iOS will integrate special features, such as rotating the BLE MAC address of the smartphone synchronously with the updates of the DP-3T pseudonyms.
Based on recent literature and new findings, we discuss how a government can exploit the use of DP-3T to successfully mount privacy attacks as part of a mass surveillance program.
Interestingly, we also show that the privacy issues of DP-3T are not intrinsic to BLE-based contact tracing systems. Indeed, we propose a different system, named $\textsf{Pronto-C2}$, that, in our view, enjoys much better resilience to mass surveillance attacks while still relying on BLE. $\textsf{Pronto-C2}$ is based on a paradigm shift: instead of asking smartphones to send keys to the Big Brother (the approach of DP-3T), we construct a decentralized BLE-based automatic contact tracing (ACT) system in which smartphones anonymously and confidentially talk to each other in the presence of the Big Brother.
$\textsf{Pronto-C2}$ can optionally be implemented using blockchain technology, offering complete transparency and resilience through full decentralization, and is therefore more appealing to citizens. Contact-tracing systems can only be truly useful in defeating COVID-19 if citizens participate on a large scale, and our proposal goes straight in this direction.
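To make the attack surface concrete, the following minimal Python sketch (our illustration, not the DP-3T specification; the key schedule, epoch count, and labels are simplified assumptions) shows the rotating-pseudonym mechanism at issue: daily keys are hashed forward and expanded into ephemeral BLE identifiers, so uploading a single daily key lets anyone, including a Big Brother, recompute and link all of that day's pseudonyms.

```python
# Minimal sketch (NOT the DP-3T spec) of rotating BLE pseudonyms:
# each day's secret key SK_t is hashed forward from the previous one,
# and the day's ephemeral IDs (EphIDs) are derived from SK_t via a PRF.
# An infected user uploads SK_t; anyone can recompute the EphIDs below,
# which is exactly the linkability surface discussed above.
import hashlib, hmac

EPHIDS_PER_DAY = 96  # assumed value: one EphID per 15-minute epoch

def next_day_key(sk: bytes) -> bytes:
    return hashlib.sha256(sk).digest()

def ephids_for_day(sk: bytes) -> list[bytes]:
    prf = hmac.new(sk, b"broadcast key", hashlib.sha256).digest()
    # Expand the PRF output into one 16-byte EphID per epoch (counter mode).
    out = []
    for i in range(EPHIDS_PER_DAY):
        block = hashlib.sha256(prf + i.to_bytes(4, "big")).digest()
        out.append(block[:16])
    return out

sk0 = b"\x00" * 32
sk1 = next_day_key(sk0)
print(ephids_for_day(sk1)[0].hex())  # a pseudonym broadcast over BLE
```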
Ran Canetti, Nikolaos Makriyannis, Udi Peled
ePrint Report
Building on the protocol of Gennaro and Goldfeder (CCS 18), we present a threshold ECDSA protocol, for any number of signatories and any threshold, that improves over the state of the art as follows:
* Signature generation takes only 4 rounds (down from the current 8), with comparable computational cost. Furthermore, 3 of these rounds can take place in a preprocessing stage before the message to be signed is known, yielding the first non-interactive threshold ECDSA protocol.
* The protocol withstands adaptive corruption of signatories. Furthermore, it includes a periodic refresh mechanism and offers full proactive security.
* The protocol realizes an ideal threshold signature functionality within the UC framework, in the global random oracle model, assuming Strong RSA, semantic security of Paillier encryption, and a somewhat enhanced variant of existential unforgeability of ECDSA.
These properties (low latency, compatibility with cold-wallet architectures, proactive security, and composable security) make the protocol ideal for threshold wallets for ECDSA-based cryptocurrencies.
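For intuition, recall single-signer ECDSA (shown below; this is textbook ECDSA, not the threshold protocol itself): everything except the final equation is independent of the message, which is what makes a message-independent preprocessing stage followed by a single online round possible.

```latex
\begin{align*}
  k &\xleftarrow{\$} \mathbb{Z}_q^*, \qquad R = k \cdot G, \qquad r = R_x \bmod q
      && \text{(message-independent, can be preprocessed)}\\
  s &= k^{-1}\,\bigl(H(m) + r \cdot x\bigr) \bmod q
      && \text{(requires the message } m\text{)}
\end{align*}
```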
Hilder Vitor Lima Pereira
ePrint Report
We propose a leveled homomorphic encryption scheme based on the Approximate
Greatest Common Divisor (AGCD) problem that operates natively on vectors and
matrices.
To overcome the limitation of large ciphertext expansion that is typical in
AGCD-based schemes, we randomize the ciphertexts with a hidden matrix,
which allows us to choose smaller parameters.
To be able to efficiently evaluate circuits with large multiplicative depth, we
use a decomposition technique à la GSW.
The running times and ciphertext sizes are practical: for instance, for
100 bits of security, we can
perform a sequence of 128 homomorphic products between
128-dimensional vectors and
$128\times 128$ matrices in less than one second.
We show how to use our scheme to homomorphically evaluate
nondeterministic finite automata
and also a Naïve Bayes Classifier.
We also present a generalization of the GCD attacks against some variants of the AGCD problem.
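For readers unfamiliar with AGCD-based encryption, here is a toy single-bit sketch in the style of DGHV (our simplification; the scheme above operates natively on vectors and matrices and uses additional randomization): ciphertexts are noisy multiples of a secret integer $p$, and addition and multiplication of ciphertexts act on the hidden bits.

```python
# Toy sketch of AGCD-based encryption (DGHV-style, single bits only):
# ciphertexts are "near multiples" of a secret integer p: c = q*p + 2*r + m.
import secrets

P_BITS, Q_BITS, R_BITS = 128, 512, 16   # toy parameters, NOT secure

p = secrets.randbits(P_BITS) | 1         # secret odd integer

def enc(m: int) -> int:
    q = secrets.randbits(Q_BITS)
    r = secrets.randbits(R_BITS)
    return q * p + 2 * r + m             # noise 2r keeps m in the low bit

def dec(c: int) -> int:
    return (c % p) % 2

a, b = enc(1), enc(1)
assert dec(a + b) == 0    # homomorphic XOR
assert dec(a * b) == 1    # homomorphic AND (noise grows multiplicatively)
```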
Thomas Haines, Johannes Mueller
ePrint Report
Since David Chaum introduced the idea of mix nets 40 years ago, they have become widely used building blocks for privacy-preserving protocols. Several important applications, such as secure e-voting, require that the employed mix net be verifiable. In the literature, numerous techniques have been proposed to make mix nets verifiable. Some of them have also been employed in politically binding elections.
Verifiable mix nets differ in many aspects, including their precise verifiability levels, possible trust assumptions, and required cryptographic primitives; unfortunately, these differences are often opaque, making comparison painful.
To shed light on this opaque state of affairs, we provide the following contributions. For each verifiability technique proposed to date, we first precisely describe how the underlying basic mix net is to be extended and which (additional) cryptographic primitives are required, and then study its verifiability level, including possible trust assumptions, within one generic and expressive verifiability framework. Based on our uniform treatment, we are able to transparently compare all known verifiability techniques for mix nets, including their advantages and limitations.
Altogether, our work offers a detailed and expressive reference point for the design, employment, and comparison of verifiable mix nets.
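As background, a minimal re-encryption mix net, one of the basic designs the surveyed verifiability techniques build on, can be sketched as follows (toy parameters and no verifiability: a verifiable mix net would additionally publish a proof of correct shuffle):

```python
# Minimal re-encryption mix: the mixer re-randomizes each ElGamal
# ciphertext and shuffles, so outputs cannot be linked to inputs.
# Toy group (Z_p^* with a Mersenne prime), NOT secure parameters.
import secrets, random

p = 2**127 - 1                  # Mersenne prime; toy choice
g = 3
x = secrets.randbelow(p - 1)    # decryption key
h = pow(g, x, p)                # public key

def enc(m: int):
    r = secrets.randbelow(p - 1)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def reencrypt(c):
    r = secrets.randbelow(p - 1)
    c1, c2 = c
    return (c1 * pow(g, r, p) % p, c2 * pow(h, r, p) % p)

def mix(batch):
    out = [reencrypt(c) for c in batch]
    random.shuffle(out)
    return out

def dec(c):
    c1, c2 = c
    return c2 * pow(c1, (p - 1) - x, p) % p    # c2 / c1^x

votes = [enc(m) for m in (11, 22, 33)]
mixed = mix(votes)
print(sorted(dec(c) for c in mixed))  # same plaintexts, unlinkable order
```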
Fraunhofer AISEC
ePrint Report
In this paper, we review different approaches on proximity tracing apps which are supposed to automate the current labor-intensive contact tracing approach conducted by national health officials. The purpose of these apps is to automatically notify people who are at risk of being infected with SARS-CoV-2 to interrupt infection chains as early as possible.
However, a privacy-preserving yet functional and scalable design of such apps is not trivial, and in some parts leads to counter-intuitive properties. This paper reviews the most prominent European approaches, DP-3T, the German variant "NTK" of PEPP-PT, and its closely related concept ROBERT. We discuss their design decisions from a privacy perspective and point out the fundamentally different adversary models assumed by the approaches. In addition, we touch on practical aspects such as scalability and ease of implementation.
Yongwoo Lee, Joonwoo Lee, Young-Sik Kim, Jong-Seon No
ePrint Report
Since Cheon et al. introduced an approximate homomorphic encryption scheme for complex numbers, called the Cheon-Kim-Kim-Song (CKKS) scheme, it has been widely applied in real-life situations such as privacy-preserving machine learning.
The polynomial approximation of the modulus reduction is the most difficult part of bootstrapping for the CKKS scheme.
In this paper, we cast the problem of finding an approximate polynomial for the modulus reduction into an $L_2$-norm minimization problem.
As a result, we find an approximate polynomial for the modulus reduction without using the sine function, whose own approximation error limits the accuracy of previous approaches.
With the proposed method, we can reduce the degree of the polynomial required for an approximate modulus reduction, while also reducing the error compared with the most recent result reported by Han et al. (CT-RSA '20).
Consequently, we can achieve a low-error approximation, such that the maximum error is less than $2^{-40}$ for a message-to-modulus ratio $m/q \approx 2^{-10}$.
With the proposed method, the constraint $q = O(m^{3/2})$ is relaxed to $q = O(m)$, and thus the level loss in bootstrapping can be reduced.
The solution of the cast problem is determined efficiently and without iteration.
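The following numpy sketch illustrates the flavor of this approach under simplifying assumptions (toy parameters, with eps playing the role of $m/q$; the paper works in the actual CKKS bootstrapping setting): the modular-reduction function is fitted on small intervals around multiples of $q$ by a single least-squares solve, with no sine function and no iteration.

```python
# Least-squares polynomial approximation of modular reduction on the
# union of small intervals around multiples of q (assumed simplification).
import numpy as np

q, K, eps, deg = 1.0, 3, 2**-10, 15   # toy parameters

# CKKS inputs concentrate in small intervals around multiples of q.
xs = np.concatenate([k * q + np.linspace(-eps, eps, 50)
                     for k in range(-K, K + 1)])
ys = xs - q * np.round(xs / q)        # target: centered modular reduction

# One least-squares solve in the Chebyshev basis -- no iteration needed.
cheb = np.polynomial.Chebyshev.fit(xs, ys, deg)
print("max error on sample points:", np.abs(cheb(xs) - ys).max())
```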
Emmanouil Doulgerakis, Thijs Laarhoven, Benne de Weger
ePrint Report
Motivated by recent results on solving large batches of closest vector problem (CVP) instances, we study how these techniques can be combined with lattice enumeration to obtain faster methods for solving the shortest vector problem (SVP) on high-dimensional lattices.
Theoretically, under common heuristic assumptions we show how to solve SVP in dimension $d$ with a cost proportional to running a sieve in dimension $d - \Theta(d / \log d)$, resulting in a $2^{\Theta(d / \log d)}$ speedup and memory reduction compared to running a full sieve. Combined with techniques from [Ducas, Eurocrypt 2018] we can asymptotically get a total of $[\log(13/9) + o(1)] \cdot d / \log d$ dimensions \textit{for free} for solving SVP.
Practically, the main obstacles for observing a speedup in moderate dimensions appear to be that the leading constant in the $\Theta(d / \log d)$ term is rather small; that the overhead of the (batched) slicer may be large; and that competitive enumeration algorithms heavily rely on aggressive pruning techniques, which appear to be incompatible with our algorithms. These obstacles prevented this asymptotic speedup (compared to full sieving) from being observed in our experiments. However, it could be expected to become visible once optimized CVPP techniques are used in higher dimensional experiments.
Jinyu Lu, Yunwen Liu, Tomer Ashur, Bing Sun, Chao Li
ePrint Report
Rotational-XOR cryptanalysis is a cryptanalytic method aimed at finding distinguishable statistical properties in ARX-C ciphers, i.e., ciphers that can be described using only modular addition, cyclic rotation, XOR, and the injection of constants. In this paper we extend RX-cryptanalysis to AND-RX ciphers, a similar design paradigm where the modular addition is replaced by vectorial bitwise AND; such ciphers include the block cipher families Simon and Simeck. We analyse the propagation of RX-differences through AND-RX rounds and develop a closed-form formula for their expected probability. Finally, we formulate an SMT model for searching for RX-characteristics in Simon and Simeck.
Evaluating our model we find RX-distinguishers of up to 20, 27, and 35 rounds with respective probabilities of $2^{-26}, 2^{-42}$, and $2^{-54}$ for versions of Simeck with block sizes of 32, 48, and 64 bits, respectively, for large classes of weak keys in the related-key model. In most cases, these are the longest published distinguishers for the respective variants of Simeck.
Interestingly, when we apply the model to the block cipher Simon, the best distinguisher we are able to find covers 11 rounds of Simon32 with probability $2^{-24}$. To explain the gap between Simon and Simeck in terms of the number of distinguished rounds, we study the impact of the key schedule and of the specific rotation amounts of the round function on the propagation of RX-characteristics in Simon-like ciphers.
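For concreteness, the objects involved can be sketched as follows (a simplified illustration; the rotation amounts are the standard ones for the 32-bit variants):

```python
# The Simon32 and Simeck32 round functions, and the rotational-XOR (RX)
# difference of a pair (x, x2), i.e. the XOR offset delta = x2 ^ (x <<< gamma).
W = 16                          # word size for the 32-bit variants

def rol(x, r, w=W):
    return ((x << r) | (x >> (w - r))) & ((1 << w) - 1)

def f_simon(x):                 # Simon round function: rotations (1, 8, 2)
    return (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)

def f_simeck(x):                # Simeck round function: rotations (0, 5, 1)
    return (x & rol(x, 5)) ^ rol(x, 1)

def rx_diff(x, x2, gamma=1):    # RX-difference of the pair (x, x2)
    return x2 ^ rol(x, gamma)

x = 0x1234
x2 = rol(x, 1) ^ 0x0003         # pair with input RX-difference 0x0003
print(hex(rx_diff(f_simon(x), f_simon(x2))))   # one sampled output difference
```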
Ruslan V. Skuratovskii
ePrint Report
We consider affine and projective Edwards algebraic curves [3, 9] over the finite field ${{\text{F}}_{{{p}^{n}}}}$. It is known that many modern cryptosystems [11] can be naturally expressed in terms of elliptic curves [5]. We study Edwards algebraic curves over a finite field, which are among the most promising supports of sets of points used for fast group operations \cite{Bir}. We construct a new method for counting the order of an Edwards curve over a finite field. It should be noted that this method can also be applied to the order of elliptic curves, due to the birational equivalence between elliptic curves and Edwards curves.
We not only find a specific set of coefficients with corresponding field characteristics for which these curves are supersingular, but we additionally find a general formula by which one can determine whether a curve ${{E}_{d}}[{{\mathbb{F}}_{p}}]$ is supersingular over this field or not. The proposed method has much lower complexity, $O\left( p\log _{2}^{2}p \right)$, for small values of $p$ than the basic Schoof algorithm, whose complexity is $O(\log_{2}^{8}{{p}^{n}})$, and than the variant of the Schoof algorithm that uses fast arithmetic, whose complexity is $O(\log_{2}^{4}{{p}^{n}})$ but which works only for Elkies or Atkin primes.
We investigate the embedding degree of supersingular Edwards curves over ${{\mathbb{F}}_{{{p}^{n}}}}$ and find the field characteristic for which this degree is minimal. A birational isomorphism between the Montgomery curve and the Edwards curve is also constructed. A one-to-one correspondence between supersingular Edwards curves and supersingular Montgomery curves is established.
A criterion of supersingularity for Edwards curves over ${{\mathbb{F}}_{{{p}^{n}}}}$ is established.
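For reference, one standard birational correspondence between the Edwards and Montgomery forms reads as follows (a textbook map, not necessarily the isomorphism constructed in the paper):

```latex
% Edwards form <-> Montgomery form (standard map, d != 0, 1):
\begin{align*}
  E_d:\; & x^2 + y^2 = 1 + d\,x^2 y^2
  \;\longleftrightarrow\;
  M_{A,B}:\; B v^2 = u^3 + A u^2 + u,\\
  & A = \frac{2(1+d)}{1-d}, \qquad B = \frac{4}{1-d}, \qquad
  (u, v) = \Bigl(\frac{1+y}{1-y},\; \frac{1+y}{(1-y)\,x}\Bigr).
\end{align*}
```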
Aaqib Bashir Dar, Auqib Hamid Lone, Saniya Zahoor, Afshan Amin Khan, Roohie Naaz
ePrint Report
Contact tracing is considered the first and most effective step towards containing an outbreak, as resources for mass testing and large quantities of vaccines are highly unlikely to be available for immediate use. Effective contact tracing can allow societies to reopen from lockdown even before vaccines become available. The objective of mobile contact tracing is to speed up the manual, interview-based contact tracing process so that an outbreak can be contained efficiently and quickly. In this paper we shed light on some of the issues and challenges pertaining to the adoption of mobile contact tracing for fighting COVID-19. In essence, we propose an evaluation framework for mobile contact tracing solutions to determine their feasibility and effectiveness, and we evaluate some of the available contact tracing solutions in light of our proposed framework. Furthermore, we present possible attacks that could be launched against contact tracing solutions, together with the necessary countermeasures to thwart them.
Reza Kaboli, Shahram Khazaei, Maghsoud Parviz
ePrint Report
For more than two decades, proving or refuting the following statement has remained a challenging open problem in the theory of secret sharing schemes (SSSs): every ideal access structure admits an ideal perfect multi-linear SSS. In this paper we consider a weaker statement, asking whether every ideal access structure admits an ideal perfect group-characterizable (GC) SSS. Since the class of GC SSSs is known to include the multi-linear ones (as well as several classes of non-linear schemes), it might turn out that the second statement is not only true but also easier to tackle. Unfortunately, our understanding of GC SSSs is still too basic to tackle the weaker statement. As a first attempt, it is natural to ask if every ideal perfect SSS is equivalent to some GC scheme. The main contribution of this paper is to construct counterexamples using tools from the theory of Latin squares and some recent results developed by the present authors for studying GC SSSs.
Haining Fan
ePrint Report
By associating Fermat's Little Theorem based $GF(2^n)$ inversion algorithms with the multiplicative Norm function,
we present an additive Trace based $GF(2^n)$ inversion algorithm. For elements with Trace value 0, it needs one multiplication fewer than Fermat's Little Theorem based algorithms in some fields $GF(2^n)$.
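As a baseline, Fermat's Little Theorem based inversion computes $a^{-1} = a^{2^n-2}$ by square-and-multiply; the toy Python sketch below (our illustration in $GF(2^8)$ with the AES polynomial, not the paper's algorithm) shows the multiplication count such algorithms aim to reduce.

```python
# Inversion in GF(2^8) (AES polynomial x^8 + x^4 + x^3 + x + 1) via
# Fermat's Little Theorem: a^{-1} = a^{2^n - 2}, by square-and-multiply.
MOD, N = 0x11B, 8

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:          # reduce modulo the field polynomial
            a ^= MOD
        b >>= 1
    return r

def gf_inv(a):
    r, base, e = 1, a, (1 << N) - 2    # exponent 2^8 - 2 = 254
    while e:
        if e & 1:
            r = gf_mul(r, base)        # the multiplications to be saved
        base = gf_mul(base, base)      # squaring (cheap in hardware)
        e >>= 1
    return r

a = 0x53
assert gf_mul(a, gf_inv(a)) == 1       # 0x53 * 0xCA = 1 in AES's field
```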
James You, Qi Zhang, Curtis D'Alves, Bill O'Farrell, Christopher K. Anand
ePrint Report
Due to growing commercial applications like Blockchain, the performance of large-integer arithmetic is the focus of both academic and industrial research. IBM introduced a new integer fused multiply-add instruction in z14, called VMSL, to accelerate such workloads. Unlike their floating-point counterparts, there are a variety of integer fused multiply-add instruction designs. VMSL multiplies two pairs of radix $2^{56}$ inputs, sums the two results together with an additional 128-bit input, and stores the resulting 128-bit value in a vector register. In this paper, we will describe the unique features of VMSL, the ways in which it is inherently more efficient than alternative specifications, in particular by enabling multiple carry strategies. We will then look at the issues we encountered implementing Montgomery Modular Multiplication for Elliptic Curve Cryptography on z14, including radix choice, mixed radices, instruction selection to trade instruction count for latency, and VMSL-specific optimizations for Montgomery-friendly moduli. The best choices resulted in a 20% increase in throughput.
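A software model of the VMSL semantics described above might look as follows (an assumed simplification for illustration, not the exact instruction definition):

```python
# Model of VMSL as described above: two radix-2^56 limb products are
# summed with a 128-bit addend into a 128-bit vector register.
MASK56, MASK128 = (1 << 56) - 1, (1 << 128) - 1

def vmsl(a1, b1, a2, b2, acc):
    assert max(a1, b1, a2, b2) <= MASK56
    return (a1 * b1 + a2 * b2 + acc) & MASK128

# Each 56x56 product is at most 112 bits, so two products plus a 128-bit
# addend leave headroom -- the slack behind the carry strategies above.
print(hex(vmsl(MASK56, MASK56, MASK56, MASK56, 0)))
```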
Ahmet Can Mert, Erdinc Ozturk, Erkay Savas
ePrint Report
This study is a quest for the fastest hardware algorithms for the computation of the verifiable delay function (VDF) $a^{2^T} \bmod N$, proposed for use in various distributed protocols in which no party is assumed to compute it significantly faster than the other participants. To this end, we propose a class of modular squaring algorithms suitable for low-latency ASIC implementations. The proposed algorithms aim to achieve the highest levels of parallelization, a direction not explored in previous works in the literature, which usually pursue a more balanced optimization of speed and area. For this, we utilize redundant representations of integers and introduce three modular squaring algorithms that work with integers in redundant form: i) a Montgomery algorithm, ii) a memory-based algorithm, and iii) a direct reduction algorithm for fixed moduli. All algorithms enable circuit implementations of depth $O(\log k)$, where $k$ is the bit-size of the modulus $N$ in the VDF. We analyze and compare gate-level circuits of the proposed algorithms and provide estimates for their critical path delay and gate count.
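The reference semantics of the VDF being accelerated is just $T$ sequential modular squarings, as in the Python sketch below (toy modulus; the paper's contribution is shortening the circuit-level critical path of each squaring):

```python
# VDF evaluation: T sequential squarings mod N, believed inherently
# sequential -- each squaring sits on the latency-critical path.
def vdf_eval(a: int, T: int, N: int) -> int:
    for _ in range(T):
        a = a * a % N      # the operation the proposed circuits speed up
    return a

N = 101 * 103              # toy RSA modulus (use ~2048 bits in practice)
print(vdf_eval(5, 1000, N))
```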
Tapas Pal, Ratna Dutta
ePrint Report
In this work, we propose a slightly stronger variant of witness pseudorandom function (WPRF) defined by Zhandry (TCC 2016), that we call puncturable witness pseudorandom function (pWPRF). It is capable of generating a pseudorandom value corresponding to every statement of an NP language. We utilize the punctured technique to extend applications of WPRF. Specifically, we construct a semi-adaptively secure offline witness encryption (OWE) scheme using a pWPRF, an indistinguishability obfuscation (iO) and a symmetric-key encryption (SKE), which enables us to encrypt messages along with NP statements. We show that replacing iO with extractability obfuscation, the OWE turns out to be an extractable offline witness encryption scheme. To gain finer control over data, we further demonstrate how to convert our OWEs into offline functional witness encryption (OFWE) and extractable OFWE. The ciphertext size of current available OWEs grows polynomially with the size of messages, whereas all of our OWEs produce optimal size ciphertexts. Finally, we show that the WPRF of Pal et al. (ACISP 2019) can be extended to a pWPRF and an extractable pWPRF.
Eshan Chattopadhyay, Jesse Goodman, Vipul Goyal, Xin Li
ePrint Report
In a recent work, Kumar, Meka, and Sahai (FOCS 2019) introduced the notion of bounded collusion protocols (BCPs), in which $N$ parties wish to compute some joint function $f:(\{0,1\}^n)^N\to\{0,1\}$ using a public blackboard, but such that only $p$ parties may collude at a time. This generalizes well studied models in multiparty communication complexity, such as the number-in-hand (NIH) and number-on-forehead (NOF) models, which are just endpoints on this rich spectrum. We construct explicit hard functions against this spectrum, and achieve a tradeoff between collusion and complexity. Using this, we obtain improved leakage-resilient secret sharing schemes against bounded collusion protocols.
Our main tools for obtaining hard functions against BCPs are explicit constructions of leakage-resilient extractors against BCPs for a wide range of parameters. Kumar et al. (FOCS 2019) studied such extractors and called them cylinder intersection extractors. In fact, such extractors directly yield correlation bounds against BCPs. We focus on the following setting: the input to the extractor consists of $N$ independent sources of length $n$, and the leakage function $\mathsf{Leak}:(\{0,1\}^n)^N\to\{0,1\}^\mu\in\mathcal{F}$ is a BCP with some collusion bound $p$ and leakage (output length) $\mu$. While our extractor constructions are very general, we highlight some interesting parameter settings:
1. In the case when the input sources are uniform, and $p=0.99N$ parties collude, our extractor can handle $n^{\Omega(1)}$ bits of leakage, regardless of the dependence between $N,n$. The best NOF lower bound (i.e., $p=N-1$) on the other hand requires $N<\log n$ even to handle $1$ bit of leakage.
2. Next, we show that for the same setting as above, we can drop the entropy requirement to $k=$polylog $n$, while still handling polynomial leakage for $p=0.99N$. This resolves an open question about cylinder intersection extractors raised by Kumar et al. (FOCS 2019), and we find an application of such low entropy extractors in a new type of secret sharing.
We also provide an explicit compiler that transforms any function with high NOF (distributional) communication complexity into a leakage-resilient extractor that can handle polylogarithmic entropy and substantially more leakage against BCPs. Thus any improvement of NOF lower bounds will immediately yield better leakage-resilient extractors.
Using our extractors against BCPs, we obtain improved $N$-out-of-$N$ leakage-resilient secret sharing schemes. The previous best scheme from Kumar et al. (FOCS 2019) required share size to grow exponentially in the collusion bound, and thus cannot efficiently handle $p=\omega(\log N)$. Our schemes have no dependence of this form, and can thus handle collusion size $p=0.99N$.
Essam Ghadafi
ePrint Report
In this work we first provide a framework for defining a large subset of pairing-based digital signature schemes
which we call Partially Structure-Preserving Signature (PSPS) schemes. PSPS schemes are similar in nature to structure-preserving signatures, with the exception that in these schemes messages are scalars from $\mathbb{Z}^n_p$ instead of being source group elements. This class encompasses various existing schemes which have a number of desirable features that make them an ideal building block for many privacy-preserving cryptographic protocols. They include the widely-used schemes of Camenisch-Lysyanskaya (CRYPTO 2004) and Pointcheval-Sanders (CT-RSA 2016). We then provide various impossibility and lower-bound results for variants of this class. Our results include bounds for the signature and verification-key sizes as well as lower bounds for achieving strong unforgeability.
We also give a generic framework for transforming variants of PSPS schemes into structure-preserving ones. As part of our contribution, we also give a number of optimal PSPS schemes which may be of independent interest.
Our results aid in understanding the efficiency of pairing-based signature schemes and show a connection between this class of signature schemes and structure-preserving ones.
Lukas Aumayr, Oguzhan Ersoy, Andreas Erwig, Sebastian Faust, Kristina Hostakova, Matteo Maffei, Pedro Moreno-Sanchez, Siavash Riahi
ePrint Report
The widespread adoption of decentralized cryptocurrencies, such as Bitcoin or Ethereum, is currently hindered by their inherently limited transaction rate. One of the most prominent proposals to tackle this scalability issue is payment channels, which allow mutually distrusting parties to exchange an arbitrary number of payments in the form of off-chain authenticated messages while posting only a limited number of transactions onto the blockchain. Specifically, two transactions suffice, unless a dispute between these parties occurs, in which case more on-chain transactions are required to restore the correct balance. Unfortunately, popular constructions, such as the Lightning Network for Bitcoin, suffer from heavy communication complexity both off-chain and on-chain in case of dispute. Concretely, the communication overhead grows exponentially and linearly, respectively, in the number of applications that run in the channel.
In this work, we introduce and formalize the notion of generalized channels for Bitcoin-like cryptocurrencies. Generalized channels significantly extend the concept of payment channels so as to perform off-chain any operation supported by the underlying blockchain. Besides the gain in expressiveness, generalized channels outperform state-of-the-art payment channel constructions in efficiency, reducing the communication complexity and the on-chain footprint in case of disputes to linear and constant, respectively.
We provide a cryptographic instantiation of generalized channels that is compatible with Bitcoin, leveraging adaptor signatures -- a cryptographic primitive already used in the cryptocurrency literature but formalized as a standalone primitive in this work for the first time. We formally prove the security of our construction in the Universal Composability framework. Furthermore, we conduct an experimental evaluation, demonstrating the expressiveness and performance of generalized channels when used as building blocks for popular off-chain applications, such as channel splitting and payment-channel networks.
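To give a flavor of adaptor signatures, here is a minimal Schnorr-style sketch over a toy group (our illustration with hypothetical parameters; deployed constructions work over elliptic-curve groups such as secp256k1, and the paper formalizes the primitive in general): a pre-signature is valid only relative to a statement $T = g^t$, and completing it into a full signature necessarily reveals the witness $t$.

```python
# Schnorr-style adaptor signature sketch over a toy Schnorr group.
import hashlib, secrets

p, q, g = 2039, 1019, 4          # toy group: g has prime order q mod p

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q); X = pow(g, x, p)   # signing key pair
t = secrets.randbelow(q); T = pow(g, t, p)   # witness / statement pair

def pre_sign(m):
    k = secrets.randbelow(q)
    Rp = pow(g, k, p)
    c = H(Rp * T % p, X, m)                  # challenge binds R' * T
    return Rp, (k + c * x) % q               # pre-signature (R', s')

def complete(Rp, s_pre):
    return Rp * T % p, (s_pre + t) % q       # full signature (R, s)

def verify(m, R, s):                         # plain Schnorr verification
    return pow(g, s, p) == R * pow(X, H(R, X, m), p) % p

Rp, s_pre = pre_sign("pay 1 coin")
R, s = complete(Rp, s_pre)
assert verify("pay 1 coin", R, s)
assert (s - s_pre) % q == t                  # publishing s reveals t
```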
Zachary Zaccagni, Ram Dantu
ePrint Report
This paper provides the theoretical background for a new consensus model called Proof of Review (PoR), which extends Algorand's blockchain consensus model and reproduces the human mechanism for analyzing reviews through analysis and reputation. Our protocol derives the trustworthiness of a participant's reputation through a consensus of these reviews.
In this new protocol, we combine concepts from proof of stake and proof of reputation to ensure a blockchain system comes to consensus on an honest (non-malicious) congruent review. Additionally, we formally prove, via theorems and proofs, that this protocol provides further security by using a reputation-based stake instead of a token-based one. We introduce new concepts in using reputation as a stake, where reputation must be earned and can never be purchased, spent, or traded. The stake is calculated based on the reputation -- not tokens -- of a user in the system, proportional to the total reputation in the system at the beginning of each round. A round is a set of steps that conclude in a block being added to the blockchain. We also introduce blacklisting as a decisive action where, after a vote, an honest user will blacklist a leader (the participant elected with their proposed block) or reviewer for some egregious maliciousness, preventing further participation.
Five steps are defined that overlay Algorand's steps: (1) evaluation of reviews; (2) selection of the round's leader and their associated block containing reviews; (3) re-evaluation of reviews; (4) agreement on the block of reviews (block decision); (5) addition of the block to the ledger and reputation adjustment. At every step, verifiers are selected randomly for evaluating the reviews and are anonymous to each other. We provide properties related to reputation and formally prove them (Honest Leader, Blacklisting Liveliness and Correctness, Evaluation of the Evaluators' Confidence, Gradual Gain, and Swift Loss of Reputation). Finally, we discuss several types of attacks that are common for proof-of-stake and reputation systems, and how our Proof of Review model (in extending Algorand) addresses those attacks. Furthermore, PoR mitigates several kinds of fake reviews (e.g. spam, trolling, etc.) through analysis.
Preliminary experimental results show that payment transactions have similar completion times regardless of whether the stake consists of tokens or reputation, that the mechanism for adjusting reputation reflects expected behavior, and that the accuracy of the review evaluation (using sentiment analysis) is substantiated for the given dataset. With this new model, we let the technology evaluate a review and secure it on a blockchain using a reputation-based stake.
Proof of Review also provides third-party access to reviewers' reputation along with their respective evaluated reviews, a functionality that could benefit numerous industries including retail, academia, hospitality, and corporate settings. Applications could expedite and validate many different types of processes that rely on a form of review and the analysis of that review.
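A minimal sketch of the reputation-as-stake idea follows (our simplification, not the paper's sortition algorithm): each round, verifier seats are assigned with probability proportional to each participant's share of the total system reputation.

```python
# Reputation-weighted verifier selection (toy model): selection chance is
# proportional to a participant's share of total reputation this round.
import random

reputation = {"alice": 40.0, "bob": 10.0, "carol": 50.0}  # earned, not bought

def select_verifiers(rep, n_seats, seed):
    rng = random.Random(seed)            # seed fixed per round for agreement
    users, weights = zip(*rep.items())
    return rng.choices(users, weights=weights, k=n_seats)

print(select_verifiers(reputation, n_seats=5, seed="round-42"))
```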