IACR News
Here you can see all recent updates to the IACR webpage.
06 February 2020
Alberto Garoffolo, Dmytro Kaidalov, Roman Oliynykov
ePrint Report
Sidechains are an appealing innovation devised to enable blockchain scalability and extensibility. The basic idea is simple yet powerful: construct a parallel chain (a sidechain) with the desired features, and provide a way to transfer coins between the mainchain and the sidechain.
In this paper, we introduce Zendoo, a construction for Bitcoin-like blockchain systems that allows the creation of, and communication with, sidechains of different types without knowing their internal structure. We consider a parent-child relationship between the mainchain and sidechains, where sidechain nodes directly observe the mainchain while mainchain nodes only observe cryptographically authenticated certificates from sidechain maintainers. We use zk-SNARKs to construct a universal verifiable transfer mechanism that is used by sidechains.
Moreover, we propose a specific sidechain construction, named Latus, that can be built on top of this infrastructure, and realizes a decentralized verifiable blockchain system for payments. We leverage the recursive composition of zk-SNARKs to generate succinct proofs of sidechain state progression that are used to generate certificate validity proofs. This allows the mainchain to efficiently verify all operations performed in the sidechain without knowing any details about those operations.
Najmeh Soroush, Vincenzo Iovino, Alfredo Rial, Peter B. Roenne, Peter Y. A. Ryan
ePrint Report
In the standard setting of functional encryption (FE), we assume both the Central Authority (CA) and the encryptors to run their respective algorithms faithfully.
Badrinarayanan et al. [ASIACRYPT 2016] put forth the concept of verifiable FE, which essentially guarantees that dishonest encryptors and authorities, even when colluding, are not able to generate ciphertexts and tokens that give inconsistent results.
They also provide a compiler turning any perfectly correct FE into a verifiable FE, but do not give efficient constructions.
In this paper we improve on this situation by considering Inner-Product Encryption (IPE), which is a special case of functional encryption and a primitive that has attracted wide interest from both practitioners and researchers in the last decade. Specifically, we construct the first efficient verifiable IPE (VIPE) scheme for the inner-product functionality of Katz, Sahai, and Waters [EUROCRYPT 2008]. To instantiate the general construction of Badrinarayanan et al., we need to solve several additional challenges. In particular, we construct the first efficient perfectly correct IPE scheme. Our VIPE satisfies unconditional verifiability, whereas its privacy relies on the DLin assumption.
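For orientation, the Katz-Sahai-Waters inner-product functionality can be stated concisely (our paraphrase of the standard formulation; the vector space and notation are illustrative): a token for a vector $\vec{v} \in \mathbb{Z}_N^n$ decrypts a ciphertext with attribute vector $\vec{x} \in \mathbb{Z}_N^n$ exactly when

$$\langle \vec{x}, \vec{v} \rangle \;=\; \sum_{i=1}^{n} x_i v_i \;\equiv\; 0 \pmod{N}.$$

Verifiability then demands that no colluding authority and encryptor can produce tokens and ciphertexts whose decryptions are inconsistent with a single such predicate.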
Hao Chen, Ilia Iliashenko, Kim Laine
ePrint Report
We demonstrate how to reduce the memory overhead of somewhat homomorphic encryption (SHE) while computing on numerical data. We design a hybrid SHE scheme that exploits the packing algorithm of the HEAAN scheme and the variant of the FV scheme by Bootland et al. The ciphertext size of the resulting scheme is 3-18 times smaller than in HEAAN when computing polynomial functions of depth 4 on a small number of packed data values. Furthermore, our scheme has smaller ciphertexts even at larger packing capacities (256-2048 values).
Léo Ducas, Thijs Laarhoven, Wessel P.J. van Woerden
ePrint Report
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways:
- We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis-Laarhoven-De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019].
- We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities.
- We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker-Gama-Joux [Cryptology ePrint Archive, 2015]. Using $2^{0.185d + o(d)}$ memory, we can solve a single CVPP instance in $2^{0.264d + o(d)}$ time.
- We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using $2^{0.208d + o(d)}$ memory, we can heuristically solve CVPP instances in $2^{0.234d + o(d)}$ amortized time, for batches of size at least $2^{0.058d + o(d)}$.
Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
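To make the exponents quoted above concrete, the sketch below evaluates the leading terms $2^{c \cdot d}$ for an example dimension; the $o(d)$ terms are dropped, so these are rough orders of magnitude for illustration only, not predictions made by the paper.

```python
# Leading-term exponents quoted in the abstract (o(d) terms dropped).
regimes = {
    "single CVPP (memory 2^{0.185d})": (0.185, 0.264),
    "batched CVPP (memory 2^{0.208d})": (0.208, 0.234),
}

d = 384  # example lattice dimension, an illustrative choice
for name, (mem_c, time_c) in regimes.items():
    print(f"{name}: memory ~ 2^{mem_c * d:.0f}, time ~ 2^{time_c * d:.0f} per instance")

# Minimum batch size for the amortized regime, per the abstract:
print(f"requires batches of size >= 2^{0.058 * d:.0f}")
```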
Zvika Brakerski, Nico Döttling
ePrint Report
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape. In many of its applications the so-called ``LWE secret'' is not sampled uniformly, but comes from a distribution with some min-entropy. This variant, known as ``Entropic LWE'', has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius.
In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly composite moduli, and via a black-box impossibility result for arbitrary moduli.
Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself. This is simply the probability of recovering a random sample $s$ from this distribution, given $s+e$, where $e$ is Gaussian noise (i.e. the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
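The lossiness property described above lends itself to a toy numerical illustration: draw a secret from a finite support, add Gaussian noise, and attempt nearest-neighbour decoding. All parameters below are illustrative and one-dimensional; the paper of course works with high-dimensional secrets.

```python
import random

# Toy view of "secrets as an error-correcting code for Gaussian noise":
# sample s from a small support S, observe s + e, decode to the nearest
# element of S. A high recovery rate means S is far from lossy at this
# noise level. Support and sigma are arbitrary illustrative choices.
support = [0, 7, 15, 40, 52, 90]
sigma = 3.0

trials = 10_000
recovered = 0
for _ in range(trials):
    s = random.choice(support)
    observed = s + random.gauss(0, sigma)
    decoded = min(support, key=lambda c: abs(observed - c))
    recovered += (decoded == s)

print(f"Pr[recover s from s + e] ~ {recovered / trials:.3f}")
```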
Saeid Sahraei, Salman Avestimehr
ePrint Report
We introduce InfoCommit, a protocol for polynomial commitment and verification. InfoCommit consists of two phases: an initial commitment phase and an evaluation phase. During the commitment phase, the verifier and the prover engage in a private two-party computation algorithm so that the verifier extracts a private verification key. In the evaluation phase, the verifier is interested in learning the evaluations of the polynomial at several input points. InfoCommit has four main features. Firstly, the verifier is able to detect, with high probability, if the prover has responded with evaluations of the same polynomial that he initially committed to. Secondly, InfoCommit provides rigorous privacy guarantees for the prover: upon observing the initial commitment and the responses provided by the prover to $m$ evaluation requests, the verifier only learns $O(m^2)$ symbols about the coefficients of the polynomial. Thirdly, the verifiability guarantee is unconditional and without the need for a trusted party, while ``bounded storage'' is the only assumption underlying the privacy of the algorithm. In particular, both properties hold regardless of the computational power of the two parties. Lastly, InfoCommit is doubly-efficient in the sense that in the evaluation phase, the verifier runs in $O(\sqrt{d})$ time and the prover runs in $O(d)$ time, where $d-1$ is the degree of the polynomial.
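The $O(\sqrt{d})$ verifier cost quoted above is suggestive of the standard reshaping of $d$ coefficients into a $\sqrt{d} \times \sqrt{d}$ grid; we include the generic identity only as intuition, not as a description of InfoCommit's internals (assume $\sqrt{d}$ is an integer):

$$p(x) \;=\; \sum_{k=0}^{d-1} c_k x^k \;=\; \sum_{i=0}^{\sqrt{d}-1} x^{i\sqrt{d}} \left( \sum_{j=0}^{\sqrt{d}-1} c_{i\sqrt{d}+j}\, x^{j} \right),$$

so evaluating $p$ reduces to combining $\sqrt{d}$ partial sums, each itself an inner product of length $\sqrt{d}$.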
Andrew Reinders, Rafael Misoczki, Santosh Ghosh, Manoj Sastry
ePrint Report
BIKE (Bit-flipping Key Encapsulation) is a promising candidate in the NIST Post-Quantum Cryptography Standardization process. It is a code-based cryptosystem that enjoys a simple definition, well-understood underlying security, and interesting performance. The most critical step in this cryptosystem consists of correcting errors in a QC-MDPC linear code. The BIKE team proposed variants of the Bit-Flipping Decoder for this step for Rounds 1 and 2 of the standardization process. In this paper, we propose an alternative decoder that is more friendly to hardware implementations, leading to a latency-area performance comparable to the literature while introducing power side-channel resilience. We also show that our design can accelerate all key generation, encapsulation, and decapsulation operations using very few common logic building blocks.
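The critical step named above, QC-MDPC error correction, is classically done by bit-flipping. For orientation only, here is a minimal sketch of the textbook algorithm; the paper's decoder, its threshold rule, and its hardware mapping differ from this.

```python
import numpy as np

def bit_flip_decode(H, s, max_iters=20):
    """Textbook bit-flipping decoder: a sketch of the classical algorithm,
    not the authors' hardware-friendly variant or BIKE's tuned thresholds.

    H: binary parity-check matrix (r x n), s: binary syndrome of length r.
    Returns an error vector e with H @ e == s (mod 2), or None on failure."""
    H = H.astype(np.int64)
    e = np.zeros(H.shape[1], dtype=np.int64)
    for _ in range(max_iters):
        syndrome = (s + H @ e) % 2          # residual syndrome
        if not syndrome.any():
            return e
        upc = H.T @ syndrome                # unsatisfied parity checks per bit
        e[upc == upc.max()] ^= 1            # flip bits hitting the max-count rule
    return None                             # decoding failure
```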
Benny Applebaum, Zvika Brakerski, Sanjam Garg, Yuval Ishai, Akshayaram Srinivasan
ePrint Report
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing *a round preserving reduction* to the task of secure *2-party* computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value as black-box use of primitives tends to lead to more efficient constructions.
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed ``non-compact'' variant of 2-party *secret sharing* from 2-round OT.
Xavier Boyen, Thomas Haines, Johannes Mueller
ePrint Report
Mix nets are often used to provide privacy in modern security protocols, through shuffling. Some of the most important applications, such as secure electronic voting, require mix nets that are verifiable. In the literature, numerous techniques have been proposed to make mix nets verifiable. Some of them have also been employed for securing real political elections.
With the looming possibility of quantum computers and their threat to cryptosystems based on classical hardness assumptions, there is significant pressure to migrate mix nets to post-quantum alternatives. At present, no verifiable and practical post-quantum mix net with external auditing is available as a drop-in replacement of existing constructions. In this paper, we give the first such construction.
We propose a verifiable decryption mix net which solely employs practical lattice-based primitives. We formally prove that our mix net provides a high level of verifiability, and even accountability which guarantees that misbehaving mix servers can also be identified. Verification is executed by a (temporarily trusted) public auditor whose role can easily be distributed. We have implemented our completely lattice-based mix net from the bottom up, and provide detailed benchmarks which demonstrate its practicality for real-world post-quantum-secure e-voting.
Antoine Delignat-Lavaud, Cédric Fournet, Bryan Parno, Jonathan Protzenko, Tahina Ramananandro, Jay Bosamiya, Joseph Lallemand, Itsaka Rakotonirina, Yi Zhou
ePrint Report
We investigate the security of the QUIC record layer, as standardized by the IETF in draft version 24. This version features major differences compared to Google's original protocol and prior IETF drafts. We model packet and header encryption, which uses a custom construction for privacy. To capture its goals, we propose a security definition for authenticated encryption with semi-implicit nonces. We show that QUIC uses an instance of a generic construction parameterized by a standard AEAD-secure scheme and a PRF-secure cipher. We formalize and verify the security of this construction in F*. The proof uncovers interesting limitations of nonce confidentiality, due to the malleability of short headers and the ability to choose the number of least significant bits included in the packet counter. We propose improvements that simplify the proof and increase robustness against strong attacker models.
In addition to the verified security model, we also give a concrete functional specification for the record layer, and prove that it satisfies important functionality properties (such as successful decryption of encrypted packets) after fixing more errors in the draft. We then provide a high-performance implementation of the record layer that we prove to be memory safe, correct with respect to our concrete specification (inheriting its functional correctness properties), and secure with respect to our verified model. To evaluate this component, we develop a provably-safe implementation of the rest of the QUIC protocol. Our record layer achieves nearly 2 GB/s throughput, and our QUIC implementation's performance is within 21% of an unverified baseline.
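To illustrate the "semi-implicit nonce" notion: in our reading of the IETF drafts, the AEAD nonce XORs the static IV with the full packet counter, while only a sender-chosen number (1-4) of its least significant bytes travels, encrypted, in the header. A minimal sketch:

```python
def aead_nonce(iv: bytes, packet_number: int) -> bytes:
    """Left-pad the packet number to the IV length and XOR with the IV."""
    pn = packet_number.to_bytes(len(iv), "big")
    return bytes(a ^ b for a, b in zip(iv, pn))

def header_pn(packet_number: int, pn_len: int) -> bytes:
    """Only pn_len least significant bytes appear in the header; the
    receiver reconstructs the rest from the largest number seen so far."""
    return packet_number.to_bytes(8, "big")[-pn_len:]

iv = bytes.fromhex("6b26114b9cba2b63a9e8dd4f")  # illustrative 12-byte IV
print(aead_nonce(iv, 0x2a).hex())
print(header_pn(0x2a, 1).hex())
```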
Madhurima Mukhopadhyay, Palash Sarkar, Shashank Singh, Emmanuel Thome
ePrint Report
The present work reports progress in discrete logarithm computation for the general medium prime case using the function field sieve algorithm. A new record discrete logarithm computation over a 1051-bit field having a 22-bit characteristic was performed. This computation builds on and implements previously known techniques. Analysis indicates that the relation collection and descent steps are within reach for fields with 32-bit characteristic and moderate extension degrees. It is the linear algebra step which will dominate the computation time for any discrete logarithm computation over such fields.
Keywords: finite field, discrete logarithm, function field sieve.
05 February 2020
TU Wien, Vienna, Austria
Job Posting
As part of a special measure towards increasing female employment in scientific positions and promoting young researchers, the Faculty of Informatics at the TU Wien invites applications for two full time Assistant Professor positions (tenure track) for women.
The candidates should have an excellent research record, experience in working with industry, relevant postdoctoral experience, and a compelling research vision.
Applications are welcome from any field of computer science. We are particularly interested in candidates working in areas that will complement our existing expertise and lead to fruitful collaborations with other members of the faculty and TU Wien in general. In particular, researchers in security, privacy, and cryptography are encouraged to apply.
Deadline for applications is March 8, 2020. More information (including a dedicated brochure) is available at https://informatics.tuwien.ac.at/job-01201
Closing date for applications:
Contact: Matteo Maffei
Amalfi, Italy, 9 September - 11 September 2020
Event Calendar
Event date: 9 September to 11 September 2020
Submission deadline: 19 April 2020
Notification: 14 June 2020
04 February 2020
Patrick Karl, Michael Tempelmeier
ePrint Report
The "Competition for Authenticated Encryption: Security, Applicability, and Robustness" (CAESAR) was the first cryptographic
competition that required designers to use a mandatory hardware API for their implementations.
Recently, a similar hardware API for the NIST Lightweight Cryptography (LWC) project was proposed.
Both APIs feature an accompanying development package to help designers implementing the API.
In this paper, we have an in-depth look on these packages. We analyze the features of both packages, discuss their resource utilization, and demonstrate their impact on Ascon128, SpoC-64, and Gimli implementations on a modern Artix-7 FPGA. Finally, we provide some tweaks and enhancements to further optimize the development package for the LWC API.
Alex Davidson, Shuichi Katsumata, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
ePrint Report
Constrained pseudorandom functions (CPRFs) allow learning ``constrained'' PRF keys that can evaluate the PRF on a subset of the input space, or based on some predicate. First introduced by Boneh and Waters [AC13], Kiayias et al. [CCS13], and Boyle et al. [PKC14], CPRFs have been shown to be a useful cryptographic primitive with many applications. These applications often require CPRFs to be adaptively secure, which allows the adversary to learn PRF values and constrained keys in an arbitrary order. However, there is no known construction of adaptively secure CPRFs based on a standard assumption in the standard model for any non-trivial class of predicates.
Moreover, even if we rely on strong tools such as indistinguishability obfuscation (IO), the state-of-the-art construction of adaptively secure CPRFs in the standard model only supports the limited class of NC1 predicates.
In this work, we develop new adaptively secure CPRFs for various predicates from different types of assumptions in the standard model. Our results are summarized below.
- We construct adaptively secure and $O(1)$-collusion-resistant CPRFs for $t$-conjunctive normal form ($t$-CNF) predicates from one-way functions (OWFs) where $t$ is a constant. Here, $O(1)$-collusion-resistance means that we can allow the adversary to obtain a constant number of constrained keys. Note that $t$-CNF includes bit-fixing predicates as a special case.
- We construct adaptively secure and single-key CPRFs for inner-product predicates from the learning with errors (LWE) assumption. Here, single-key security means that we only allow the adversary to learn one constrained key. Note that inner-product predicates include $t$-CNF predicates for a constant $t$ as a special case. Thus, this construction supports a more expressive class of predicates than the first construction, though it loses collusion-resistance and relies on a stronger assumption.
- We construct adaptively secure and $O(1)$-collusion-resistant CPRFs for all circuits from the LWE assumption and indistinguishability obfuscation (IO).
The first and second constructions are the first CPRFs for any non-trivial predicates to achieve adaptive security outside of the random oracle model and without relying on strong cryptographic assumptions. Moreover, the first construction is also the first to achieve any notion of collusion-resistance in this setting. In addition, we prove that the first and second constructions satisfy weak $1$-key privacy, which roughly means that a constrained key does not reveal the corresponding constraint. The third construction is an improvement over previous adaptively secure CPRFs for less expressive predicates based on IO in the standard model.
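As background (and emphatically not one of the constructions above), the folklore GGM tree already yields a constrained key for the simplest predicate, "all inputs except one": publishing the seeds on the co-path of an input $x$ allows evaluation everywhere except at $x$. A sketch, with HMAC-SHA256 standing in for a length-doubling PRG:

```python
import hashlib
import hmac

def prg_halves(seed: bytes):
    """Length-doubling PRG; HMAC-SHA256 is an illustrative instantiation."""
    return (hmac.new(seed, b"0", hashlib.sha256).digest(),
            hmac.new(seed, b"1", hashlib.sha256).digest())

def ggm_eval(seed: bytes, bits: str) -> bytes:
    """GGM PRF: walk the binary tree along the bits of the input."""
    for bit in bits:
        seed = prg_halves(seed)[int(bit)]
    return seed

def puncture(key: bytes, x: str):
    """Constrained key for 'input != x': the seeds hanging off x's path."""
    copath, node = [], key
    for i, bit in enumerate(x):
        left, right = prg_halves(node)
        sibling = right if bit == "0" else left
        copath.append((x[:i] + ("1" if bit == "0" else "0"), sibling))
        node = left if bit == "0" else right
    return copath

def eval_punctured(copath, y: str) -> bytes:
    for prefix, seed in copath:
        if y.startswith(prefix):
            return ggm_eval(seed, y[len(prefix):])
    raise ValueError("cannot evaluate at the punctured point")

key = b"\x00" * 32
pk = puncture(key, "0110")
assert eval_punctured(pk, "0111") == ggm_eval(key, "0111")
```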
Ran Canetti, Pratik Sarkar, Xiao Wang
ePrint Report
Oblivious Transfer (OT) is an important building block for multi-party computation (MPC). Since OT requires expensive public-key operations, efficiency-conscious MPC protocols use an OT extension (OTE) mechanism [Beaver 96, Ishai et al. 03] to provide the functionality of many independent OT instances with the same sender and receiver, using only symmetric-key operations plus a few instances of some base OT protocol. Consequently, there is significant interest in constructing OTE-friendly protocols, namely protocols that, when used as the base OT for OTE, result in extended OTs that are both round-efficient and cost-efficient. We present the most efficient OTE-friendly protocol to date.
Specifically:
- Our base protocol incurs only 3 exponentiations per instance.
- Our base protocol results in a 3 round extended OT protocol.
- The extended protocol is UC secure in the Observable Random Oracle Model (ROM) under the CDH assumption.
For comparison, the state-of-the-art base OTs that result in 3-round OTE are proven secure only in the programmable ROM, and require 4 exponentiations under Interactive DDH or 6 exponentiations under DDH [Masny-Rindal 19]. We also implement our protocol and benchmark it against the Simplest OT protocol [Chou and Orlandi, Latincrypt 2015], which is the most efficient and widely used OT protocol but is not known to suffice for OTE. The computation cost is roughly the same in both cases. Interestingly, our base OT is also 3 rounds. However, we slightly modify the extension mechanism (which normally adds a round) so as to preserve the number of rounds in our case.
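For readers new to OT extension, the classic IKNP mechanism that OTE-friendliness refers to can be sketched as below. The base-OT phase is replaced by a direct assignment (marked), since instantiating it efficiently is exactly what base-OT protocols like the one in this paper are for; the toy is therefore insecure and purely illustrative.

```python
import hashlib
import secrets

def H(i: int, row: int, k: int) -> int:
    """Correlation-robust hash, modelled with SHA-256 (128-bit output)."""
    data = i.to_bytes(4, "big") + row.to_bytes(k // 8, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:16], "big")

def iknp_demo(x0, x1, r, k=128):
    """Toy IKNP-style extension: m OTs from k 'base OTs' plus hashing only."""
    m = len(r)
    t = [secrets.randbits(k) for _ in range(m)]   # receiver's random rows
    s = secrets.randbits(k)                       # sender's secret choice bits
    # Base-OT phase (STUBBED): a real protocol runs k base OTs on the
    # columns of T so that the sender learns exactly q_i = t_i XOR r_i*s.
    q = [t[i] ^ (s if r[i] else 0) for i in range(m)]
    # Sender masks both messages of each extended OT instance.
    pads = [(x0[i] ^ H(i, q[i], k), x1[i] ^ H(i, q[i] ^ s, k)) for i in range(m)]
    # Receiver unmasks its chosen message with H(i, t_i).
    return [pads[i][r[i]] ^ H(i, t[i], k) for i in range(m)]

m = 4
x0 = [secrets.randbits(128) for _ in range(m)]
x1 = [secrets.randbits(128) for _ in range(m)]
r = [1, 0, 1, 1]
assert iknp_demo(x0, x1, r) == [(x1 if b else x0)[i] for i, b in enumerate(r)]
```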
Lucca Hirschi, Lara Schmid, David Basin
ePrint Report
The results of electronic elections should be verifiable so that any cheating is detected. To support this, many protocols employ an electronic bulletin board (BB) for publishing data that can be read by participants and used to perform verifiability checks. We demonstrate that the BB is itself a security-critical component that has often been treated far too casually in previous designs and analyses. In particular, we present novel attacks on the e-voting protocols Helios, Civitas, and Belenios that violate some of their central security claims under realistic system assumptions. These attacks were outside the scope of prior security analyses as their verifiability notions assume an idealized BB.
To enable the analysis of protocols under realistic assumptions about the BB, we introduce a new verifiability definition that is applicable to arbitrary BBs. We identify a requirement, called final-agreement, and formally prove that it is sufficient and, in most cases, necessary to achieve verifiability. We then propose a BB protocol that satisfies final-agreement under weak, realistic trust assumptions and provide a machine-checked proof thereof. Our protocol can be used as a replacement for existing BBs, enabling verifiability under much weaker trust assumptions.
Christoph Dobraunig, Florian Mendel, Bart Mennink
ePrint Report
We analyze the authenticated encryption algorithm of ORANGE, a submission to the NIST lightweight cryptography standardization process. We show that it is practically possible to craft forgeries out of two observed transmitted messages that encrypt the same plaintext. The authors of ORANGE have confirmed the attack, and they discuss a fix for this attack in their second-round submission of ORANGE to the NIST lightweight cryptography competition.
Ryan Amos, Marios Georgiou, Aggelos Kiayias, Mark Zhandry
ePrint Report
We define the notion of one-shot signatures, which are signatures where any secret key can be used to sign only a single message, and then self-destructs. While such signatures are of course impossible classically, we construct one-shot signatures using quantum no-cloning. In particular, we show that such signatures exist relative to a classical oracle, which we can then heuristically obfuscate using known indistinguishability obfuscation schemes.
We show that one-shot signatures have numerous applications for hybrid quantum/classical cryptographic tasks, where all communication is required to be classical, but local quantum operations are allowed. Applications include one-time signature tokens, quantum money with classical communication, decentralized blockchain-less cryptocurrency, signature schemes with unclonable secret keys, non-interactive certifiable min-entropy, and more. We thus position one-shot signatures as a powerful new building block for novel quantum cryptographic protocols.
Frank Schuhmacher
ePrint Report
We suggest a relaxed freshness paradigm for challenge-response authentication in fields of application where challenger and responder are tightly coupled and authentication takes place in a friendly environment. Replay attacks are not feasible under this premise, and freshness can be relaxed to relative freshness: no refresh is required as long as all previously tested responders were authentic. One field of application is anti-counterfeiting of electronic device components. The main contribution is a formal security proof of an authentication scheme with choked refresh. A practical implication is the lifetime increase of stored challenge-response pairs. This is a considerable advantage for solutions based on hardware intrinsic security. For solutions based on symmetric keys, it opens the possibility for the challenger to use challenge-response pairs instead of secret keys, a cheap way to reduce the risk of key disclosure.
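One possible reading of the choked-refresh policy, sketched below with illustrative names and an HMAC instantiation of the response function (not the paper's scheme): a stored challenge-response pair is reused as long as every tested responder authenticates, and the challenger only advances to a fresh pair after a failed verification.

```python
import hashlib
import hmac
import secrets

class Challenger:
    """Challenge-response verification with choked refresh (sketch only)."""

    def __init__(self, key: bytes, n_pairs: int = 16):
        # Pre-computed challenge-response pairs, e.g. filled by a back-end
        # that knows the device key.
        self._pairs = []
        for _ in range(n_pairs):
            c = secrets.token_bytes(16)
            self._pairs.append((c, hmac.new(key, c, hashlib.sha256).digest()))
        self._idx = 0

    def verify(self, responder) -> bool:
        challenge, expected = self._pairs[self._idx]
        ok = hmac.compare_digest(responder(challenge), expected)
        if not ok and self._idx + 1 < len(self._pairs):
            self._idx += 1  # refresh only after a non-authentic responder
        return ok

device_key = secrets.token_bytes(32)
genuine = lambda c: hmac.new(device_key, c, hashlib.sha256).digest()
challenger = Challenger(device_key)
assert challenger.verify(genuine) and challenger.verify(genuine)  # pair reused
```

As the abstract notes, the practical effect is that each stored pair lasts far longer than under strict freshness.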