## IACR News

Here you can see all recent updates to the IACR webpage.

#### 22 July 2021

###### Kyoungbae Jang, Gyeong Ju Song, Hyunji Kim, Hyeokdong Kwon, Wai-Kong Lee, Zhi Hu, Hwajeong Seo

ePrint Report
Optimizing arithmetic operations as quantum circuits, in order to apply quantum algorithms such as Shor's algorithm and Grover's search algorithm to cryptanalysis, is an active research field in cryptographic implementation. In particular, reducing quantum resources is important for efficient implementations.
In this paper, binary field ($GF(2^n)$) Montgomery multiplication in quantum circuits is presented.
We utilize the bit-level Montgomery algorithm to efficiently compute the Montgomery product $C = A \cdot B \cdot r^{-1}$ in the binary field $GF(2^n)$.
We also present an efficient Montgomery multiplication quantum circuit for the case where the modulus of $GF(2^n)$ is specified.
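
The bit-level Montgomery algorithm the paper builds on can be illustrated classically. Below is a minimal Python sketch (our own illustration, not the paper's quantum circuit) of the Montgomery product with $r = x^n$, cross-checked against schoolbook multiplication in $GF(2^4)$:

```python
def mont_mul_gf2n(A, B, f, n):
    """Bit-serial Montgomery product C = A*B*x^(-n) mod f in GF(2^n).
    Elements are integers whose bit i holds the coefficient of x^i;
    f is the irreducible modulus (degree n, constant term 1)."""
    C = 0
    for i in range(n):
        if (A >> i) & 1:      # C += a_i * B  (XOR = addition over GF(2))
            C ^= B
        if C & 1:             # make C divisible by x ...
            C ^= f
        C >>= 1               # ... then divide by x
    return C

def gf2n_mul(A, B, f, n):
    """Plain multiplication in GF(2^n), for cross-checking."""
    C = 0
    while B:
        if B & 1:
            C ^= A
        B >>= 1
        A <<= 1
        if (A >> n) & 1:      # reduce when the degree reaches n
            A ^= f
    return C

# GF(2^4) with f = x^4 + x + 1: x * (x+1) * x^(-4) = x,
# since x * x^4 = x^5 ≡ x * (x+1) = x^2 + x (because x^4 ≡ x + 1 mod f)
assert mont_mul_gf2n(0b0010, 0b0011, 0b10011, 4) == 0b0010
```

Multiplying the Montgomery product by $x^n \bmod f$ recovers the ordinary field product, which makes exhaustive checking easy in a small field.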

###### Nicholas Franzese, Jonathan Katz, Steve Lu, Rafail Ostrovsky, Xiao Wang, Chenkai Weng

ePrint Report
We show a constant-overhead interactive zero-knowledge (ZK) proof system for RAM programs, that is, a ZK proof in which the communication complexity as well as the running times of the prover and verifier scale linearly in the size of the memory $N$ and the running time $T$ of the underlying RAM program. Besides yielding an asymptotic improvement of prior work, our implementation gives concrete performance improvements for RAM-based ZK proofs. In particular, our implementation supports ZK proofs of private read/write accesses to 64 MB of memory ($2^{24}$ 32-bit words) using only 34 bytes of communication per access, a more than $80\times$ improvement compared to the recent BubbleRAM protocol. We also design a lightweight RISC CPU that can efficiently emulate the MIPS-I instruction set, and for which our ZK proof communicates only $\approx 320$ bytes per cycle, more than $10\times$ less than the BubbleRAM CPU. In a 100 Mbps network, we can perform zero-knowledge executions of our CPU (with 64 MB of main memory and 4 MB of program memory) at a clock rate of 6.6 kHz.

###### Donghang Lu, Albert Yu, Aniket Kate, Hemanta Maji

ePrint Report
While the practicality of secure multi-party computation (MPC) has been extensively analyzed and improved over the past decade, we are hitting the limits of efficiency with the traditional approaches of representing the computed functionalities as generic arithmetic or Boolean circuits. This work follows the design principle of identifying and constructing fast and provably-secure MPC protocols to evaluate useful high-level algebraic abstractions, thus improving the efficiency of all applications relying on them. We present Polymath, a constant-round secure computation protocol suite for the secure evaluation of (multi-variate) polynomials of scalars and matrices, functionalities essential to numerous data-processing applications. Using suitable precomputation and the high degree of parallelism prevalent in modern computing environments, Polymath makes the latency of secure polynomial evaluations of scalars and matrices independent of the polynomial degree and matrix dimensions.

We implement our protocols over the HoneyBadgerMPC library and apply them to two prominent secure computation tasks: privacy-preserving evaluation of decision trees and privacy-preserving evaluation of Markov processes. For the decision tree evaluation problem, we demonstrate the feasibility of evaluating high-depth decision tree models in a general n-party setting. For the Markov process application, we demonstrate that Polymath can compute large powers of transition matrices with better online time and less communication.

###### Yuval Ishai, Hang Su, David J. Wu

ePrint Report
Zero-knowledge succinct arguments of knowledge (zkSNARKs) enable efficient privacy-preserving proofs of membership for general NP languages. Our focus in this work is on post-quantum zkSNARKs, and in particular on minimizing proof size. Currently, there is a $1000\times$ gap in the proof size between the best pre-quantum constructions and the best post-quantum ones. Here, we develop and implement new lattice-based zkSNARKs in the designated-verifier preprocessing model. With our construction, after an initial preprocessing step, a proof for an NP relation of size $2^{20}$ is just over 16 KB. Our proofs are $10.3\times$ shorter than previous post-quantum candidates. Compared to previous lattice-based zkSNARKs (also in the designated-verifier preprocessing model), we obtain a $42\times$ reduction in proof size and a $60\times$ reduction in the prover's running time, all while achieving a much higher level of soundness. Finally, compared to the shortest pre-quantum zkSNARKs by Groth (Eurocrypt 2016), the proof size in our lattice-based construction is $131\times$ longer, but the prover's running time is $1.2\times$ faster.

Our construction follows the general blueprint of Bitansky et al. (TCC 2013) and Boneh et al. (Eurocrypt 2017) of combining a linear probabilistically checkable proof (linear PCP) together with a linear-only vector encryption scheme. We develop a concretely-efficient lattice-based instantiation of this compiler by considering quadratic extension fields of moderate characteristic and using linear-only vector encryption over rank-2 module lattices.

###### Sayantan Mukherjee, Avishek Majumder

ePrint Report
Broadcast Encryption allows a sender to send a message to more than one receiver. In a typical broadcast encryption scheme, the broadcaster decides the privileged set, i.e., which users can decrypt a particular ciphertext. Gritti et al. (IJIS'16) introduced a new primitive called Broadcast Encryption with Dealership (BED), where the dealer/wholesaler decides the privileged set. This rather recently introduced primitive allows a wholesaler to buy content from the broadcaster and sell it to users. Following their construction, three more constructions of broadcast encryption with dealership have been proposed to date. Among them, the first showed the BED construction of Gritti et al. (IJIS'16) to be insecure.

None of the existing works fully identifies the requirements of a BED scheme. We first identify and propose a new security requirement that has not been considered before. After formally defining a BED scheme, we show simple pairing-based attacks on all previous constructions, rendering all of them useless. We then give the first secure BED construction in composite-order pairing groups. This construction achieves constant-size ciphertexts and secret keys but only selectively secure message hiding. We then give our second construction from Li and Gong's (PKC'18) anonymous broadcast encryption. This construction achieves adaptively secure message hiding but has ciphertext size dependent on the size of the privileged set. Finally, we propose our third construction, which achieves constant-size ciphertexts in the standard model and adaptive message-hiding security.

###### Mugurel Barcau, Cristian Lupascu, Vicentiu Pasol, George C. Turcas

ePrint Report
The present work investigates morphisms between encryption schemes, called bridges. By associating an encryption scheme with every such bridge, we define and examine their security. Inspired by the bootstrapping procedure used by Gentry to produce fully homomorphic encryption schemes, we exhibit a general recipe for the construction of bridges and give various examples. We also present an example of a bridge that does not fall into this category.

###### Yi-Fan Tseng, Chun-I Fan, Zi-Cheng Liu

ePrint Report
Nowadays, it is convenient for people to store their data on clouds. To protect their privacy, people tend to encrypt their data before uploading it to the cloud. Due to the widespread use of cloud services, public key searchable encryption is necessary for users to search encrypted files efficiently and correctly. However, existing public key searchable encryption schemes supporting monotonic queries suffer either from infeasibility in keyword testing or from inefficiency, such as a heavy computational cost of testing or large ciphertexts and trapdoors. In this work, we first propose a novel and efficient anonymous key-policy attribute-based encryption (KP-ABE) scheme. Then, by applying Shen et al.'s generic construction to the proposed anonymous KP-ABE, we obtain an efficient and expressive public key searchable encryption scheme which, to the best of our knowledge, achieves the best testing performance among existing such schemes: only two pairings are needed in testing. We also implement our scheme and others in Python to compare their performance. The implementation results show that our scheme achieves the best testing performance, and that its ciphertexts and trapdoors are smaller than those of most existing schemes.

###### Michał Andrzejczak, Kris Gaj

ePrint Report
Lattice sieving is currently the leading class of algorithms for solving the shortest vector problem over lattices. The computational difficulty of this problem is the basis for constructing secure post-quantum public-key cryptosystems based on lattices. In this paper, we present a novel massively parallel approach for solving the shortest vector problem using lattice sieving and hardware acceleration.
We combine previously reported algorithms with a proper caching strategy and develop a hardware architecture. The main advantage of the proposed approach is eliminating the overhead of the data transfer between a CPU and a hardware accelerator.
We believe this is the first such architecture reported in the literature to date, and we predict it will achieve up to 8 times higher throughput than a multi-core high-performance CPU.
The presented methods can be adapted to other sieving algorithms that are hard to implement in FPGAs due to communication and memory bottlenecks.

###### Alexander May, Julian Nowakowski, Santanu Sarkar

ePrint Report
Let $(N,e)$ be an RSA public key, where $N=pq$ is the product of equal bitsize primes $p,q$. Let $d_p, d_q$ be the corresponding secret CRT-RSA exponents.

Using a Coppersmith-type attack, Takayasu, Lu and Peng (TLP) recently showed that one obtains the factorization of $N$ in polynomial time, provided that $d_p, d_q \leq N^{0.122}$. Building on the TLP attack, we show the first Partial Key Exposure attack on short secret exponent CRT-RSA. Namely, let $N^{0.122} \leq d_p, d_q \leq N^{0.5}$. Then we show that a constant known fraction of the least significant bits (LSBs) of both $d_p, d_q$ suffices to factor $N$ in polynomial time.

Naturally, the larger $d_p,d_q$, the more LSBs are required. For example, if $d_p, d_q$ are of size $N^{0.13}$, then we have to know roughly a $\frac 1 5$-fraction of their LSBs, whereas for $d_p, d_q$ of size $N^{0.2}$ we already require knowledge of a $\frac 2 3$-LSB fraction. Finally, if $d_p, d_q$ are of full size $N^{0.5}$, we have to know all of their bits. Notice that as a side product of our result we obtain a heuristic deterministic polynomial time factorization algorithm on input $(N,e,d_p,d_q)$.
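
The side product, factoring $N$ given $(N,e,d_p,d_q)$, rests on a classical observation that already works with $d_p$ alone: since $e \cdot d_p \equiv 1 \pmod{p-1}$, we have $a^{e d_p} \equiv a \pmod p$ for every $a$, while modulo $q$ the exponent behaves essentially randomly. A toy sketch (the primes below are our own tiny illustrative choices, not parameters from the paper):

```python
from math import gcd

# Toy CRT-RSA instance (tiny, illustrative primes only)
p, q, e = 1009, 1013, 65537
N = p * q
d_p = pow(e, -1, p - 1)          # secret CRT exponent d_p = e^(-1) mod (p-1)

# e*d_p ≡ 1 (mod p-1) forces a^(e*d_p) ≡ a (mod p) for every a,
# while mod q the exponent is essentially random, so the gcd exposes p.
a = 2
assert gcd(pow(a, e * d_p, N) - a, N) == p
```

With real parameters the same gcd trick succeeds for a random base $a$ with overwhelming probability; the hard part, which the paper addresses, is recovering enough of $d_p, d_q$ in the first place.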

###### Lior Rotem, Gil Segev

ePrint Report
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathcal{H}}$ denotes the number of random-oracle queries issued by the attacker.

We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time.

In the concrete context of the discrete logarithm problem, already Shoup's original proof shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly-plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $(t^2/p)^{2/3}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{2/3}$, respectively.
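
To get a quantitative feel for the improvement, the two bounds can be compared at illustrative parameters (a 256-bit group order, a $2^{64}$-time attacker, $2^{30}$ random-oracle queries; these numbers are our example, not the paper's):

```python
# log2 of: group order p, attack time t, random-oracle queries q_H
logp, logt, logqh = 256, 64, 30

log_attack = 2 * logt - logp          # best known attack success: t^2/p = 2^-128

for name, exponent in [("square-root bound", 1 / 2), ("high-moment bound", 2 / 3)]:
    ident = exponent * log_attack                  # identification scheme
    sig = exponent * (logqh + 2 * logt - logp)     # signature scheme
    print(f"{name}: identification 2^{ident:.1f}, signature 2^{sig:.1f}")
```

At these parameters the identification bound improves from $2^{-64}$ to roughly $2^{-85.3}$, closing much of the gap to the best known attack's $2^{-128}$ success probability.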

###### Jiaxin Pan, Benedikt Wagner

ePrint Report
We construct a short and adaptively secure identity-based signature scheme tightly based on the well-known Short Integer Solution (SIS) assumption.
Although identity-based signature schemes can be tightly constructed either from standard signature schemes secure against adaptive corruptions in the multi-user setting or from a two-level hierarchical identity-based encryption scheme, neither is known with short signature size and tight security based on the SIS assumption. Here ``short'' means that the signature size is independent of the message length, in contrast to tree-based (tight) signatures.
Our approach consists of two steps. First, we give two tight generic transformations (one with random oracles and the other without) from non-adaptively secure identity-based signature schemes to adaptively secure ones. Our idea extends the similar transformation for digital signature schemes. Second, we construct a non-adaptively secure identity-based signature scheme based on the SIS assumption in the random oracle model.

###### Aniruddha Biswas, Palash Sarkar

ePrint Report
The influence of a set of variables on a Boolean function has three separate definitions in the literature, the first due to Ben-Or and Linial (1989), the second due to Fischer et al. (2002) and Blais (2009) and the third due to Tal (2017). The goal of the present work is to carry out a comprehensive study of the notion of influence of a set of variables on a Boolean function. To this end, we introduce a definition of this notion using the auto-correlation function. A modification of the definition leads to the notion of pseudo-influence. Somewhat surprisingly, it turns out that the auto-correlation based definition of influence is equivalent to the definition introduced by Fischer et al. (2002) and Blais (2009) and the notion of pseudo-influence is equivalent to the definition of influence considered by Tal (2017). Extensive analysis of influence and pseudo-influence as well as the Ben-Or and Linial notion of influence is carried out and the relations between these notions are established.

###### Kemal Bicakci, Kemal Ulker, Yusuf Uzunay

ePrint Report
White-box cryptography aims at providing protection against a powerful adversary who is in complete control of the execution environment of the cryptographic operation. Most existing white-box implementations focus on symmetric encryption. In particular, we are not aware of any previous work on general-purpose digital signature schemes secure against white-box attackers. We present white-box implementations of hash-based signatures such that security against white-box attackers does not depend on the availability of a white-box secure pseudorandom function (beyond a general one-way function). We also present a hash tree-based solution for one-time passwords that is secure in a white-box attacker context.
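
For context, the hash-based paradigm the authors build on starts from one-time signatures such as Lamport's scheme, which needs nothing beyond a one-way hash function. A minimal (standard, non-white-box) sketch:

```python
import hashlib
import os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Two rows of 256 random preimages; the public key is their hashes.
    sk = [[os.urandom(32) for _ in range(256)] for _ in range(2)]
    pk = [[H(x) for x in row] for row in sk]
    return sk, pk

def msg_bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal the preimage selected by each bit of H(msg). One-time only:
    # signing a second message would leak preimages from both rows.
    return [sk[b][i] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[b][i]
               for i, (b, s) in enumerate(zip(msg_bits(msg), sig)))
```

Many-time hash-based schemes (e.g. Merkle trees over such one-time keys) build on exactly this structure; the paper's contribution concerns making that structure survive a white-box attacker.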

###### Stephen Holmes, Liqun Chen

ePrint Report
Not all cryptocurrencies are the same. Today they share a common quantum vulnerability through their use of non-quantum-safe Elliptic Curve Digital Signature Algorithm (ECDSA) signatures, yet they face very different risks of quantum attack. The risk of attack for a cryptocurrency depends on a number of identified factors, such as the block interval time, the vulnerability to an attack that delays the time for an unprocessed transaction to be completed, and the behaviour of cryptocurrency users, which can increase the cost of a quantum computer attack. Shor's algorithm can be used to break ECDSA signatures with a quantum computer. This research addresses two questions: when will a quantum computer be powerful enough to execute Shor's algorithm, and how fast would a quantum computer need to be to break a specific cryptocurrency? In this paper we observe that by benchmarking the speed of circuits and the time for quantum addition on quantum computers, we can determine when there is a potential threat to a specific cryptocurrency.

###### Cláudia Brito, Pedro Ferreira, Bernardo Portela, Rui Oliveira, João Paulo

ePrint Report
Privacy and security are prime obstacles to the wider adoption of machine learning services offered by cloud computing providers. Namely, trusting users' sensitive data to a third-party infrastructure, vulnerable to both external and internal malicious attackers, restricts many companies from leveraging the scalability and flexibility offered by cloud services.
We propose Soteria, a system for distributed privacy-preserving machine learning that combines the Apache Spark system and its machine learning library (MLlib) with the confidentiality features provided by Trusted Execution Environments (e.g., Intel SGX).
Soteria supports two main designs, each offering specific guarantees in terms of security and performance.
The first encapsulates most of the computation done by Apache Spark on a secure enclave, thus offering stronger security. The second fine-tunes the Spark operations that must be done at the secure enclave to reduce the needed trusted computing base, and consequently the performance overhead, at the cost of an increased attack surface.
An extensive evaluation of Soteria, with classification, regression, dimensionality reduction, and clustering algorithms, shows that our system outperforms state-of-the-art solutions, reducing their performance overhead by up to 41%. Moreover, we show that privacy-preserving machine learning is achievable while providing strong security guarantees.

###### Shibam Ghosh, Orr Dunkelman

ePrint Report
Division properties, introduced by Todo at Eurocrypt 2015, are an extension of the square attack (also called the saturation attack or integral cryptanalysis) and are extremely useful in cryptanalysis. Given their importance, a large number of works have tried to offer automatic tools for finding division properties, primarily based on MILP or SAT/SMT. This paper studies better modeling techniques for finding division properties using Constraint Programming and SAT/SMT-based automatic tools. We use the fact that the Quine-McCluskey algorithm produces a concise CNF representation corresponding to the division trail table of an Sbox. As a result, we can offer significantly more compact models, which allow SAT and Constraint Programming tools to outperform previous results.

To show the strength of our new approach, we look at the NIST lightweight candidates KNOT and Ascon. We show several new distinguishers with a lower data complexity for 17-round KNOT-256, KNOT-384 and 19-round KNOT-512. In addition, for 5-round Ascon, we get a lower-data distinguisher than the previous division-based results.

Finally, we revisit the method of extending an integral distinguisher by composing linear layers at the input and output. We provide a formulation to find the optimal number of linear combinations that need to be considered. As a result of this new formulation, we prove that 18-round KNOT-256 and KNOT-384 have no integral distinguisher using the conventional division property, and we show this more efficiently than previous methods.

###### James Bartusek

ePrint Report
We construct a constant-round composable protocol for blind and verifiable classical delegation of quantum computation, and show applications to secure quantum computation with classical communication. In particular, we give the first maliciously-secure multi-party protocols for BQP (bounded-error quantum polynomial-time) computation that only require a single party to have quantum capabilities. Assuming QLWE (the quantum hardness of learning with errors), we obtain the following.
- A six-round protocol between one quantum server and multiple classical clients in the CRS (common random string) model.
- A three-round protocol between one quantum server and multiple classical clients in the PKI (public-key infrastructure) + QRO (quantum random oracle) model.
- A two-message protocol between a quantum sender and a classical receiver (a quantum non-interactive secure computation protocol), in the QRO model.

The only previously known approach for obtaining composable security of blind classical verification of quantum computation (Gheorghiu and Vidick, FOCS 2019) has inverse polynomial security and requires polynomially many rounds of interaction.

The property we require of classical verification of quantum computation that enables composability is malicious blindness, which stipulates that the prover does not learn anything about the verifier's delegated computation, even if it is able to observe whether or not the verifier accepted the interaction. To construct a protocol with malicious blindness, we use a classical verification protocol for sampBQP computation (Chung et al., arXiv 2020), which in general has inverse polynomial soundness error, to prove honest evaluation of QFHE (quantum fully-homomorphic encryption) ciphertexts with negligible soundness error. Obtaining a constant-round protocol requires a strong parallel repetition theorem for classical verification of quantum computation, which we show following the "nearly orthogonal projector" proof strategy (Alagic et al., TCC 2020).

###### Edward Eaton, Douglas Stebila, Roy Stracovsky

ePrint Report
Anonymity networks, such as the Tor network, are highly decentralized and make heavy use of ephemeral identities. Both of these characteristics run in direct opposition to a traditional public key infrastructure, so entity authentication in an anonymity network can be a challenge. One system that Tor relies on is key-blinded signatures, which allow public keys to be transformed so that authentication is still possible, but the identity public key is masked. This is used in Tor during onion service descriptor lookup, in which a .onion address is resolved to a rendezvous point through which a client and an onion service can communicate. The mechanism currently used is based on elliptic curve signatures, so a post-quantum replacement will be needed.

We consider four fully post-quantum key-blinding schemes, and prove the unlinkability and unforgeability of all schemes in the random-oracle model. We provide a generic framework for proving unlinkability of key-blinded schemes by reducing to two properties, signing with oracle reprogramming and independent blinding. Of the four schemes, two are based on Round 3 candidates in NIST's post-quantum signature standardization process, Dilithium and Picnic. The other two are based on much newer schemes, CSI-FiSh and LegRoast, which have more favourable characteristics for blinding. CSI-FiSh is based on isogenies and boasts a very small public key plus signature sizes, and its group action structure allows for key-blinding in a straightforward way. LegRoast uses the Picnic framework, but with the Legendre symbol PRF as a symmetric primitive, the homomorphic properties of which can be exploited to blind public keys in a novel way. Our schemes require at most small changes to parameters, and are generally almost as fast as their unblinded counterparts, except for blinded Picnic, for which signing and verifying is roughly half as fast.
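
The key-blinding idea itself is easiest to see in a discrete-log setting like Tor's current elliptic-curve scheme, where blinding is multiplicative; the post-quantum schemes above need different techniques, but the goal is the same. A toy Schnorr-style sketch of our own with deliberately tiny, insecure parameters:

```python
# Toy multiplicative key blinding in a prime-order subgroup (p = 2q + 1).
# Parameters are deliberately tiny and insecure; for illustration only.
p, q, g = 23, 11, 2              # g generates the subgroup of order q mod p

sk = 7                           # long-term identity secret key
pk = pow(g, sk, p)               # identity public key g^sk

b = 5                            # per-epoch blinding factor
sk_b = (sk * b) % q              # blinded secret key
pk_b = pow(pk, b, p)             # blinded public key pk^b = g^(sk*b)

# The blinded pair is itself a valid keypair, so ordinary signing and
# verification work against pk_b without ever revealing pk.
assert pk_b == pow(g, sk_b, p)
```

For a uniformly random blinding factor, the blinded public key is uniform in the group, which is the intuition behind unlinkability; proving the analogous property for lattice-, isogeny-, and MPC-in-the-head-based schemes is where the paper's framework comes in.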

###### Thom Wiggers, Simona Samardjiska

ePrint Report
The best algorithms for the Learning Parity with Noise (LPN) problem require sub-exponential time and memory. This often makes memory, and not time, the limiting factor for practical attacks, which seem to be out of reach even for relatively small parameters. In this paper, we try to bring the state of the art in solving LPN closer to the practical realm. We improve upon the existing algorithms by modifying the Coded-BKW algorithm to work under various memory constraints. We correct and expand previous analysis and experimentally verify our findings. As a result, we were able to mount practical attacks on the largest parameters reported to date using only $2^{39}$ bits of memory.
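
For context, an LPN instance consists of noisy inner products of random vectors with a secret over GF(2). A minimal sample generator (our own illustration, unrelated to the paper's Coded-BKW implementation):

```python
import random

def lpn_samples(secret, m, tau, rng):
    """Return m LPN samples (a, <a, secret> + e mod 2) with noise rate tau."""
    n = len(secret)
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        e = 1 if rng.random() < tau else 0          # Bernoulli(tau) noise bit
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % 2
        samples.append((a, b))
    return samples

# With tau = 0 the samples are noiseless and consistent with the secret.
rng = random.Random(2021)
s = [1, 0, 1, 1, 0, 1]
for a, b in lpn_samples(s, 20, 0.0, rng):
    assert b == sum(ai * si for ai, si in zip(a, s)) % 2
```

With nonzero noise rate, Gaussian elimination no longer recovers the secret, and BKW-style algorithms trade a huge number of samples and memory for tolerable noise amplification, which is exactly the memory bottleneck the paper targets.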

###### Jan Bobolz, Fabian Eidens, Raphael Heitjohann, Jeremy Fell

ePrint Report
We present a cryptographic Java library called Cryptimeleon designed for prototyping and benchmarking privacy-preserving cryptographic schemes.
The library is geared towards researchers wanting to implement their schemes (1) as a sanity check for their constructions, and (2) for benchmark numbers in their papers.
To ease the implementation process, Cryptimeleon "speaks the language" of paper writers.
It offers a similar degree of abstraction as is commonly used in research papers.
For example, bilinear groups can be used as the familiar black-box and Schnorr-style proofs can be described on the level of Camenisch-Stadler notation.
It employs several optimizations (such as multi-exponentiation) transparently, allowing the developer to phrase computations as written in the paper instead of having to conform to an artificial API for better performance.

Cryptimeleon implements (among others) finite fields, elliptic curve groups and pairings, hashing, Schnorr-style zero-knowledge proofs, accumulators, digital signatures, secret sharing, group signatures, attribute-based encryption, and other modern cryptographic constructions.

In this paper, we present the library, its capabilities, and explain important design decisions.