## IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage.

#### 17 June 2021

###### Vellore, India, 6 January - 8 January 2022
Event Calendar
Event date: 6 January to 8 January 2022
###### Rome, Italy, 20 June - 23 June 2022
Event Calendar
Event date: 20 June to 23 June 2022
###### Aarhus University, Department of Computer Science; Aarhus, Denmark
Job Posting

A fully-funded PhD scholarship is available at the Cryptography and Security Group (https://cs.au.dk/research/cryptography-and-security/) at Aarhus University.

Project description. The PhD candidate will develop new algorithms and implementation techniques to accelerate public-key cryptography in both the classical and post-quantum settings. The project includes designing arithmetic algorithms that are friendlier to parallelization, more precise complexity analysis of existing techniques, and aspects of formal verification to ensure correctness and real-world implementation security.

The PhD candidate will also be involved in other educational activities, such as serving as a teaching assistant in courses related to their expertise.

Qualifications. We are looking for dedicated and enthusiastic applicants, preferably with a Master’s degree in Computer Science/Engineering, Mathematics, or a related discipline. A BSc degree with demonstrated research experience is also welcome. A background in cryptography or formal verification is required. Practical experience with software development will be seen as a plus. Further requirements are fluency in English, good reporting and organization skills, and the ability to work independently.

Application process: You can apply online at https://phd.nat.au.dk/for-applicants/apply-here. Choose the August 2021 Call, with deadline 1 August 2021 at noon (11.59 AM CEST), and the corresponding project.

Closing date for applications:

Contact: Diego F. Aranha, Associate Professor of Computer Science, dfaranha (at) cs.au.dk

#### 16 June 2021

###### University of Luxembourg
Job Posting
The University of Luxembourg aspires to be one of Europe’s most highly regarded universities with a distinctly international and interdisciplinary character. It fosters the cross-fertilisation of research and teaching, is relevant to its country, is known worldwide for its research and teaching in targeted areas, and is establishing itself as an innovative model for contemporary European Higher Education. The Faculty of Science, Technology and Medicine (FSTM) contributes multidisciplinary expertise in the fields of Mathematics, Physics, Engineering, Computer Science, Life Sciences and Medicine. Through its dual mission of teaching and research, the FSTM seeks to generate and disseminate knowledge and train new generations of responsible citizens, in order to better understand, explain and advance the society and environment we live in.

Your role: CryptoLux is a team of cryptographers and security researchers interested in applied cryptography, cryptanalysis, privacy, network security, and cryptographic blockchains. It is led by Prof. Alex Biryukov and affiliated with the Department of Computer Science (DCS) and the Security and Trust centre (SnT). You will perform the following tasks:

- Prepare a doctoral thesis in cryptography and IT security
- Publish research papers and give talks at top international venues
- Provide guidance to MSc students and serve as a TA for classes, depending on the needs of the team
- Organize science outreach activities, e.g. science for kids

Potential thesis topics:

- Cryptanalysis and design of cryptographic primitives
- Lightweight block ciphers, hash functions, authenticated encryption schemes
- Financial cryptography (security of distributed ledgers, smart contracts)
- Privacy-enhancing technologies (Tor-like networks, privacy for cryptocurrencies, blockchains)
- Design of proofs of work, resource-hard functions, commitment schemes
- Side-channel attacks and countermeasures
- White-box cryptography

What we expect from you:

- MSc degree in Computer Science, Applied Mathematics, or Electrical Engineering with outstanding grades (GPA > 85%)
- Strong mathematical and/or algorithmic CS background (Olympiads, CTFs a plus)

Closing date for applications:

Contact: Alex Biryukov (alex.biryukov@uni.lu)

###### ISAE engineering school, Toulouse, France
Job Posting
Background: In late 2016, NIST issued a call for proposals for the standardization of cryptographic systems resistant to a quantum computer, covering encryption, key-exchange and signature primitives. The standardization process is currently in its third phase. Each submission comes with a software implementation targeting standard security levels for widespread applications, such as e-commerce.

Topic: Adaptation of post-quantum primitives to constrained environments and FPGA implementation. During this thesis, the successful candidate will study code-based and/or lattice-based proposals, and propose modifications to the protocols (thus resulting in alternative schemes, with new proofs to be provided, etc.) so that their hardware implementation can be improved. The candidate will then propose such implementations on a commodity FPGA. The development will be done in C, and High-Level Synthesis will be used to automatically transform the C implementation into VHDL. As a consequence, the candidate does not need to be proficient in VHDL.
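
For a flavor of the arithmetic involved, most lattice-based candidates reduce to polynomial multiplication in a quotient ring. The toy Python sketch below (schoolbook multiplication in Z_q[x]/(x^n − 1), illustrative parameters only, not any submission's actual code) shows the kind of core operation an HLS-friendly C implementation would target:

```python
def poly_mul_mod(a, b, q):
    """Schoolbook multiplication in Z_q[x]/(x^n - 1) (toy parameters).

    Real schemes use much larger rings and faster algorithms; this only
    illustrates the operation an FPGA implementation would accelerate.
    """
    n = len(a)
    r = [0] * n
    for i in range(n):
        for j in range(n):
            # exponents wrap around modulo n because x^n = 1 in this ring
            r[(i + j) % n] = (r[(i + j) % n] + a[i] * b[j]) % q
    return r
```

Fixed-bound nested loops like these map naturally to hardware pipelines, which is one reason synthesizing VHDL from C is attractive for this kind of kernel.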

Closing date for applications:

Contact: Applicants should express their interest before July 15th, 23h59 (CEST) by email to: carlos.aguilar-melchor@isae-supaero.fr, arnaud.dion@isae-supaero.fr, philippe.gaborit@unilim.fr

###### Daniel J. Bernstein, Billy Bob Brumley, Ming-Shing Chen, Nicola Tuveri
ePrint Report
Google's CECPQ1 experiment in 2016 integrated a post-quantum key-exchange algorithm, newhope1024, into TLS 1.2. The Google-Cloudflare CECPQ2 experiment in 2019 integrated a more efficient key-exchange algorithm, ntruhrss701, into TLS 1.3.

This paper revisits the choices made in CECPQ2, and shows how to achieve higher performance for post-quantum key exchange in TLS 1.3 using a higher-security algorithm, sntrup761. Previous work had indicated that ntruhrss701 key generation was much faster than sntrup761 key generation, but this paper makes sntrup761 key generation much faster by generating a batch of keys at once.
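
The key-generation bottleneck in sntrup761 is a ring inversion, and the standard way to amortize inversions across a batch is Montgomery's trick: one inversion plus O(n) multiplications invert n elements at once. A minimal sketch over a prime field (a toy setting to convey the idea, not the paper's actual ring arithmetic):

```python
def batch_invert(xs, p):
    """Montgomery's batch-inversion trick modulo a prime p.

    Inverts all elements of xs with a single modular inversion plus
    O(len(xs)) multiplications.
    """
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):
        prefix[i + 1] = prefix[i] * x % p     # running products x0*...*xi
    inv = pow(prefix[n], -1, p)               # the single inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):
        out[i] = prefix[i] * inv % p          # (x0*...*x_{i-1}) / (x0*...*xi)
        inv = inv * xs[i] % p                 # peel off x_i for the next step
    return out
```

The batch size trades latency for throughput: keys are still handed out one at a time at the protocol layer, which is why the batching stays invisible to TLS.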

Batch key generation is invisible at the TLS protocol layer, but raises software-engineering questions regarding the difficulty of integrating batch key exchange into existing TLS libraries and applications. This paper shows that careful choices of software layers make it easy to integrate fast post-quantum software, including batch key exchange, into TLS with minor changes to TLS libraries and no changes to applications.

As a demonstration of feasibility, this paper reports successful integration of its fast sntrup761 library, via a lightly patched OpenSSL, into an unmodified web browser and an unmodified TLS terminator. This paper also reports TLS 1.3 handshake benchmarks, achieving more TLS 1.3 handshakes per second than any software included in OpenSSL.
###### Hannah Keller, Helen Möllering, Thomas Schneider, Hossein Yalame
ePrint Report
In many machine learning applications, training data consists of sensitive information from multiple sources. Privacy-preserving machine learning using secure computation enables multiple parties to compute on their joint data without disclosing their inputs to each other. In this work, we focus on clustering, an unsupervised machine learning technique that partitions data into groups. Previous works on privacy-preserving clustering often leak information and focus on the k-means algorithm, which provides only limited clustering quality and flexibility. Additionally, the number of clusters k must be known in advance. We analyze several prominent clustering algorithms' capabilities and their compatibility with secure computation techniques to create an efficient, fully privacy-preserving clustering implementation superior to k-means. We find affinity propagation to be the most promising candidate and securely implement it using various multi-party computation techniques. Privacy-preserving affinity propagation does not require any input parameters and consists of operations that are relatively efficient with secure computation. As threat models, we consider passive security as well as active security with an honest and dishonest majority. We offer the first comparison of privacy-preserving clustering between these scenarios, enabling an understanding of the exact trade-offs between them. Based on the clustering quality and the computational and communication costs, privacy-preserving affinity propagation offers a good trade-off between quality and efficiency for practical privacy-preserving clustering.
###### Michel Abdalla, Manuel Barbosa, Peter B. Rønne, Peter Y.A. Ryan, Petra Sala
ePrint Report
The J-PAKE protocol is a Password Authenticated Key Establishment protocol whose security rests on Diffie-Hellman key establishment and Non-Interactive Zero-Knowledge proofs. It has seen widespread deployment and has previously been proven secure, including forward secrecy, in a game-based model. In this paper we show that this earlier proof can be re-cast in the Universal Composability framework, thus yielding a stronger result. We also investigate the extension of such proofs to a significantly more efficient variant of the original J-PAKE, which drops the second-round Non-Interactive Zero-Knowledge proofs and which we call sJ-PAKE. Adapting the proofs to this lightweight variant proves highly non-trivial, and requires novel proof strategies and the introduction of the algebraic group model. This means that J-PAKE implementations can be made more efficient by simply deleting parts of the code while retaining security under stronger assumptions. We also investigate the security of two further new variants that combine the efficiency gains of dropping the second-round NIZK proofs with the gains achieved by two earlier, lightweight variants: RO-J-PAKE and CRS-J-PAKE. The earlier variants replaced the second Diffie-Hellman terms from each party by either a hash term or a CRS term, thus removing the need for half of the NIZK proofs in the first round. The efficiency and security assumptions of these variants are compared.
###### Daniel Günther, Maurice Heymann, Benny Pinkas, Thomas Schneider
ePrint Report
Multi-Server Private Information Retrieval (PIR) is a cryptographic primitive that allows a client to securely query a database entry from $n \geq 2$ non-colluding servers, which learn no information about the query. Highly efficient PIR could be used for large-scale applications like Compromised Credential Checking (C3) (USENIX Security'19), which allows users to check whether their credentials have been leaked in a data breach. However, state-of-the-art PIR schemes are not efficient enough for fast online responses at this scale.

In this work, we introduce Client-Independent Preprocessing (CIP) PIR that moves $\frac{n-1}{n}$ of the online computation to a local preprocessing phase suitable for efficient batch precomputations. The security and online performance of CIP-PIR improve linearly with the number of servers $n$. We show that large-scale applications like C3 with PIR are practical by implementing our CIP-PIR scheme using a parallelized CPU implementation and further accelerating the huge amount of XOR operations with GPUs. To the best of our knowledge, this is the first multi-server PIR scheme whose preprocessing phase is completely independent of the client, and where security and online performance simultaneously increase with the number of servers $n$. In addition, CIP-PIR is the first multi-server PIR scheme that is accelerated by GPUs. It achieves an improvement up to factor $2.1\times$ over our CPU-based implementation. Moreover, a client can access a database entry of a 25 GByte database within less than 1 second.
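
For intuition, classic XOR-based 2-server PIR works as sketched below (a toy illustration, not CIP-PIR itself, which adds the client-independent preprocessing and GPU acceleration on top of this kind of scheme):

```python
import secrets

def pir_query(db_size, index):
    # Client: a uniformly random selection vector, and a copy with the
    # queried position flipped. Each vector alone reveals nothing about index.
    q0 = [secrets.randbelow(2) for _ in range(db_size)]
    q1 = q0.copy()
    q1[index] ^= 1
    return q0, q1

def pir_answer(db, q):
    # Server: XOR of the database entries selected by the query vector.
    ans = 0
    for bit, entry in zip(q, db):
        if bit:
            ans ^= entry
    return ans

def pir_reconstruct(a0, a1):
    # Client: all entries selected by both vectors cancel, leaving the target.
    return a0 ^ a1
```

The servers' cost is dominated by XOR scans over the whole database, which is the kind of work the paper precomputes in batches and offloads to GPUs.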
###### Oriol Farràs, Jordi Ribes-González
ePrint Report
In $1$-out-of-$q$ Oblivious Transfer (OT) protocols, a sender is able to send one of $q\ge 2$ messages to a receiver, all while being oblivious to which message was actually transferred. Moreover, the receiver only learns one of these messages.

Oblivious Transfer combiners take $n$ instances of OT protocols as input, and produce a single protocol that is secure if sufficiently many of the $n$ original OT implementations are secure.

We present a generalization of an OT combiner protocol that was introduced by Cascudo et al. (TCC'17). We show a general $1$-out-of-$q$ OT combiner that is valid for any prime power $q\ge 2$. Our OT combiner is based on secret sharing schemes that are of independent interest.

Our construction achieves the strong notion of perfect security against active $(\mathcal{A},\mathcal{B})$-adversaries. For $q\geq n$, we present a single-use, $n$-server, $1$-out-of-$q$ OT combiner that is perfectly secure against active adversaries that corrupt a minority of servers. The amount of bits exchanged during the protocol is $(q^2+q+1)n\log q$.
###### Alice Pellet-Mary, Damien Stehlé
ePrint Report
The 25 year-old NTRU problem is a mainstream security foundation in public-key cryptography. However, from a reduction perspective, its relative hardness compared to other problems on Euclidean lattices is not well-understood. Its decision version reduces to the search Ring-LWE problem, but this only provides a hardness upper bound. We provide two answers to the long-standing open problem of providing reduction-based evidence of the hardness of the NTRU problem. First, we reduce the worst-case approximate Shortest Vector Problem over ideal lattices to an average-case search variant of the NTRU problem. Second, we reduce another average-case search variant of the NTRU problem to the decision NTRU problem.
###### Marek Broll, Federico Canale, Nicolas David, Antonio Florez-Gutierrez, Gregor Leander, María Naya-Plasencia, Yosuke Todo
ePrint Report
Differential-linear attacks are a cryptanalysis family that has recently benefited from various technical improvements, mainly in the context of ARX constructions. In this paper we push further this refinement, proposing several new improvements. In particular, we develop a better understanding of the related correlations, improve upon the statistics by using the LLR, and finally use ideas from conditional differentials for finding many right pairs. We illustrate the usefulness of these ideas by presenting the first 7.5-round attack on Chaskey. Finally, we present a new competitive attack on 12 rounds of Serpent, and as such the first cryptanalytic progress on Serpent in 10 years.
###### Christof Beierle, Patrick Derbez, Gregor Leander, Gaëtan Leurent, Håvard Raddum, Yann Rotella, David Rupprecht, Lukas Stennes
ePrint Report
This paper presents the first publicly available cryptanalytic attacks on the GEA-1 and GEA-2 algorithms. Instead of providing full 64-bit security, we show that the initial state of GEA-1 can be recovered from as little as 65 bits of known keystream (with at least 24 bits coming from one frame) in time $2^{40}$ GEA-1 evaluations and using 44.5 GiB of memory. The attack on GEA-1 is based on an exceptional interaction of the deployed LFSRs and the key initialization, which is highly unlikely to occur by chance. This unusual pattern indicates that the weakness is intentionally hidden to limit the security level to 40 bits by design.

In contrast, for GEA-2 we did not discover the same intentional weakness. However, using a combination of algebraic techniques and list merging algorithms we are still able to break GEA-2 in time $2^{45.1}$ GEA-2 evaluations. The main practical hurdle is the required knowledge of 1600 bytes of keystream.
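
For readers unfamiliar with the building block: both ciphers generate keystream from linear feedback shift registers. A toy Fibonacci LFSR is sketched below (illustrative width and taps only, not GEA-1's actual registers or output filter):

```python
def lfsr(state, taps, nbits, out_len):
    """Toy Fibonacci LFSR over GF(2); illustrative, not GEA-1's registers."""
    out = []
    for _ in range(out_len):
        out.append(state & 1)              # emit the low bit
        fb = 0
        for t in taps:                     # feedback = XOR of the tap bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out
```

Because each output bit is a linear function of the initial state, interactions between several such registers and the key loading can be analyzed with linear algebra, which is the kind of structure the GEA-1 attack exploits.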
###### Hemi Leibowitz, Haitham Ghalwash, Ewa Syta, Amir Herzberg
ePrint Report
In this work, we study Certificate Transparency (CT), an important standardized extension of classical Web-PKI, deployed and integrated into major browsers. We evaluate the properties of the published design of CT-v1 (RFC 6962), and identify five major concerns, which persist in drafts for CT-v2. Most significantly, CT-v1 fails to achieve the main goal of the original CT publications, namely security with No Trusted Third Party (NTTP), and it does not ensure transparency for revocation status. Several recent works address some of these issues, but at the cost of significant, non-evolutionary deviation from the existing standards and ecosystem.

In response, we present CTng, a redesign of CT. CTng achieves security, including transparency of certificate and revocation status, with No Trusted Third Party, while preserving client privacy, allowing offline client validation of certificates, and facilitating resiliency to DoS. CTng is efficient and practical, and provides a possible next step in the evolution of PKI standards. We present a security analysis, and an evaluation of our experimental open-source prototype shows that CTng imposes acceptable communication and storage overhead.
###### Olivier Bronchain, Gaëtan Cassiers, François-Xavier Standaert
ePrint Report
In this note, we describe an attack against the ANSSI Side-Channel Analysis Database (ASCAD), which recovers the full key using the leakage of a single masked block cipher execution. The attack uses a new open-source Side-Channel Analysis Library (SCALib), which allows running the leakage profiling and attacking in less than 5 minutes. It exploits well-known techniques, yet improves significantly over the best known attacks against ASCAD. We conclude by questioning the impact of these experimental findings for side-channel security evaluations.
###### Alexandra Boldyreva, Tianxin Tang
ePrint Report
We study the problem of privacy-preserving approximate kNN search in an outsourced environment — the client sends the encrypted data to an untrusted server and later can perform secure approximate kNN search and updates. We design a security model and propose a generic construction based on locality-sensitive hashing, symmetric encryption, and an oblivious map. The construction provides very strong security guarantees, not only hiding the information about the data, but also the access, query, and volume patterns. We implement, evaluate efficiency, and compare the performance of two concrete schemes based on an oblivious AVL tree and an oblivious BSkiplist.
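
As an illustration of the LSH component, random-hyperplane hashing for cosine similarity maps nearby vectors to the same bucket with high probability (this is one standard LSH family chosen here for illustration; the paper's family and parameters may differ):

```python
import random

def random_hyperplanes(dim, k, seed=0):
    # k random Gaussian hyperplanes through the origin.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(k)]

def lsh_hash(point, planes):
    # Bucket label = sign pattern of the k projections: vectors with a
    # small angle between them disagree on few signs.
    return tuple(1 if sum(p * x for p, x in zip(h, point)) >= 0 else 0
                 for h in planes)
```

Hashing a query and scanning its bucket yields approximate nearest neighbors; in the outsourced setting, it is precisely this bucket structure (the access and volume pattern) that the oblivious map must hide from the server.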
###### Tim Beyne
ePrint Report
Improved attacks on generic small-domain Feistel ciphers with alternating round tweaks are obtained using linear cryptanalysis. This results in practical distinguishing and message-recovery attacks on the United States format-preserving encryption standard FF3-1 and the South-Korean standards FEA-1 and FEA-2. The data-complexity of the proposed attacks on FF3-1 and FEA-1 is $O(N^{r/2 - 1.5})$, where $N^2$ is the domain size and $r$ is the number of rounds. For example, FF3-1 with $N = 10^3$ can be distinguished from an ideal tweakable block cipher with advantage $\ge 1/10$ using $2^{23}$ encryption queries. Recovering the left half of a message with similar advantage requires $2^{24}$ data. The analysis of FF3-1 serves as an interesting real-world application of (generalized) linear cryptanalysis over the group $\mathbb{Z}/N\mathbb{Z}$.
###### Matthias Fitzi, Chen-Da Liu-Zhang, Julian Loss
ePrint Report
Minimizing the round complexity of Byzantine Agreement (BA) protocols is a fundamental problem in distributed computing. The typical approach to achieve round efficient (randomized) BA is to have a weak form of BA, called graded consensus (GC), followed by a distributed coin, and to repeat this process until some termination condition is met---as introduced by Feldman and Micali (STOC'88).

In this work, we revisit the question of building BA from GC, or, more precisely, from generalizations of GC. Concretely, for Monte Carlo style BA, where the protocol is run for a fixed number of rounds as a function of the security parameter (in contrast to protocols with probabilistic termination), we demonstrate that this generalization helps to considerably reduce the round complexity of BA.

In particular, assuming a setup for threshold signatures among the parties and corruption threshold $t<n/3$, we improve over the round complexity of the best known protocol by a factor of $1/2$, asymptotically; this is achieved by applying one single Feldman-Micali iteration consisting of one (generalized) GC instance and one round of coin tossing.

Our technique also applies to the dishonest-minority case ($t<n/2$), yielding an improvement by a factor of $1/4$ (asymptotically) over the round complexity of the best known fixed-round protocol.
###### Frank Byszio, Dr. Klaus-Dieter Wirth, Dr. Kim Nguyen
ePrint Report
Intelligent Composed Algorithms (ICA) have been developed as a mechanism for introducing new cryptographic algorithms into applications and PKIs. Using ICAs, known cryptographic algorithms (Component-Algorithms) can be combined in order to obtain a stronger mix of cryptographic algorithms or primitives. Using ICAs, it is also possible to use known Component-Algorithms as mutual alternatives. Furthermore, the combined and alternative use of Component-Algorithms as ICAs shall enable agile use of cryptographic algorithms without having to change standards such as X.509 or CMS. An Intelligent Composed Algorithm is a flexible group of cryptographic algorithms together with the corresponding rules for their combination. The rules for the combination of Component-Algorithms are defined as algorithms (Controlling-Algorithms) themselves. In applications, ICAs are used as conventional algorithms, described by an algorithm identifier (an OID) and matching parameters. The chosen Component-Algorithms are defined by parameters of the Controlling-Algorithm. The use of ICAs imposes no need to modify higher-order standards for applications and protocols, such as X.509, RFC 5280, RFC 6960, RFC 2986, RFC 4210, and RFC 5652.
###### Elena Pagnin, Gunnar Gunnarsson, Pedram Talebi, Claudio Orlandi, Andrei Sabelfeld
ePrint Report
Ridesharing is revolutionizing the transportation industry in many countries. Yet, the state of the art is based on heavily centralized services and platforms, where the service providers have full possession of the users’ location data. Recently, researchers have started addressing the challenge of enabling privacy-preserving ridesharing. The initial proposals, however, have shortcomings, as some rely on a central party, some incur high performance penalties, and most do not consider time preferences for ridesharing. TOPPool encompasses ridesharing based on the proximity of end-points of a ride as well as partial itinerary overlaps. To achieve the latter, we propose a simple yet powerful reduction to a private set intersection on trips represented as sets of consecutive road segments. We show that TOPPool includes time preferences while preserving privacy and without relying on a third party. We evaluate our approach on real-world data from the New York Taxi & Limousine Commission. Our experiments demonstrate that TOPPool is superior in performance to prior work: our intersection-based itinerary matching runs in less than 0.3 seconds for reasonable trip lengths; in contrast, prior work takes up to 10 hours on the same set of trips.
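
The reduction at the heart of the itinerary matching is easy to picture in the clear (no privacy in this sketch, and the stop names are hypothetical; TOPPool computes the same intersection under a private set intersection protocol):

```python
def trip_segments(stops):
    # Represent an itinerary as its set of consecutive road segments.
    return {(a, b) for a, b in zip(stops, stops[1:])}

# Two trips sharing the tail B -> C -> D:
trip1 = trip_segments(["A", "B", "C", "D"])
trip2 = trip_segments(["X", "B", "C", "D"])
overlap = trip1 & trip2  # {("B", "C"), ("C", "D")}
```

A large intersection of segment sets indicates a long shared stretch of road, so two riders can pool for that portion without either of them revealing the rest of their route.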