International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 October 2022

Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, Baocang Wang
ePrint Report
BKZ is currently the most efficient algorithm for solving the approximate shortest vector problem (SVP$_\gamma$). One of the most important parameter choices in BKZ is the blocksize $\beta$, which greatly affects its efficiency. In 2016, Aono \textit{et al.} presented \textit{Improved Progressive BKZ} (pro-BKZ). Their work designed a blocksize strategy selection algorithm so that pro-BKZ runs faster than BKZ 2.0, which uses a fixed blocksize. However, pro-BKZ only considers enumeration as its subroutine, without using the more efficient lattice sieving algorithm. Besides, its blocksize strategy selection is not optimal, so the strategy selection algorithm can be further improved. In this paper, we present a new lattice solving algorithm called Improved Progressive pnj-BKZ (pro-pnj-BKZ), mainly based on an optimal blocksize strategy selection algorithm for BKZ with sieving, which relies on accurate time cost models and simulating algorithms. We propose the following approaches:

- New simulators and time cost models for sieving and BKZ with sieving. A simulator is used to simulate the lattice reduction process without running the lattice reduction algorithm itself. We give new simulators for sieving and BKZ to simulate the cases where blocks in BKZ with a sieve oracle jump by more than one dimension. We also give more accurate, experimentally derived time cost models for both sieving and BKZ with sieving. Specifically, we discover new relationships among time cost, blocksize and lattice dimension, which cannot be explained by the existing theoretical results, and discuss the reason.

- New two-step mode for solving the SVP$_\gamma$ problem with BKZ and sieving. Besides serving as a subroutine of BKZ, sieving can also be combined with BKZ to obtain a more efficient lattice solving algorithm, but the best way to combine them was previously unknown. We show that to solve the SVP$_\gamma$ problem more efficiently, one should first repeatedly run BKZ to reduce the lattice basis and finally run lattice sieving once, since BKZ performs better at lattice basis reduction, while sieving performs better at finding short vectors. Using our simulator, we can properly choose the point at which the algorithm ends the BKZ routine and begins sieving.

- New blocksize strategy selection algorithm for BKZ with sieving. Since the blocksize strategy selection algorithm in pro-BKZ is not optimal, we design a new one that guarantees an optimal strategy output. We implement both the pro-BKZ blocksize strategy selection algorithm and ours, along with our new BKZ simulator and two-step mode, to generate blocksize strategies. Simulation results show that the strategies generated by our new algorithm are more efficient at solving the SVP$_\gamma$ problem. Using the generated strategy, we improve efficiency by a factor of nearly $24.6$ compared with the heuristic blocksize strategy in G6K. We test the efficiency of pro-pnj-BKZ on the TU Darmstadt LWE challenge and break the LWE challenges with $(n,\alpha)\in\{(40,0.035),(50,0.025),(55,0.020),(90,0.005)\}$.
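To give a rough sense of what a reduction simulator does, the following Python sketch (not the pnj-BKZ simulator from the paper) predicts the log Gram-Schmidt profile of a $\beta$-reduced basis under the standard Geometric Series Assumption, using the common root-Hermite-factor estimate; all parameters are illustrative.

```python
import math

def delta(beta):
    # standard root-Hermite-factor estimate for blocksize beta (beta >= 50 or so)
    return ((beta / (2 * math.pi * math.e)) * (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def gsa_profile(dim, log_det, beta):
    # Under the Geometric Series Assumption, log ||b_i*|| decreases linearly in i.
    slope = -2 * math.log(delta(beta))
    # choose the intercept so the profile sums to the (log) determinant
    intercept = log_det / dim - slope * (dim - 1) / 2
    return [intercept + slope * i for i in range(dim)]

profile = gsa_profile(180, 0.0, 60)
assert abs(sum(profile)) < 1e-6      # determinant is preserved by reduction
assert profile[0] > profile[-1]      # profile decreases, as GSA predicts
```

Predicting such profiles without running the (expensive) reduction itself is what allows a blocksize strategy to be searched for offline.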
Ritam Bhaumik, André Chailloux, Paul Frixons, María Naya Plasencia
ePrint Report
In order to maintain a similar security level in a post-quantum setting, many symmetric primitives would have to double their key and state sizes. So far, no generic way of doing this is known that would provide convincing quantum security guarantees. In this paper we propose a new generic construction that allows one to double the key and state size of a block cipher. For this we have modified the ECB-Mix-ECB (EME) construction, as we have been able to mount a new type of superposition attack on EME, and we provide several classical and quantum security arguments and analyses for our new construction, QuEME. We propose a concrete instantiation of this construction with variants of AES-128.
Ward Beullens, Gregor Seiler
ePrint Report
The most compact quantum-safe proof systems for large circuits are PCP-type systems such as Ligero, Aurora, and Shockwave, which only use weak cryptographic assumptions, namely hash functions modeled as random oracles. One would expect that by allowing for stronger assumptions, such as the hardness of Module-SIS, it should be possible to design more compact proof systems. But alas, despite considerable progress in lattice-based proofs, no such proof system had been known so far. We rectify this situation by introducing a Lattice-Based Recursively Amortized Demonstration Of R1CS (LaBRADOR), with more compact proof sizes than known hash-based proof systems, both asymptotically and concretely for all relevant circuit sizes. LaBRADOR proves knowledge of a solution for an R1CS mod $2^{64}+1$ with $2^{20}$ constraints, with a proof size of only 58 KB, an order of magnitude more compact than previous quantum-safe proofs.
Bart Mennink
ePrint Report
At SAC 2011, Bertoni et al. introduced the keyed duplex construction as a tool to build permutation-based authenticated encryption schemes. The construction was generalized to full-state absorption by Mennink et al. (ASIACRYPT 2015). Daemen et al. (ASIACRYPT 2017) generalized it further to cover many more use cases and proved the security of this general construction, and Dobraunig and Mennink (ASIACRYPT 2019) derived a leakage resilience security bound for it. Due to its generality, the full-state keyed duplex construction that we know today has a plethora of applications, but the flip side of the coin is that the general construction is hard to grasp and the corresponding security bounds are very complex. Consequently, the state-of-the-art results on the full-state keyed duplex construction are not used to the fullest. In this work, we revisit the history of the duplex construction, give a comprehensive discussion of its possibilities and limitations, and demonstrate how the two security bounds (of Daemen et al. and of Dobraunig and Mennink) can be interpreted in particular applications of the duplex.
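For readers unfamiliar with the duplex interface, here is a minimal keyed-duplex sketch in Python. The permutation is a stand-in built from SHA-256 and the rate/capacity split is arbitrary, so this only illustrates the absorb-permute-squeeze calling pattern, not any of the constructions analyzed in the paper.

```python
import hashlib

RATE, CAPACITY = 8, 24  # bytes; toy parameters, not from the paper

def perm(state: bytes) -> bytes:
    # stand-in for a cryptographic permutation (SHA-256 is NOT a permutation)
    return hashlib.sha256(state).digest()[:RATE + CAPACITY]

class Duplex:
    def __init__(self, key: bytes):
        # full-state absorption of the key into the initial state
        self.state = perm(key.ljust(RATE + CAPACITY, b"\x00"))

    def duplexing(self, block: bytes) -> bytes:
        # absorb up to RATE bytes into the outer part, permute, squeeze RATE bytes
        padded = block.ljust(RATE, b"\x00") + bytes(CAPACITY)
        self.state = perm(bytes(a ^ b for a, b in zip(padded, self.state)))
        return self.state[:RATE]

d1, d2 = Duplex(b"key A"), Duplex(b"key A")
assert d1.duplexing(b"m0") == d2.duplexing(b"m0")  # same key, same transcript
```

Each duplexing call both absorbs input and returns rate bytes, which is what lets a single object provide encryption and authentication simultaneously.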
Huanhuan Chen, Yao Jiang Galteland, Kaitai Liang
ePrint Report
We define a stronger confidentiality notion for ciphertext-dependent updatable encryption. The new notion not only captures the adaptive security that was not covered in prior works (CRYPTO 2017, ASIACRYPT 2020), but also supports both deterministic and randomized constructions. We revise the public key encryption scheme introduced by Micciancio et al. (EUROCRYPT 2012) into a simpler scheme using lattice trapdoor techniques. Based on the resulting scheme, we further propose a new updatable encryption construction that achieves the new notion under the Learning with Errors assumption.
Sebastian Ramacher, Daniel Slamanig, Andreas Weninger
ePrint Report
Authenticated key-exchange (AKE) protocols are an important class of protocols that allow two parties to establish a common session key over an insecure channel such as the Internet to then protect their communication. They are widely deployed in security protocols such as TLS, IPsec and SSH. Besides the confidentiality of the communicated data, an orthogonal but increasingly important goal is the protection of the confidentiality of the identities of the involved parties (aka privacy). For instance, the Encrypted Client Hello (ECH) mechanism for TLS 1.3 has been designed for exactly this reason. Recently, a series of works (Zhao CCS'16, Arfaoui et al. PoPETS'19, Schäge et al. PKC'20) studied privacy guarantees of (existing) AKE protocols by integrating privacy into AKE models. We observe that these so-called privacy-preserving AKE (PPAKE) models are typically strongly tailored to the specific setting, i.e., the concrete protocols they investigate. Moreover, the privacy guarantees in these models might be too weak (or even non-existent) when facing active adversaries.

In this work we set the goal of providing a single PPAKE model that captures privacy guarantees against different types of attacks, thereby covering previously proposed notions as well as privacy guarantees not achieved so far. In doing so, we obtain different "degrees" of privacy within a single model, which, in their strongest forms, also capture privacy guarantees against powerful active adversaries. We then proceed to investigate (generic) constructions of AKE protocols that provide strong privacy guarantees in our PPAKE model. This includes classical Diffie-Hellman type protocols as well as protocols based on generic building blocks, thus covering post-quantum instantiations.
Timo Glaser, Alexander May
ePrint Report
In the Learning with Errors (LWE) problem we are given a matrix $A \in \mathbb{Z}_q^{N \times N}$ and a target vector $\vec{t} \in \mathbb{Z}_q^N$ such that there exist small-norm $\vec{s}, \vec{e}\in \mathbb{Z}_q^N$ satisfying $A\cdot\vec{s} = \vec{t} + \vec{e} \bmod q$. Modern cryptosystems often sample $\vec{s}, \vec{e}$ from narrow distributions that take integer values in a small range $[-\eta, \eta]$. Kyber and Dilithium choose $\eta=2$ and $\eta=3$, using either a Centered Binomial distribution (Kyber) or a uniform distribution (Dilithium). In this work, we address the fundamental question of how hard the enumeration of LWE secret keys for narrow distributions with $\eta \leq 3$ is. At CRYPTO 2021, May proposed a representation-based algorithm for enumerating ternary keys, i.e. the case $\eta = 1$, with a fixed number of $\pm 1$ entries. In this work, we extend May's algorithm in several ways. First, we show how to deal with keys sampled from a probability distribution, as in many modern systems like Kyber and Dilithium, rather than with keys having a fixed number of entries. Second, we generalize to larger values $\eta= 2, 3$, thereby achieving asymptotic key guess complexities that are not far off from lattice estimates. E.g., for Kyber's Centered Binomial distribution we achieve heuristic time/memory complexities of $\mathcal{O}(2^{0.36N})$ for $\eta=2$, and $\mathcal{O}(2^{0.37N})$ for $\eta=3$. For Dilithium's uniform distribution we achieve heuristic complexity $\mathcal{O}(2^{0.38N})$ for $\eta=2$. Let $S$ be the Shannon entropy of Kyber/Dilithium keys. Then our algorithm runs in time about $S^{\frac 16}$, which greatly improves over the standard combinatorial Meet-in-the-Middle attack with complexity $S^{\frac 12}$. Our results also compare well to current lattice asymptotics of $2^{0.29 \beta}$, where the lattice parameter $\beta$ is roughly of size $\frac 45 N$. Thus, our analysis supports that Kyber secret keys are indeed hard to enumerate. Yet, we find it remarkable that a purely combinatorial key search is almost competitive with highly evolved lattice sieving techniques.
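The baseline that such enumeration improves on can be seen in a plain Odlyzko-style meet-in-the-middle key search, sketched below on a tiny, noise-free toy instance (this is not May's representation technique, and all parameters are illustrative): split the secret in half and trade a $(2\eta+1)^N$ search for two lists of size $(2\eta+1)^{N/2}$.

```python
import itertools, random

random.seed(1)
q, n, eta = 97, 8, 1  # toy parameters; real schemes use n in the hundreds
half = n // 2
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
s = [random.randrange(-eta, eta + 1) for _ in range(n)]
t = [sum(A[i][j] * s[j] for j in range(n)) % q for i in range(n)]  # noise-free toy

# left halves: store A1*s1 -> s1 in a table
table = {}
for s1 in itertools.product(range(-eta, eta + 1), repeat=half):
    key = tuple(sum(A[i][j] * s1[j] for j in range(half)) % q for i in range(n))
    table[key] = s1

# right halves: look up t - A2*s2; a hit means A*(s1|s2) = t
found = None
for s2 in itertools.product(range(-eta, eta + 1), repeat=half):
    target = tuple((t[i] - sum(A[i][half + j] * s2[j] for j in range(half))) % q
                   for i in range(n))
    if target in table:
        found = list(table[target]) + list(s2)
        break

assert all(sum(A[i][j] * found[j] for j in range(n)) % q == t[i] for i in range(n))
```

With LWE noise the matching step becomes approximate (e.g. bucketing by rounded coordinates), which is one of the complications the representation technique addresses.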
Tomoyuki Morimae, Takashi Yamakawa
ePrint Report
The existence of one-way functions is one of the most fundamental assumptions in classical cryptography. In the quantum world, on the other hand, there is evidence that some cryptographic primitives can exist even if one-way functions do not exist [Morimae and Yamakawa, CRYPTO 2022; Ananth, Qian, and Yuen, CRYPTO 2022]. We therefore have the following important open problem in quantum cryptography: What is the most fundamental element in quantum cryptography? In this direction, Brakerski, Canetti, and Qian [arXiv:2209.04101] recently defined a notion called EFI pairs, which are pairs of efficiently generatable states that are statistically distinguishable but computationally indistinguishable, and showed its equivalence with some cryptographic primitives including commitments, oblivious transfer, and general multi-party computations. However, their work focuses on decision-type primitives and does not cover search-type primitives like quantum money and digital signatures. In this paper, we study properties of one-way state generators (OWSGs), which are a quantum analogue of one-way functions proposed by Morimae and Yamakawa. We first revisit the definition of OWSGs and generalize it by allowing mixed output states. Then we show the following results.

(1) We define a weaker version of OWSGs, which we call weak OWSGs, and show that they are equivalent to OWSGs. It is a quantum analogue of the amplification theorem for classical weak one-way functions.

(2) (Bounded-time-secure) quantum digital signatures with quantum public keys are equivalent to OWSGs.

(3) Private-key quantum money schemes (with pure money states) imply OWSGs.

(4) Quantum pseudo one-time pad schemes imply both OWSGs and EFI pairs. For EFI pairs, single-copy security suffices.

(5) We introduce an incomparable variant of OWSGs, which we call secretly-verifiable and statistically-invertible OWSGs, and show that they are equivalent to EFI pairs.
Kai Hu, Thomas Peyrin
ePrint Report
The higher-order differential-linear (HDL) attack was studied for the first time by Biham, Dunkelman, and Keller at FSE 2005, where a linear approximation is appended to a higher-order differential (HD) transition. It is a natural generalization of the differential-linear (DL) attack. Unfortunately, due to some restrictions on its practical usage, HDL cryptanalysis has attracted much less attention than its DL counterpart since its proposal. Inspired by the algebraic perspective on DL attacks recently proposed at CRYPTO 2021, in this paper we show that the HDL attack can be made much more practical with an algebraic treatment, turning this 17-year-old attack into another go-to tool for cryptanalysts.

Unsurprisingly, HD/HDL attacks have the potential to be more effective than their simpler differential/DL counterparts. We provide three novel methods to detect possible HD/HDL distinguishers, including: (a) an estimation of the algebraic degree of the differential supporting function (DSF); (b) the higher-order algebraic transitional form (HATF); (c) experimental methods based on cube testers. With these methods, we greatly improve the distinguishing attacks on the 8-round Ascon permutation under the black-box model from $2^{130}$ to $2^{46}$. Also, we give a new zero-sum distinguisher for the full 12-round Ascon permutation with only $2^{55}$ time/data complexity, improving over the previous best one that requires $2^{130}$ calls (we make clear that this does not impact the full Ascon AEAD scheme). For the 4-round Ascon initialization, a deterministic 2nd-order HDL distinguisher is proposed with only four nonces.

Besides the distinguishers, the HATF technique allows us to handle the probabilistic HD/HDL properties of cryptographic primitives. This leads to a conditional HDL attack on the 5-round Ascon initialization that can recover all the key bits and performs 8 times faster than the conditional DL attack. To the best of our knowledge, this is the first theoretical work to propose a probabilistic HDL attack since the attack was first published. All our attacks in this paper apply to both Ascon-128 and Ascon-128a. We also give a conditional HD approximation for 130-round Grain v1 (reaching 5 more rounds than the previous best conditional differential approximation) and new 4-round deterministic HDL distinguishers for the Xoodoo permutation with only four chosen plaintexts. Finally, we apply our strategy to the ARX-based cipher ChaCha, obtaining 3.5-, 4- and 4.5-round distinguishers and again improving over the state of the art. Our cryptanalyses do not threaten the security of the ciphers mentioned in this paper.
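The zero-sum principle behind cube testers can be demonstrated on a toy example: if a map has algebraic degree at most $d$ over GF(2), then the XOR of its outputs over any cube of more than $d$ input variables is zero, because that sum equals an ANF coefficient of a monomial of too-high degree. The round function below is a chi-like toy map of degree 2 per round (it is not Ascon, and the rotation amounts are arbitrary).

```python
def rotl(x, r, w=16):
    # rotate a w-bit word left by r positions
    return ((x << r) | (x >> (w - r))) & ((1 << w) - 1)

def round_fn(x):
    # chi-like toy round: algebraic degree 2 over GF(2)
    return x ^ (rotl(x, 1) & rotl(x, 2)) ^ rotl(x, 3)

def perm(x, rounds=2):
    for _ in range(rounds):
        x = round_fn(x)
    return x

def cube_sum(cube_bits, base=0, rounds=2):
    # XOR the outputs over all 2^|cube| assignments of the cube variables
    acc = 0
    for mask in range(1 << len(cube_bits)):
        x = base
        for k, b in enumerate(cube_bits):
            if (mask >> k) & 1:
                x |= 1 << b
        acc ^= perm(x, rounds)
    return acc

# 2 rounds have degree at most 4, so any 5-dimensional cube sums to zero
assert cube_sum([0, 4, 7, 9, 12]) == 0
# 3 rounds have degree at most 8, so a 9-dimensional cube sums to zero
assert cube_sum(list(range(9)), rounds=3) == 0
```

Detecting that a cube sum is zero (or biased) far below the generic complexity is exactly what turns such degree estimates into distinguishers.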
Trey Li
ePrint Report
We propose a new identification scheme and a new signature scheme from the multiple modular subset product with errors problem; as well as a new identification scheme and a new signature scheme from the multiple modular subset sum with errors problem.
Sajin Sasy, Aaron Johnson, Ian Goldberg
ePrint Report
Several privacy-preserving analytics frameworks have been proposed that use trusted execution environments (TEEs) like Intel SGX. Such frameworks often use compaction and shuffling as core primitives. However, due to advances in TEE side-channel attacks, these primitives, and the applications that use them, should be _fully oblivious_; that is, perform instruction sequences and memory accesses that do not depend on the secret inputs. Such obliviousness would eliminate the threat of leaking private information through memory or timing side channels, but achieving it naively can result in a significant performance cost.

In this work, we present fast, fully oblivious algorithms for compaction and shuffling. We implement and evaluate our designs to show that they are practical and outperform the state of the art. Our oblivious compaction algorithm, ORCompact, is always faster than the best alternative and can yield up to a 5x performance improvement. For oblivious shuffling, we provide two novel algorithms: ORShuffle and BORPStream. ORShuffle outperforms prior fully oblivious shuffles in all experiments, and it provides the largest speed increases—up to 1.8x—when shuffling a large number of small items. BORPStream outperforms all other algorithms when shuffling a large number of large items, with a speedup of up to 1.4x in such cases. It can obtain even larger performance improvements in application settings where the items to shuffle arrive incrementally over time, obtaining a speedup of as much as 4.2x. We additionally give parallel versions of all of our algorithms, prove that they have low parallel step complexity, and experimentally show a 5–6x speedup on an 8-core processor.

Finally, ours is the first work with the explicit goal of ensuring full obliviousness of complex functionalities down to the implementation level. To this end, we design the Fully Oblivious Assembly Verifier (FOAV), a tool that verifies that a binary has no secret-dependent conditional branches.
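The core idea of oblivious compaction, a fixed, input-independent sequence of operations, can be sketched with an odd-even transposition network over branchless conditional swaps. This is not ORCompact (it costs $O(n^2)$ comparators rather than the paper's algorithms), and Python integers are not genuinely constant-time; the sketch only illustrates that every comparator executes regardless of the flags.

```python
def ct_select(c, x, y):
    # returns x if c == 1 else y, for non-negative ints, without branching on c
    return y ^ ((x ^ y) * c)

def oblivious_compact(items, flags):
    # Move flagged items to the front using a fixed schedule of
    # compare-and-swaps (odd-even transposition sort on the negated flags).
    # Every comparator runs on every input, so the access pattern is
    # independent of the secret flags.
    n = len(items)
    keys = [1 - f for f in flags]
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            swap = int(keys[i] > keys[i + 1])  # 1 iff pair is out of order
            keys[i], keys[i + 1] = (ct_select(swap, keys[i + 1], keys[i]),
                                    ct_select(swap, keys[i], keys[i + 1]))
            items[i], items[i + 1] = (ct_select(swap, items[i + 1], items[i]),
                                      ct_select(swap, items[i], items[i + 1]))
    return items

# flagged items (20, 40, 50) move to the front, order preserved
assert oblivious_compact([10, 20, 30, 40, 50], [0, 1, 0, 1, 1]) == [20, 40, 50, 10, 30]
```

Adjacent swaps with a strict comparison make the network stable, so compaction preserves the relative order of the flagged items.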
Nikolaos Makriyannis
ePrint Report
The classic MPC protocol for Schnorr Signatures (Classic Schnorr) consists of a simple three-round process for the signing operation, and the protocol is essentially as efficient as the underlying non-MPC scheme (modulo the round-complexity). In particular, Classic Schnorr does not contain any ZK proofs, not even for key-generation, and the only cryptographic “machinery” it uses is the underlying hash function. In this paper, we show that Classic Schnorr UC realizes the ideal threshold-signature functionality of Canetti, Makriyannis, and Peled (Manuscript’20) against adaptive adversaries for any number of corrupted parties. Furthermore, (1) the protocol does not impose any restrictions on the number of concurrent signings, (2) the protocol naturally supports identifiable abort, and (3) the protocol can be extended to achieve proactive security, almost for free. So, the main novelty of our work is showing that Classic Schnorr achieves the utmost security as a threshold-signatures protocol. We hold that the achieved security is truly surprising given how simple the protocol is.

On a technical level, we show the above by extending the proof technique of Canetti, Makriyannis, and Peled, recently generalized by Blokh, Makriyannis, and Peled (Manuscript’22) for arbitrary threshold-signature schemes, whereby the indistinguishability of the UC simulation is reduced to the unforgeability of the underlying signature scheme. Our results hold in the random oracle model under the discrete logarithm assumption.
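The three signing rounds of such a Schnorr MPC protocol can be walked through in a toy Python example over a tiny Schnorr group. This sketch omits distributed key generation, commitment randomness, and all of the simulation/UC machinery; the group parameters and hash encoding are illustrative only.

```python
import hashlib, random

random.seed(7)
p, q, g = 23, 11, 2  # tiny Schnorr group: g has order q modulo p (toy only)

def H(*args):
    # toy hash into Z_q
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

n_parties = 3
shares = [random.randrange(q) for _ in range(n_parties)]  # additive shares x_i
X = pow(g, sum(shares) % q, p)                            # joint public key
msg = "hello"

# round 1: each party commits to its nonce share R_i = g^{k_i}
ks = [random.randrange(q) for _ in range(n_parties)]
Rs = [pow(g, k, p) for k in ks]
commits = [H("com", i, Rs[i]) for i in range(n_parties)]

# round 2: parties reveal R_i; everyone checks the commitments
assert all(commits[i] == H("com", i, Rs[i]) for i in range(n_parties))

# round 3: aggregate nonce, common challenge, additive response shares
R = 1
for Ri in Rs:
    R = R * Ri % p
c = H(X, R, msg)
zs = [(ks[i] + c * shares[i]) % q for i in range(n_parties)]
z = sum(zs) % q

# the combined (R, z) verifies as an ordinary Schnorr signature
assert pow(g, z, p) == R * pow(X, c, p) % p
```

Verification works because $g^z = g^{\sum k_i + c\sum x_i} = R \cdot X^c$, which is why the threshold protocol needs no machinery beyond the hash function.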
Dario Catalano, Dario Fiore, Ida Tucker
ePrint Report
Functional Commitments (FC) allow one to reveal functions of committed data in a succinct and verifiable way. In this paper we put forward the notion of additive-homomorphic FC and show two efficient, pairing-based, realizations of this primitive supporting multivariate polynomials of constant degree and monotone span programs, respectively. We also show applications of the new primitive in the contexts of homomorphic signatures: we show that additive-homomorphic FCs can be used to realize homomorphic signatures (supporting the same class of functionalities as the underlying FC) in a simple and elegant way. Using our new FCs as underlying building blocks, this leads to the (seemingly) first expressive realizations of multi-input homomorphic signatures not relying on lattices or multilinear maps.
Lei Bi, Xianhui Lu, Junjie Luo, Kunpeng Wang
ePrint Report
The Learning with Errors (LWE) problem is one of the most prominent problems in lattice-based cryptography. Many practical LWE-based schemes, including Fully Homomorphic Encryption (FHE), use a sparse ternary secret for the sake of efficiency. Several (hybrid) attacks have been proposed that benefit from such sparseness, so researchers believe the security of schemes with sparse ternary secrets is not yet well understood. Recently, May [CRYPTO 2021] proposed an efficient meet-in-the-middle attack named Meet-LWE for LWE with ternary secret, which significantly improves Odlyzko's algorithm. In this work, we generalize May's Meet-LWE and then introduce a new hybrid attack which combines Meet-LWE with the lattice dual attack. We apply our algorithm to FHE-type parameters of the LWE problem and compare it with previous hybrid dual attacks. The results show that our attack outperforms other attacks in a large range of parameters. We note that our attack has no impact on the LWE-based schemes in the NIST PQC standardization process, as their secrets are not sparse and/or not ternary.
Andre Esser, Floyd Zweydinger
ePrint Report
We propose new time-memory trade-offs for the random subset sum problem defined on $(a_1,\ldots,a_n,t)$ over $\mathbb{Z}_{2^n}$.

Our trade-offs yield significant running time improvements for every fixed memory limit $M\geq2^{0.091n}$. Furthermore, we interpolate to the running times of the fastest known algorithms when memory is not limited. Technically, our design introduces a pruning strategy to the construction by Becker-Coron-Joux (BCJ) that allows for an exponentially small success probability. We compensate for this reduced probability by multiple randomized executions. Our main improvement stems from the clever reuse of parts of the computation in subsequent executions to reduce the time complexity per iteration.

As an application of our construction, we derive the first non-trivial time-memory trade-offs for Information Set Decoding (ISD) algorithms. Our new algorithms improve on previous (implicit) trade-offs asymptotically as well as practically. Moreover, our optimized implementation also improves on running time, due to reduced memory access costs. We demonstrate this by obtaining a new record computation in decoding quasi-cyclic codes (QC-3138). Using our newly obtained data points we then extrapolate the hardness of suggested parameter sets for the NIST PQC fourth round candidates McEliece, BIKE and HQC, lowering previous estimates by up to 6 bits and further increasing their reliability.
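The classic baseline these trade-offs build on is a Horowitz-Sahni style meet-in-the-middle for subset sum, running in time and memory about $2^{n/2}$. The toy Python sketch below does not implement the BCJ representation technique or the pruning-based trade-off from the paper; it only shows the list-matching structure on a tiny instance.

```python
import itertools, random

random.seed(3)
n = 16
mod = 1 << n
a = [random.randrange(mod) for _ in range(n)]
secret = [random.randrange(2) for _ in range(n)]
t = sum(ai * ei for ai, ei in zip(a, secret)) % mod  # random subset sum target

half = n // 2
# left halves: store partial sum -> choice vector (2^{n/2} entries)
table = {}
for bits in itertools.product((0, 1), repeat=half):
    s = sum(a[j] * bits[j] for j in range(half)) % mod
    table.setdefault(s, bits)

# right halves: look for a complementary partial sum
e = None
for bits in itertools.product((0, 1), repeat=half):
    s = sum(a[half + j] * bits[j] for j in range(half)) % mod
    left = table.get((t - s) % mod)
    if left is not None:
        e = list(left) + list(bits)
        break

assert sum(ai * ei for ai, ei in zip(a, e)) % mod == t
```

Memory-limited variants must shrink the table below $2^{n/2}$, which is precisely the regime where pruning and repeated randomized executions come into play.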
Andre Esser
ePrint Report
The syndrome decoding problem lies at the heart of code-based cryptographic constructions. Information Set Decoding (ISD) algorithms are commonly used to assess the security of these systems. The most efficient ISD algorithms rely heavily on nearest neighbor search techniques. However, the runtime result of the fastest known ISD algorithm by Both-May (PQCrypto '17) was recently challenged by Carrier et al. (Asiacrypt '22), who themselves introduced a new technique called RLPN decoding, which yields improvements over ISD for codes with small rates $\frac{k}{n}\leq 0.3$.

In this work we first revisit the Both-May algorithm, by giving a clean exposition and a corrected analysis. We confirm the claim by Carrier et al. that the initial analysis is flawed. However, we find that the algorithm still (slightly) improves on time complexity and significantly improves on memory complexity over previous algorithms. Our first main contribution is therefore to set the correct baseline for further improvements.

As a second main contribution we then show how to improve on the Both-May algorithm, lowering the worst-case running time in the full distance decoding setting to $2^{0.0948n}$. We obtain even higher time complexity gains for small rates, shifting the break-even point with RLPN decoding to rate $\frac{k}{n}=0.25$. Moreover, we significantly improve on memory for all rates $\frac{k}{n}<0.5$. We obtain our improvement by introducing a novel technique to combine the list construction step and the list filtering step commonly applied by ISD algorithms. To this end, we treat the nearest neighbor routine in a non-black-box fashion, which allows us to embed the filtering into the nearest neighbor search. In this context we introduce the fixed-weight nearest neighbor problem and propose a first algorithm to solve it. Besides resulting in an improved decoding algorithm, this opens the direction for further improvements, since any faster algorithm for solving this fixed-weight nearest neighbor variant is likely to lead to further improvements of our ISD algorithm.
Trey Li
ePrint Report
We give a new post-quantum public key cryptosystem from the multiple modular subset product with errors problem.
Divesh Aggarwal, Marshall Ball, Maciej Obremski
ePrint Report
Non-malleable codes are a natural relaxation of error correction and error detection codes applicable in scenarios where error-correction or error-detection is impossible.

Over the last decade, non-malleable codes have been studied for a wide variety of tampering families. Among the most well studied of these is the split-state family of tampering channels, where the codeword is split into two or more parts and each part is tampered independently.

We survey various constructions and applications of non-malleable codes in the split-state model.

07 October 2022

Coinbase
Job Posting

We have a senior/staff cryptography researcher position available in the Blockchain Security team at Coinbase.

Full descriptions are available here:

  • Senior role: https://www.coinbase.com/careers/positions/4582729
  • Staff role: https://www.coinbase.com/careers/positions/4586906

Please reach out to me at [s] [last name] @pm.me if you have any questions.

Closing date for applications:

Contact: Shashank Agrawal

More information: https://www.coinbase.com/careers/positions/4582729

Aarhus University, Department of Computer Science, Denmark
Job Posting
Aarhus University, Denmark - an international top-100 university - has made an ambitious strategic investment in a recruitment plan to radically expand the Department of Computer Science. For this call, we encourage applicants who can contribute to our basic and strategic research in Cryptography and Security. The department therefore invites applications from candidates who are driven by excellence in research and teaching as well as by external collaboration on societal challenges. The successful candidate will have the opportunity to contribute to and shape the research and teaching, as well as the industrial and societal collaboration, associated with the department expansion. The position is open from February 1st, 2023.

The department has research groups within "Algorithms, Data Structures and Foundations for Machine Learning", "Data-Intensive Systems", "Computational Complexity and Game Theory", "Cryptography and Security", "Logic and Semantics", "Ubiquitous Computing and Interaction", "Human Computer Interaction" and "Programming Languages". This call is aimed at strengthening and expanding our strategic development of excellent research in Cryptography and Security, including, for instance, public-key cryptography, cryptographic protocols, secure computation, post-quantum cryptography and quantum cryptography.

Applicants for full professor positions are expected to meet the criteria for full professors. All applicants are requested to include a link to their Google Scholar profile in their application. Your application must be uploaded together with letters of recommendation.

As a department, we wish to build a computer science research and study environment with equality and diversity as a core value for recruitment as well as for daily study and work life. If you have visions for or experience with activities or initiatives that support such a core value in a computer science context, we encourage you to describe them in your application.

Closing date for applications:

Contact: Head of Department, Professor Kaj Grønbæk kgronbak@cs.au.dk.

More information: https://www.au.dk/om/stillinger/job/full-professorship-in-cryptography-and-security-at-computer-science-aarhus-university
