International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

13 October 2023

Zeyuan Yin, Bingsheng Zhang, Andrii Nastenko, Roman Oliynykov, Kui Ren
ePrint Report
Typically, a decentralized collaborative blockchain decision-making mechanism is realized by remote voting. To date, a number of blockchain voting schemes have been proposed; however, to the best of our knowledge, none of these schemes achieve coercion-resistance. In particular, for most blockchain voting schemes, the randomness used by the voting client can be viewed as a witness/proof of the actual vote, which enables improper behaviors such as coercion and vote-buying. Unfortunately, the existing coercion-resistant voting schemes cannot be directly adopted in the blockchain context. In this work, we design the first scalable coercion-resistant blockchain decision-making scheme that supports private differential voting power and 1-layer liquid democracy as introduced by Zhang et al. (NDSS'19). Its overall complexity is $O(n)$, where $n$ is the number of voters. Moreover, the ballot size is reduced from Zhang et al.'s $\Theta(m)$ to $\Theta(1)$, where $m$ is the number of experts and/or candidates. Its incoercibility is formally proven under the UC incoercibility framework by Alwen et al. (Crypto'15). We implement a prototype of the scheme, and the evaluation results show that our scheme's tally execution time is more than 6x faster than VoteAgain (USENIX'20) in an election with over 10,000 voters and an extra ballot rate of over 50%.
Léo Ducas, Andre Esser, Simona Etinski, Elena Kirshanova
ePrint Report
A recent work by Guo, Johansson, and Nguyen (Eprint'23) proposes a promising adaptation of Sieving techniques from lattices to codes, in particular, by claiming concrete cryptanalytic improvements on various schemes. The core of their algorithm reduces to a Near Neighbor Search (NNS) problem, for which they devise an ad-hoc approach. In this work, we aim for a better theoretical understanding of this approach. First, we provide an asymptotic analysis which is not present in the original paper. Second, we propose a more systematic use of well-established NNS machinery, known as Locality Sensitive Hashing and Filtering (LSH/F). LSH/F is an approach that has been applied very successfully in the case of sieving over lattices. We thus establish the first baseline for the sieving approach with a decoding complexity of $2^{0.117n}$ for the conventional worst parameters (full distance decoding, where complexity is maximized over all code rates). Our cumulative improvements eventually enable us to lower the hardest parameter decoding complexity for SievingISD algorithms to $2^{0.101n}$. This approach outperforms the BJMM algorithm (Eurocrypt'12) but falls behind the most advanced conventional ISD approach by Both and May (PQCrypto'18). As for lattices, we found the Random-Spherical-Code-Product (RPC) to give the best asymptotic complexity. Moreover, we also consider an alternative that seems specific to the Hamming Sphere, which we believe could be of practical interest as it plausibly hides less sub-exponential overheads than RPC.
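To make the generic near-neighbor-search machinery concrete, here is a minimal, self-contained sketch of bit-sampling LSH for the Hamming metric (random coordinate projections as bucketing keys). It illustrates the general LSH/F idea only; it is not the paper's RPC-based filtering, and all parameters are illustrative.

```python
# Toy bit-sampling LSH for near-neighbor search in the Hamming metric.
# Vectors landing in the same bucket (same projection onto a random coordinate
# subset) become candidate pairs and are then checked exactly.
import random

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def build_buckets(vectors, coords):
    """Group vectors by their projection onto a fixed coordinate subset."""
    buckets = {}
    for idx, v in enumerate(vectors):
        key = tuple(v[c] for c in coords)
        buckets.setdefault(key, []).append(idx)
    return buckets

def near_neighbor_pairs(vectors, n, k=8, tables=20, threshold=None):
    """Return candidate pairs at Hamming distance <= threshold."""
    if threshold is None:
        threshold = n // 4
    found = set()
    for _ in range(tables):
        coords = random.sample(range(n), k)
        for bucket in build_buckets(vectors, coords).values():
            for i in range(len(bucket)):
                for j in range(i + 1, len(bucket)):
                    a, b = bucket[i], bucket[j]
                    if hamming(vectors[a], vectors[b]) <= threshold:
                        found.add((min(a, b), max(a, b)))
    return found

if __name__ == "__main__":
    random.seed(0)
    n = 64
    vecs = [[random.randint(0, 1) for _ in range(n)] for _ in range(200)]
    vecs.append(vecs[0][:])   # plant a close pair
    vecs[-1][3] ^= 1
    print(near_neighbor_pairs(vecs, n))
```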
Bruno Sterner
ePrint Report
We give a new approach for finding large twin smooth integers. Those twins whose sum is a prime are of interest in the parameter setup of certain isogeny-based cryptosystems such as SQISign. The approach to find such twins is to find two polynomials in $\mathbb{Q}[x]$ that split into a product of small-degree factors and differ by $1$, and then evaluate them at a particular smooth integer. This was first explored by Costello, Meyer and Naehrig at EUROCRYPT'21 using polynomials that split completely into linear factors, which were obtained from Diophantine number theory. The polynomials used in this work split into mostly linear factors with the exception of a few quadratic factors. Some of these linear factors are repeated, and so the overall smoothness probability is either better than or comparable to that of the prior polynomials. We utilise these polynomials to search for large twin smooth integers whose sum is prime. In particular, the smoothness bounds of the $384$- and $512$-bit instances that we find are significantly smaller than those found in EUROCRYPT'21.
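As a toy illustration of the underlying mechanism (not the polynomials from the paper), the following sketch uses the simplest split pair $(x^2-1, x^2)$: both values are smooth whenever $x-1$, $x$ and $x+1$ are, and their sum $2x^2-1$ can then be tested for primality. The smoothness bound and search range are arbitrary.

```python
# Toy twin-smooth search: the polynomials x^2 - 1 = (x-1)(x+1) and x^2 differ
# by 1 and split into small factors, so evaluating them at x yields twin smooth
# integers whenever x - 1, x and x + 1 are all smooth.
from sympy import factorint, isprime

def is_smooth(n, bound):
    return all(p <= bound for p in factorint(n))

def twin_smooth_from_square(x, bound):
    m, m1 = x * x - 1, x * x                 # the twins (m, m + 1)
    if is_smooth(m, bound) and is_smooth(m1, bound):
        return m, m1, isprime(m + m1)        # prime sum is the interesting case
    return None

if __name__ == "__main__":
    bound = 20
    for x in range(2, 2000):
        hit = twin_smooth_from_square(x, bound)
        if hit and hit[2]:
            print(f"x={x}: twins {hit[0]}, {hit[1]}, sum is prime")
```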
Panagiotis Chatzigiannis, Konstantinos Chalkias, Aniket Kate, Easwar Vivek Mangipudi, Mohsen Minaei, Mainack Mondal
ePrint Report
Account recovery enables users to regain access to their accounts when they lose their authentication credentials. While account recovery is well established and extensively studied in the Web2 (traditional web) context, Web3 account recovery presents unique challenges. In Web3, accounts rely on a (cryptographically secure) private-public key pair as their credential, which is not expected to be shared with a single entity like a server owing to security concerns. This makes account recovery in the Web3 world distinct from the Web2 landscape, often proving to be challenging or even impossible. As account recovery has proven crucial for Web2 authenticated systems, various solutions have emerged to address account recovery in the Web3 blockchain ecosystem in order to make it more friendly and accessible to everyday users, without "punishing" users if they make honest mistakes. This study systematically examines existing account recovery solutions within the blockchain realm, delving into their workflows, underlying cryptographic mechanisms, and distinct characteristics. After highlighting the trilemma between usability, security, and availability encountered in the Web3 recovery setting, we systematize the existing recovery mechanisms across several axes which showcase those tradeoffs. Based on our findings, we provide a number of insights and future research directions in this field.
Ashrujit Ghoshal, Mingxun Zhou, Elaine Shi
ePrint Report
Classically, Private Information Retrieval (PIR) was studied in a setting without any pre-processing. In this setting, it is well-known that 1) public-key cryptography is necessary to achieve non-trivial (i.e., sublinear) communication efficiency in the single-server setting, and 2) the total server computation per query must be linear in the size of the database, whether in the single-server or multi-server setting. Recent works have shown that both of these barriers can be overcome if we are willing to introduce a pre-processing phase. In particular, a recent work called Piano showed that using only one-way functions, one can construct a single-server preprocessing PIR with $\widetilde{O}(\sqrt{n})$ bandwidth and computation per query, assuming $\widetilde{O}(\sqrt{n})$ client storage. For the two-server setting, the state of the art is defined by two incomparable results. First, Piano immediately implies a scheme in the two-server setting with the same performance bounds as stated above. Second, Beimel et al. showed a two-server scheme with $O(n^{1/3})$ bandwidth and $O(n/\log^2 n)$ computation per query, and one with $O(n^{1/2 + \epsilon})$ cost both in bandwidth and computation -- both schemes provide information-theoretic security.

In this paper, we show that assuming the existence of one-way functions, we can construct a two-server preprocessing PIR scheme with $\widetilde{O}(n^{1/4})$ bandwidth and $\widetilde{O}(n^{1/2})$ computation per query, while requiring only $\widetilde{O}(n^{1/2})$ client storage. We also construct a new single-server preprocessing PIR scheme with $\widetilde{O}(n^{1/4})$ online bandwidth and $\widetilde{O}(n^{1/2})$ offline bandwidth and computation per query, also requiring $\widetilde{O}(n^{1/2})$ client storage. Specifically, the online bandwidth is the bandwidth required for the client to obtain an answer, and the offline bandwidth can be viewed as background maintenance work amortized to each query. Our new constructions not only advance the theoretical understanding of preprocessing PIR, but are also concretely efficient because the only cryptography needed is pseudorandom functions.
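For context, the sketch below shows the classical textbook two-server PIR without preprocessing, in which both communication and server work are linear in the database size $n$; this is the baseline that preprocessing schemes improve on, not the construction from the paper.

```python
# Minimal textbook two-server PIR (no preprocessing): the client sends each
# server a random-looking subset of indices differing only in the queried
# index, each server XORs the selected bits, and the client XORs the answers.
import secrets

def client_query(n, index):
    s1 = {i for i in range(n) if secrets.randbits(1)}   # random subset
    s2 = s1 ^ {index}                                    # symmetric difference
    return s1, s2

def server_answer(db_bits, subset):
    ans = 0
    for i in subset:          # O(n) work per query -- the cost being avoided
        ans ^= db_bits[i]
    return ans

def reconstruct(a1, a2):
    return a1 ^ a2

if __name__ == "__main__":
    db = [secrets.randbits(1) for _ in range(64)]
    idx = 37
    q1, q2 = client_query(len(db), idx)
    assert reconstruct(server_answer(db, q1), server_answer(db, q2)) == db[idx]
    print("recovered bit:", db[idx])
```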
Thibauld Feneuil, Matthieu Rivain
ePrint Report
The MPC-in-the-Head paradigm is instrumental in building zero-knowledge proof systems and post-quantum signatures using techniques from secure multi-party computation. Many recent works have improved the efficiency of this paradigm. In this work, we improve the recently proposed framework of MPC-in-the-Head based on threshold secret sharing (to appear at Asiacrypt 2023), here called Threshold Computation in the Head (TCitH). We first address the two main limitations of this framework, namely the degradation of the communication cost and the constraint on the number of parties. Our tweak of this framework makes it applicable to previous MPCitH schemes (and in particular post-quantum signature candidates recently submitted to NIST), for which we obtain up to 50% timing improvements without degrading the signature size. Then we extend the TCitH framework to support quadratic (or higher-degree) MPC round functions instead of being limited to linear functions as in the original framework. We show the benefits of our extended framework with several applications. We first propose a generic proof system for polynomial constraints that outperforms former MPCitH-based schemes for proving low-degree arithmetic circuits. Then we apply our extended framework to derive improved variants of the MPCitH candidates submitted to NIST. For most of them, we save between 9% and 35% of the signature size. In particular, we obtain 4.2 KB signatures based on the (non-structured) MQ problem. Finally, we propose a generic way to build efficient post-quantum ring signatures from any one-way function. When applying our TCitH framework to this design with the MQ problem, the obtained scheme outperforms all previous proposals in the state of the art. For instance, our scheme achieves sizes below 6 KB and timings around 10 ms for a ring of 4000 users.
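The following minimal sketch shows the bare MPC-in-the-Head commit-and-open pattern for a linear relation $y = a \cdot x \bmod q$ with additive secret sharing; it is only meant to convey the paradigm and is far from the TCitH framework (no threshold sharing, no repetition, simplistic commitments), with all parameters chosen purely for illustration.

```python
# Bare-bones MPC-in-the-Head flavour for a linear relation y = a*x mod Q:
# the prover additively shares the witness x among N virtual parties, each
# party's "view" is just its share, and the verifier opens all but one view.
import hashlib, secrets

Q, N = 2**61 - 1, 8  # modulus and number of virtual parties (illustrative)

def commit(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def share(x, n):
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % Q)
    return parts

def prove(a, x):
    shares = share(x, N)
    outputs = [(a * s) % Q for s in shares]          # linear map done locally
    commitments = [commit(f"{s}|{o}".encode()) for s, o in zip(shares, outputs)]
    return shares, outputs, commitments

def verify(a, y, outputs, commitments, opened):
    # re-commit the opened views and check the broadcast outputs sum to y
    for i, (s, o) in opened.items():
        if commit(f"{s}|{o}".encode()) != commitments[i] or (a * s) % Q != o:
            return False
    return sum(outputs) % Q == y

if __name__ == "__main__":
    a, x = secrets.randbelow(Q), secrets.randbelow(Q)   # x is the witness
    y = (a * x) % Q                                     # public statement
    shares, outputs, comms = prove(a, x)
    challenge = secrets.randbelow(N)                    # verifier hides one party
    opened = {i: (shares[i], outputs[i]) for i in range(N) if i != challenge}
    print("accept:", verify(a, y, outputs, comms, opened))
```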
Alexander Wagner, Vera Wesselkamp, Felix Oberhansl, Marc Schink, Emanuele Strieder
ePrint Report
Hash-based signature (HBS) schemes are an efficient method of guaranteeing the authenticity of data in a post-quantum world. The stateful schemes LMS and XMSS and the stateless scheme SPHINCS+ are already standardised or will be in the near future. The Winternitz one-time signature (WOTS) scheme is one of the fundamental building blocks used in all these HBS standardisation proposals. We present a new fault injection attack targeting WOTS that allows an adversary to forge signatures for arbitrary messages. The attack affects both the signing and verification processes of all current stateful and stateless schemes. Our attack renders the checksum calculation within WOTS useless. A successful fault injection allows at least an existential forgery attack and, in more advanced settings, a universal forgery attack. While checksum computation is clearly a critical point in WOTS, and thus in any of the relevant HBS schemes, its resilience against a fault attack has never been considered. To fill this gap, we theoretically explain the attack, estimate its practicability, and derive the brute-force complexity to achieve signature forgery for a variety of parameter sets. We analyse the reference implementations of LMS, XMSS and SPHINCS+ and pinpoint the vulnerable points. To harden these implementations, we propose countermeasures and evaluate their effectiveness and efficiency. Our work shows that exposed devices running signature generation or verification with any of these three schemes must have countermeasures in place.
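To make the targeted component concrete, here is a small sketch of the standard WOTS base-$w$ message encoding and checksum computation (with toy parameters and without the hash chains); the checksum is exactly the quantity whose faulty computation the attack exploits.

```python
# The WOTS checksum that the attack targets: message digits are signed by
# advancing hash chains, and the checksum digits grow whenever a message digit
# shrinks, so an attacker cannot lower any digit without raising another.
# Parameters below (w = 16, toy lengths) are illustrative, not a full WOTS.
W = 16  # Winternitz parameter

def base_w(msg_bytes, out_len):
    """Split a byte string into out_len base-w digits (w = 16 -> nibbles)."""
    digits = []
    for byte in msg_bytes:
        digits += [byte >> 4, byte & 0x0F]
    return digits[:out_len]

def wots_checksum(msg_digits):
    csum = sum((W - 1) - d for d in msg_digits)
    csum_digits = []                 # encode the checksum itself in base w
    while csum > 0:
        csum_digits.append(csum % W)
        csum //= W
    return csum_digits or [0]

if __name__ == "__main__":
    digits = base_w(b"\x12\x34\xff", 6)
    print("message digits:", digits)
    print("checksum digits:", wots_checksum(digits))
    # Faulting this checksum computation would let a forger increase message
    # digits without the compensating checksum change -- the gist of the attack.
```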
Hao Fan, Yonglin Hao, Qingju Wang, Xinxin Gong, Lin Jiao
ePrint Report
In cube attacks, key filtering is a basic step of identifying the correct key candidates by referring to the truth tables of superpolies. When superpolies have massive numbers of terms, the truth-table lookup complexity of key filtering increases significantly. In this paper, we propose the concept of implementation dependency, dividing all cube attacks into two categories: implementation dependent and implementation independent. Implementation-dependent cube attacks are only feasible under the assumption that one encryption oracle query is more costly than one table lookup. In contrast, implementation-independent cube attacks remain feasible in the extreme case where the encryption oracle is implemented in the full-codebook manner, making one encryption query equivalent to one table lookup. From this point of view, we scrutinize existing cube attack results on the stream ciphers Trivium, Grain-128AEAD, Acorn and Kreyvium. As a result, many of them turn out to be implementation dependent. Combined with the degree evaluation and divide-and-conquer techniques used for superpoly recovery, we further propose new cube attack results on Kreyvium reduced to 898, 899 and 900 rounds. These new results not only reach the highest number of rounds so far but are also implementation independent.
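A minimal sketch of the key-filtering step, with a made-up three-variable superpoly, is given below; it shows why testing one key candidate costs one truth-table lookup, which is the unit of comparison behind the implementation-dependency notion.

```python
# Toy key filtering in a cube attack: once the superpoly of a cube is known as
# a truth table over a few key bits, each key candidate costs one table lookup
# to test against the observed cube sum. The superpoly here is made up.
from itertools import product

def superpoly(k0, k1, k2):           # pretend recovered superpoly p(k)
    return k0 ^ (k1 & k2)

# Precompute its truth table, indexed by the 3 relevant key bits.
truth_table = {bits: superpoly(*bits) for bits in product((0, 1), repeat=3)}

def filter_keys(observed_cube_sum):
    """Keep only key candidates whose superpoly value matches the observation."""
    return [bits for bits, val in truth_table.items() if val == observed_cube_sum]

if __name__ == "__main__":
    secret = (1, 0, 1)
    observed = superpoly(*secret)    # in a real attack: XOR of outputs over the cube
    print("observed cube sum:", observed)
    print("surviving key candidates:", filter_keys(observed))
```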
Nils Fleischhacker, Mathias Hall-Andersen, Mark Simkin, Benedikt Wagner
ePrint Report
In proof-of-stake blockchains, liveness is ensured by repeatedly selecting random groups of parties from among all participants as leaders, who are then in charge of proposing new blocks and driving consensus forward. The lotteries that elect those leaders need to ensure that adversarial parties are not elected disproportionately often and that an adversary cannot tell who was elected before those parties decide to speak, as this would potentially allow for denial-of-service attacks. Whenever an elected party speaks, it needs to provide a winning lottery ticket, which proves that the party did indeed win the lottery. Current solutions require all published winning tickets to be stored individually on-chain, which introduces undesirable storage overheads.
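For intuition, the sketch below shows the generic hash-threshold style of lottery that such leader elections build on: a party wins a slot if a pseudorandom value derived from its key and the slot seed falls under a stake-dependent threshold. Real constructions, including the one in this work, use verifiable random functions and provide the aggregation property discussed below, which this toy does not.

```python
# Generic hash-threshold lottery (toy): the "winning ticket" is a hash value
# below a threshold proportional to the party's stake. Real protocols use VRFs
# so the ticket is publicly verifiable without revealing the secret key.
import hashlib

PRECISION = 2**256

def ticket(secret_key: bytes, seed: bytes) -> int:
    return int.from_bytes(hashlib.sha256(secret_key + seed).digest(), "big")

def wins(secret_key: bytes, seed: bytes, stake: float, difficulty: float) -> bool:
    # win with probability roughly stake * difficulty
    return ticket(secret_key, seed) < int(stake * difficulty * PRECISION)

if __name__ == "__main__":
    seed = b"slot-42"
    winners = [i for i in range(100)
               if wins(i.to_bytes(4, "big"), seed, stake=0.05, difficulty=1.0)]
    print("elected parties:", winners)
```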

In this work, we introduce non-interactive aggregatable lotteries and show how these can be constructed efficiently. Our lotteries provide the same security guarantees as previous lottery constructions, but additionally allow any third party to take a set of published winning tickets and aggregate them into one short digest. We provide a formal model of our new primitive in the universal composability framework.

As one of our main technical contributions, which may be of independent interest, we introduce aggregatable vector commitments with simulation-extractability and present a concretely efficient construction thereof in the algebraic group model in the presence of a random oracle. We show how these commitments can be used to construct non-interactive aggregatable lotteries.

We have implemented our construction, called Jackpot, and provide benchmarks that underline its concrete efficiency.
Giuseppe Ateniese, Foteini Baldimtsi, Matteo Campanelli, Danilo Francati, Ioanna Karantaidou
ePrint Report
Proof-of-Replication (PoRep) plays a pivotal role in decentralized storage networks, serving as a mechanism to verify that provers consistently store retrievable copies of specific data. While PoRep’s utility is unquestionable, its implementation in large-scale systems, such as Filecoin, has been hindered by scalability challenges. Most existing PoRep schemes, such as Fisch’s (Eurocrypt 2019), face an escalating number of challenges and growing computational overhead as the number of stored files increases. This paper introduces a novel PoRep scheme distinctively tailored for expansive decentralized storage networks. At its core, our approach hinges on polynomial evaluation, diverging from the probabilistic checking prevalent in prior works. Remarkably, our design requires only a single challenge, irrespective of the number of files, ensuring both prover’s and verifier’s run-times remain manageable even as file counts soar. Our approach introduces a paradigm shift in PoRep designs, offering a blueprint for highly scalable and efficient decentralized storage solutions.
Andre Esser, Paolo Santini
ePrint Report
Cryptographic constructions often base security on structured problem variants to enhance efficiency or to enable advanced functionalities. This led to the introduction of the Regular Syndrome Decoding (RSD) problem, which guarantees that a solution to the Syndrome Decoding (SD) problem follows a particular block-wise structure. Despite recent attacks exploiting that structure by Briaud and Øygarden (Eurocrypt ’23) and Carozza, Couteau and Joux (CCJ, Eurocrypt ’23), many questions about the impact of the regular structure on the problem hardness remain open.

In this work we initiate a systematic study of the hardness of the RSD problem, starting from its asymptotics. We classify different parameter regimes, revealing large regimes in which RSD instances are solvable in polynomial time and, on the other hand, regimes that lead to particularly hard instances. Contrary to previous perceptions, we show that a classification based solely on the uniqueness of the solution is not sufficient for isolating the worst-case parameters. Further, we provide an in-depth comparison between SD and RSD in terms of reducibility and computational complexity, identifying regimes in which RSD instances are actually harder to solve.

We provide the first asymptotic analysis of the CCJ algorithm, establishing its worst-case decoding complexity as $2^{0.141n}$. We then introduce \emph{regular-ISD} algorithms by showing how to tailor the whole machinery of advanced Information Set Decoding (ISD) techniques from attacking SD to the RSD setting. The fastest regular-ISD algorithm improves the worst-case decoding complexity significantly to $2^{0.112n}$. Finally, we show that regular-ISD also outperforms previous approaches for most suggested parameter sets, reducing security levels by up to 40 bits.
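For readers unfamiliar with the regular structure, the sketch below generates a toy RSD instance over GF(2): the error vector carries exactly one nonzero entry per length-$n/w$ block. Sizes are illustrative and unrelated to the parameters analyzed in the paper.

```python
# Toy generator for a Regular Syndrome Decoding (RSD) instance over GF(2):
# the error vector has exactly one 1 in each of the w consecutive blocks of
# length n/w, which is the block-wise structure the abstract refers to.
import numpy as np

def random_rsd_instance(n=24, k=12, w=4, seed=0):
    rng = np.random.default_rng(seed)
    block = n // w
    e = np.zeros(n, dtype=np.uint8)
    for b in range(w):                       # one error position per block
        e[b * block + rng.integers(block)] = 1
    H = rng.integers(0, 2, size=(n - k, n), dtype=np.uint8)  # parity-check matrix
    s = H.dot(e) % 2                                          # syndrome
    return H, s, e

if __name__ == "__main__":
    H, s, e = random_rsd_instance()
    print("regular error vector:", e)
    print("syndrome:", s)
```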
Yujin Yang, Kyungbae Jang, Yujin Oh, Hwajeong Seo
ePrint Report
The advancement of large-scale quantum computers poses a threat to the security of current encryption systems. In particular, symmetric-key cryptography is significantly impacted by generic attacks using Grover's search algorithm. In recent years, studies have been presented that estimate the complexity of Grover's key search for symmetric-key ciphers and assess their post-quantum security. In this paper, we propose a depth-optimized quantum circuit implementation for ARIA, a symmetric-key cipher included as a validation target in the Korean Cryptographic Module Validation Program (KCMVP). Our quantum circuit implementation for ARIA improves the depth by more than 88.2% and the Toffoli depth by more than 98.7% compared to the implementation presented in Chauhan et al.'s SPACE'20 paper. Finally, we present the cost of Grover's key search for our circuit and evaluate the post-quantum security strength of ARIA according to the relevant evaluation criteria provided by NIST.
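The kind of back-of-the-envelope arithmetic behind such estimates is sketched below: a $k$-bit key requires roughly $\lfloor \frac{\pi}{4} 2^{k/2} \rfloor$ Grover iterations, and the total depth is that count times the depth of one Grover iteration. The per-iteration depth used here is a placeholder, not the ARIA figure from the paper.

```python
# Back-of-the-envelope Grover cost arithmetic: floor(pi/4 * 2^(k/2)) iterations
# for a k-bit key, multiplied by the depth of one Grover iteration (the cipher
# run as an oracle plus diffusion). The oracle depth below is hypothetical.
from math import floor, log2, pi

def grover_iterations(key_bits: int) -> int:
    return floor(pi / 4 * 2 ** (key_bits / 2))

def log2_total_depth(key_bits: int, depth_per_iteration: int) -> float:
    return log2(grover_iterations(key_bits)) + log2(depth_per_iteration)

if __name__ == "__main__":
    ORACLE_DEPTH = 2**12          # made-up depth of one Grover iteration
    for k in (128, 192, 256):
        print(f"{k}-bit key: ~2^{log2(grover_iterations(k)):.1f} iterations, "
              f"total depth ~2^{log2_total_depth(k, ORACLE_DEPTH):.1f}")
```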
Yujin Oh, Kyungbae Jang, Yujin Yang, Hwajeong Seo
ePrint Report
With the advancement of quantum computers, it has been demonstrated that Shor's algorithm enables attacks on public-key cryptography in polynomial time. In response, NIST conducted a Post-Quantum Cryptography standardization competition. Additionally, since Grover's algorithm can reduce the complexity of attacks on symmetric-key cryptography to its square root, it is increasingly difficult to regard symmetric-key cryptography as secure without re-evaluation. Establishing secure post-quantum cryptographic systems therefore requires evaluating the post-quantum security of cryptographic algorithms against quantum attacks. Consequently, NIST is estimating post-quantum security strengths, driving active research in quantum cryptanalysis. In this regard, this paper presents a depth-optimized quantum circuit implementation for SEED, a symmetric-key encryption algorithm included in the Korean Cryptographic Module Validation Program (KCMVP). Building upon our implementation, we conduct a thorough assessment of the post-quantum security of SEED. Our implementation of SEED represents the first quantum circuit implementation of this cipher.
Hyunji Kim, Kyoungbae Jang, Yujin Oh, Woojin Seok, Wonhuck Lee, Kwangil Bae, Ilkwon Sohn, Hwajeong Seo
ePrint Report
Quantum computers, especially those with over 10,000 qubits, pose a potential threat to current public-key cryptography systems like RSA and ECC due to Shor's algorithm. Grover's search algorithm is another quantum algorithm that could significantly impact current cryptography, offering a quantum advantage in searching unsorted data. Therefore, with the advancement of quantum computers, it is crucial to analyze potential quantum threats.

While many works focus on Grover-based attacks on symmetric-key cryptography, there has been no research on practical quantum implementations targeting lattice-based cryptography. Currently, only theoretical analyses consider the application of Grover's search to various sieve algorithms.

In this work, for the first time, we present a quantum NV Sieve implementation to solve SVP, posing a threat to lattice-based cryptography. Additionally, we implement the extended version of the quantum NV Sieve (i.e., the dimension and rank of the lattice vector). Our extended implementation could be instrumental in extending the upper limit of SVP (currently, determining the upper limit of SVP is a vital factor). Lastly, we estimate the quantum resources required for each specific implementation and the application of Grover's search.

In conclusion, our research lays the groundwork for the quantum NV Sieve to challenge lattice-based cryptography. In the future, we aim to conduct various experiments concerning the extended implementation and Grover's search.
Binwu Xiang, Jiang Zhang, Yi Deng, Yiran Dai, Dengguo Feng
ePrint Report
Blind rotation is one of the key techniques used to construct fully homomorphic encryption schemes whose best known bootstrapping algorithms run in less than one second. Currently, the two main approaches for realizing blind rotation, namely AP and GINX, were first introduced by Alperin-Sheriff and Peikert (CRYPTO 2014) and by Gama, Izabachène, Nguyen and Xie (EUROCRYPT 2016), respectively.

In this paper, we propose a new blind rotation algorithm based on a GSW-like encryption from the NTRU assumption. Our algorithm's performance is asymptotically independent of the key distribution, and it outperforms AP and GINX in both evaluation key size and computational efficiency (especially for large key distributions). Using our blind rotation algorithm as a building block, we present new bootstrapping algorithms for both LWE and RLWE ciphertexts.

We implement our bootstrapping algorithm for LWE ciphertexts and compare its actual performance with two bootstrapping algorithms, namely FHEW/AP by Ducas and Micciancio (EUROCRYPT 2015) and TFHE/GINX by Chillotti, Gama, Georgieva and Izabachène (Journal of Cryptology 2020), as implemented in the OpenFHE library. For parameters with a ternary key distribution at 128-bit security, our bootstrapping only needs to store an evaluation key of size 18.65 MB for blind rotation, which is about 89.8 times smaller than FHEW/AP and 2.9 times smaller than TFHE/GINX. Moreover, our bootstrapping can be done in 112 ms on a laptop, which is about 3.2 times faster than FHEW/AP and 2.1 times faster than TFHE/GINX. Further improvements are available for large key distributions such as Gaussian distributions.
Akira Ito, Rei Ueno, Rikuma Tanaka, Naofumi Homma
ePrint Report
This paper formally analyzes two major non-profiled deep-learning-based side-channel attacks (DL-SCAs): differential deep-learning analysis (DDLA) by Timon and collision DL-SCA by Staib and Moradi. These DL-SCAs leverage supervised learning in non-profiled scenarios. Although some intuitive descriptions of these DL-SCAs exist, formal analyses of them have rarely been conducted, which makes it unclear why and when the attacks succeed and how they can be improved. In this paper, we provide the first information-theoretical analysis of DDLA. We reveal its relationship to mutual information analysis (MIA), and then present three theorems stating limitations and impossibility results for DDLA. Subsequently, we provide the first probability-theoretical analysis of collision DL-SCA. After formalizing it and proposing our own distinguisher for collision DL-SCA, we prove the distinguisher's optimality: the collision DL-SCA using our distinguisher theoretically maximizes the success rate if the neural network (NN) training is completely successful (namely, the NN perfectly imitates the true conditional probability distribution). Accordingly, we propose an improvement of collision DL-SCA based on a dedicated NN architecture and a full-key recovery methodology using multiple neural distinguishers. Finally, we experimentally evaluate non-profiled (DL-)SCAs using a newly created dataset based on a publicly available first-order masked AES implementation. The existing public dataset of side-channel traces is insufficient for evaluating collision DL-SCAs due to a lack of side-channel traces for different key values. Our dataset enables a comprehensive evaluation of collision (DL-)SCAs, which clarifies the current state of non-profiled (DL-)SCAs.
Yansong Feng, Abderrahmane Nitaj, Yanbin Pan
ePrint Report
The Implicit Factorization Problem (IFP) was first introduced by May and Ritzenhofen at PKC'09, which concerns the factorization of two RSA moduli $N_1=p_1q_1$ and $N_2=p_2q_2$, where $p_1$ and $p_2$ share a certain consecutive number of least significant bits. Since its introduction, many different variants of IFP have been considered, such as the cases where $p_1$ and $p_2$ share most significant bits or middle bits at the same positions. In this paper, we consider a more generalized case of IFP, in which the shared consecutive bits can be located at $any$ positions in each prime, not necessarily required to be located at the same positions as before. We propose a lattice-based algorithm to solve this problem under specific conditions, and also provide some experimental results to verify our analysis.
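A toy instance generator for the original shared-least-significant-bits setting is sketched below (the paper's generalization allows the shared run of bits to sit at arbitrary, possibly different, positions in each prime); bit lengths are deliberately tiny and illustrative.

```python
# Toy generator for an Implicit Factorization Problem instance in the
# May-Ritzenhofen setting: p1 and p2 share their t least significant bits.
from sympy import isprime, randprime
import secrets

def prime_with_lsb(bits, t, low):
    """Find a prime of `bits` bits whose t least significant bits equal `low`."""
    while True:
        high = secrets.randbits(bits - t) | (1 << (bits - t - 1))
        p = (high << t) | low
        if isprime(p):
            return p

def ifp_instance(bits=64, t=24):
    low = secrets.randbits(t) | 1              # shared t least significant bits (odd)
    p1, p2 = prime_with_lsb(bits, t, low), prime_with_lsb(bits, t, low)
    q1, q2 = randprime(2**(bits - 1), 2**bits), randprime(2**(bits - 1), 2**bits)
    return p1 * q1, p2 * q2, low

if __name__ == "__main__":
    N1, N2, low = ifp_instance()
    print("N1 =", N1)
    print("N2 =", N2)
    print("shared low bits:", bin(low))
```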
Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, Farimah Farahmandi
ePrint Report
As the ubiquity and complexity of system-on-chip (SoC) designs increase across electronic devices, the task of incorporating security into the SoC design flow poses significant challenges. Existing security solutions are inadequate for effective verification of modern SoC designs due to their limitations in scalability, comprehensiveness, and adaptability. On the other hand, Large Language Models (LLMs) are celebrated for their remarkable success in natural language understanding, advanced reasoning, and program synthesis tasks. Recognizing an opportunity, our research delves into leveraging the emergent capabilities of Generative Pre-trained Transformers (GPTs) to address the existing gaps in SoC security, aiming for a more efficient, scalable, and adaptable methodology. By integrating LLMs into the SoC security verification paradigm, we open a new frontier of possibilities and challenges to ensure the security of increasingly complex SoCs. This paper offers an in-depth analysis of existing works, showcases practical case studies, presents comprehensive experiments, and provides useful guidelines. We also present the achievements, prospects, and challenges of employing LLMs in different SoC security verification tasks.
Samuel Hand, Alexander Koch, Pascal Lafourcade, Daiki Miyahara, Léo Robert
ePrint Report
A zero-knowledge proof (ZKP) allows a party to prove to another party that it knows some secret, such as the solution to a difficult puzzle, without revealing any information about it. We propose a physical zero-knowledge proof using only a deck of playing cards for solutions to a pencil puzzle called \emph{Moon-or-Sun}. In this puzzle, one is given a grid of cells on which rooms, marked by thick black lines surrounding a connected set of cells, may contain a number of cells with a moon or a sun symbol. The goal is to find a loop that passes through all rooms exactly once and, in each room, passes through either all cells with a moon symbol or all cells with a sun symbol. Finally, whenever the loop passes from one room to another, it must go through all cells with a moon if in the previous room it passed through all cells with a sun, and vice versa. This last rule constitutes the main challenge for finding a physical zero-knowledge proof for this puzzle, as it must be verified without giving away through which borders the loop enters or leaves a given room. We design a card-based zero-knowledge proof of knowledge protocol for Moon-or-Sun solutions, together with an analysis of its properties. Our technique of verifying the alternation of a pattern along a non-disclosed path might be of independent interest for similar puzzles.

12 October 2023

Toronto, Canada, 23 March - 24 March 2024
Event Calendar
Event date: 23 March to 24 March 2024
Submission deadline: 22 November 2023
Notification: 22 January 2024