International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

13 June 2024

Sathvika Balumuri, Edward Eaton, Philippe Lamontagne
ePrint Report
Key blinding produces pseudonymous digital identities by rerandomizing the public keys of a digital signature scheme. It is used in anonymous networks to provide the seemingly contradictory goals of anonymity and authentication. Current key blinding schemes are based on the discrete log assumption. Eaton, Stebila and Stracovsky (LATINCRYPT 2021) proposed the first key blinding schemes from lattice assumptions. However, the large public keys and lack of QROM security mean they are not ready to replace existing solutions.

We present a new way to build key blinding schemes from any MPC-in-the-Head signature scheme. These schemes rely on well-studied symmetric cryptographic primitives and admit short public keys. We give a general framework for constructing key blinding schemes and for proving their security in the quantum random oracle model (QROM).

We instantiate our framework with the recent AES-based Helium signature scheme (Kales and Zaverucha, 2022). Blinding Helium only adds a minor overhead to the signature and verification time. Both Helium and the aforementioned lattice-based key blinding schemes were only proven secure in the ROM. This makes our results the first QROM proof of Helium and the first fully quantum-safe public key blinding scheme.
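
For readers new to the primitive, here is a minimal sketch of the key blinding idea in a discrete-log setting (the setting of the prior schemes mentioned above, not the MPC-in-the-Head approach of this paper); the toy group parameters and the hash-based derivation of the blinding factor are illustrative assumptions only.

```python
# Toy illustration of key blinding in a prime-order subgroup of Z_p^*.
# Parameters are tiny and insecure; real schemes use elliptic-curve or
# MPCitH/lattice constructions and derive blinding factors per epoch.
import hashlib
import secrets

p, q, g = 23, 11, 4          # toy group: <g> has prime order q inside Z_p^*

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def blind_factor(pk, epoch):
    # derive a public, per-epoch blinding scalar from the public key and epoch
    h = hashlib.sha256(f"{pk}|{epoch}".encode()).digest()
    return int.from_bytes(h, "big") % (q - 1) + 1

def blind_keys(sk, pk, epoch):
    r = blind_factor(pk, epoch)
    return (sk * r) % q, pow(pk, r, p)   # blinded signing key and public key

sk, pk = keygen()
bsk, bpk = blind_keys(sk, pk, "epoch-1")
assert pow(g, bsk, p) == bpk             # blinded pair is again a valid key pair
```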

Navid Alamati, Varun Maram
ePrint Report
Security against chosen-ciphertext attacks (CCA) concerns privacy of messages even if the adversary has access to the decryption oracle. While the classical notion of CCA security seems to be strong enough to capture many attack scenarios, it falls short of preserving the privacy of messages in the presence of quantum decryption queries, i.e., when an adversary can query a superposition of ciphertexts.

Boneh and Zhandry (CRYPTO 2013) defined the notion of quantum CCA (qCCA) security to guarantee privacy of messages in the presence of quantum decryption queries. However, their construction is based on an exotic cryptographic primitive (namely, identity-based encryption with security against quantum queries), for which only one instantiation is known. In this work, we comprehensively study qCCA security for public-key encryption (PKE) based on both generic cryptographic primitives and concrete assumptions, yielding the following results:

* We show that key-dependent message secure encryption (along with PKE) is sufficient to realize qCCA-secure PKE. This yields the first construction of qCCA-secure PKE from the LPN assumption.

* We prove that hash proof systems imply qCCA-secure PKE, which results in the first instantiation of PKE with qCCA security from (isogeny-based) group actions.

* We extend the notion of adaptive TDFs (ATDFs) to the quantum setting by introducing quantum ATDFs, and we prove that quantum ATDFs are sufficient to realize qCCA-secure PKE. We also show how to instantiate quantum ATDFs from the LWE assumption.

* We show that a single-bit qCCA-secure PKE is sufficient to realize a multi-bit qCCA-secure PKE by extending the completeness of bit encryption for CCA security to the quantum setting.

Chaya Ganesh, Vineet Nair, Ashish Sharma
ePrint Report
We introduce a primitive called a dual polynomial commitment scheme that allows linking together a witness committed to using a univariate polynomial commitment scheme with a witness inside a multilinear polynomial commitment scheme. This yields commit-and-prove (CP) SNARKs with the flexibility of going back and forth between univariate and multilinear encodings of witnesses. This is in contrast to existing CP frameworks, which assume compatible polynomial commitment schemes between the different component proof systems. In addition to applications to CP, we also show that our notion yields a version of Spartan with better proof size and verification complexity, at the cost of a more expensive prover.

We achieve this via a combination of the following technical contributions: (i) we construct a new univariate commitment scheme in the updatable SRS setting that has better prover complexity than KZG; (ii) we construct a new multilinear commitment scheme in the updatable setting that is compatible for linking with our univariate scheme; and (iii) we construct an argument of knowledge to prove a given linear relationship between two witnesses committed using a two-tiered commitment scheme (Pedersen+AFG), using Dory as a black box. These constructions are of independent interest.

We implement our commitment schemes and report on performance. We also implement the version of Spartan with our dual polynomial commitment scheme and demonstrate that it outperforms Spartan in proof size and verification complexity.

Riccardo Taiello, Melek Önen, Clémentine Gritti, Marco Lorenzi
ePrint Report
Secure Aggregation (SA) stands as a crucial component in modern Federated Learning (FL) systems, facilitating collaborative training of a global machine learning model while protecting the privacy of individual clients' local datasets. Many existing SA protocols described in the FL literature operate synchronously, leading to notable runtime slowdowns due to the presence of stragglers (i.e., late-arriving clients). To address this challenge, one common approach is to treat stragglers as client failures and use SA solutions that are robust against dropouts. While this approach indeed works, it unfortunately affects the performance of the protocol, as its cost strongly depends on the dropout ratio, and this ratio increases significantly when stragglers are taken into account. Another approach explored in the literature to address stragglers is to introduce asynchronicity into the FL system. Very few SA solutions exist in this setting, and they currently suffer from high overhead.

In this paper, similar to related work, we propose to handle stragglers as client failures, but we design SA solutions whose cost does not depend on the dropout ratio, so that an unavoidable increase in this ratio does not affect the performance of the solution. We first introduce Eagle, a synchronous SA scheme designed to depend not on client failures but only on the online users' inputs. This approach offers better computation and communication costs than existing solutions under realistic settings where the number of stragglers is high. We then propose Owl, the first SA solution that is suitable for the asynchronous setting and, once again, considers only online clients' contributions. We implement both solutions and show that: (i) in a synchronous FL setting with realistic dropout rates (taking potential stragglers into account), Eagle outperforms the best existing SA solution, namely Flamingo, by a factor of 4; (ii) in the asynchronous setting, Owl exhibits the best performance compared to the state-of-the-art solution LightSecAgg.
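
As background only (and not a description of Eagle or Owl), the classic pairwise-masking idea underlying many SA protocols can be sketched as follows; all parameters are illustrative, and dropout handling, which is the crux of the protocols above, is omitted.

```python
# Toy pairwise-masking secure aggregation: each pair (i, j) shares a mask that
# client i adds and client j subtracts, so the masks cancel in the server's sum.
# Real protocols derive masks via key agreement and handle dropouts; omitted here.
import random

Q = 2**32            # values are aggregated modulo Q
NUM_CLIENTS = 5

inputs = [random.randrange(100) for _ in range(NUM_CLIENTS)]

# pairwise masks m[i][j] with m[j][i] = -m[i][j]  (mod Q)
masks = [[0] * NUM_CLIENTS for _ in range(NUM_CLIENTS)]
for i in range(NUM_CLIENTS):
    for j in range(i + 1, NUM_CLIENTS):
        m = random.randrange(Q)
        masks[i][j] = m
        masks[j][i] = (-m) % Q

# each client sends a single masked value to the server
reports = [(inputs[i] + sum(masks[i])) % Q for i in range(NUM_CLIENTS)]

# the server sums the reports; all pairwise masks cancel
assert sum(reports) % Q == sum(inputs) % Q
```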

King's College London
Job Posting

The candidate will work alongside Prof. Martin Albrecht, Dr. Benjamin Dowling, Dr. Rikke Bjerg Jensen (Royal Holloway University of London) and Dr. Andrea Medrado (Exeter) on establishing social foundations of cryptography in protest settings. In particular, the candidate will work with a multi-disciplinary team of cryptographers (Dowling, Albrecht) and ethnographers (Jensen, Medrado) to understand the security needs of participants in protests, to formalise these needs as cryptographic security notions and to design or analyse cryptographic solutions with respect to these notions.

This position is part of the EPSRC-funded project “Social Foundations of Cryptography” and more information is available at https://social-foundations-of-cryptography.gitlab.io/.

In brief, ethnography is a social science method involving prolonged fieldwork, i.e. staying with the group under study, to observe not only what they say but also what their social reality and practice is. In this project, we are putting cryptography at the mercy of ethnographic findings, allowing them to shape what we model.

Closing date for applications:

Contact: Martin Albrecht <martin.albrecht@kcl.ac.uk>

More information: https://martinralbrecht.wordpress.com/2024/06/11/cryptography-postdoc-position-in-social-foundations-of-cryptography/


12 June 2024

Xuanming Liu, Jiawen Zhang, Yinghao Wang, Xinpeng Yang, Xiaohu Yang
ePrint Report
The trading of data is becoming increasingly important as it holds substantial value. A blockchain-based data marketplace can provide a secure and transparent platform for data exchange. To facilitate this, developing a fair data exchange protocol for digital goods has garnered considerable attention in recent decades. The Zero Knowledge Contingent Payment (ZKCP) protocol enables trustless fair exchanges with the aid of blockchain and zero-knowledge proofs. However, applying this protocol in a practical data marketplace is not trivial.

In this paper, several potential attacks are identified when applying the ZKCP protocol in a practical public data marketplace. To address these issues, we propose SmartZKCP, an enhanced solution that offers improved security measures and better performance. The protocol is formalized to ensure fairness and security against potential attacks. Moreover, SmartZKCP offers efficiency optimizations and minimized communication costs. Evaluation results show that SmartZKCP is both practical and efficient, making it applicable in a data exchange marketplace.
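
For readers unfamiliar with ZKCP, the following heavily simplified sketch shows the basic exchange flow the protocol builds on; the zero-knowledge proof, the on-chain contract logic, and SmartZKCP's hardening are all omitted or replaced by placeholders, and every name below is illustrative.

```python
# Toy zero-knowledge contingent payment flow (greatly simplified):
# 1. Seller encrypts the data under key k and publishes ciphertext + H(k),
#    together with a ZK proof (omitted) that the ciphertext decrypts to valid data.
# 2. Buyer locks payment under the hash H(k) in a contract.
# 3. Seller claims the payment by revealing k on-chain; buyer then decrypts.
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # toy stream cipher from SHA-256 in counter mode; illustrative only
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

data = b"the dataset being sold"
k = os.urandom(32)
ciphertext = keystream_xor(k, data)
hash_lock = hashlib.sha256(k).hexdigest()      # published with the ciphertext
# ... buyer verifies the (omitted) ZK proof and locks funds under hash_lock ...

revealed = k                                   # seller reveals k to claim payment
assert hashlib.sha256(revealed).hexdigest() == hash_lock
assert keystream_xor(revealed, ciphertext) == data
```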

Xuanming Liu, Zhelei Zhou, Yinghao Wang, Jinye He, Bingsheng Zhang, Xiaohu Yang, Jiaheng Zhang
ePrint Report
Collaborative zk-SNARK (USENIX'22) allows multiple parties to jointly create a zk-SNARK proof over distributed secrets (also known as the witness). It provides a promising approach to proof outsourcing, where a client wishes to delegate the tedious task of proof generation to many servers from different locations, while ensuring no corrupted server can learn its witness (USENIX'23). Unfortunately, existing work suffers from a significant efficiency problem, as the protocols rely heavily on a particularly powerful server and thus face challenges in achieving scalability for complex applications.

In this work, we address this problem by extending the existing zk-SNARKs Libra (Crypto'19) and HyperPlonk (Eurocrypt'23) into scalable collaborative zk-SNARKs. Crucially, our collaborative proof generation does not require a powerful server, and all servers take up roughly the same proportion of the total workload. In this way, we achieve privacy and scalability simultaneously for the first time in proof outsourcing. To achieve this, we develop an efficient MPC toolbox for a number of useful multivariate polynomial primitives, including sumcheck, productcheck, and multilinear polynomial commitments, which may be of independent interest for other applications. For proof outsourcing purposes, when using $128$ servers to jointly generate a proof for a circuit size of $2^{24}$ gates, our benchmarks for these two collaborative proofs show speedups of $21\times$ and $24\times$ compared to a local prover, respectively. Furthermore, we are able to handle enormously large circuits, making our approach practical for real-world applications.
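
As background on one of the primitives in the toolbox above, here is a minimal single-prover sumcheck for a multilinear polynomial given by its evaluations on the Boolean hypercube; the MPC-distributed version developed in the paper is not reproduced, and the field choice is an illustrative assumption.

```python
# Minimal sumcheck for a multilinear polynomial f over GF(P), given as the list
# of its 2^v evaluations on {0,1}^v (index bits, most significant first, are
# x_1 ... x_v). Because f is multilinear, each round polynomial is linear,
# so the prover only sends (g(0), g(1)).
import random

P = 2**61 - 1  # a Mersenne prime; toy field, not the fields used in practice

def sumcheck(evals, num_vars):
    claim = sum(evals) % P                    # claimed sum of f over {0,1}^v
    table = [e % P for e in evals]
    challenges = []
    for _ in range(num_vars):
        half = len(table) // 2
        lo, hi = table[:half], table[half:]   # current variable = 0 / = 1
        g0, g1 = sum(lo) % P, sum(hi) % P     # round polynomial at 0 and at 1
        assert (g0 + g1) % P == claim         # verifier's round check
        r = random.randrange(P)               # verifier's challenge
        challenges.append(r)
        claim = (g0 + r * (g1 - g0)) % P      # g(r), since g is linear
        table = [((1 - r) * a + r * b) % P for a, b in zip(lo, hi)]
    assert table[0] == claim                  # final check: f~(r_1, ..., r_v)
    return challenges, claim

sumcheck([random.randrange(P) for _ in range(2 ** 4)], 4)
```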

A. Telveenus
ePrint Report
RSA is a very popular cryptosystem in the study of cryptography. In this article, we explore how the idea of a primitive $m$-th root of unity in a ring can be integrated into the Discrete Fourier Transform, leading to the development of new cryptosystems known as RSA-DFT and RSA-HGR.
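
As context for the main ingredient, a primitive $m$-th root of unity in a ring defining a Discrete Fourier Transform can be sketched as a naive number-theoretic transform over $\mathbb{Z}_p$; the toy parameters below are illustrative, and the RSA-DFT construction itself is not reproduced.

```python
# Naive DFT over Z_p using a primitive m-th root of unity (a number-theoretic
# transform). Toy parameters: p = 17, m = 8, and omega = 9 has order 8 mod 17.
p, m, omega = 17, 8, 9
assert pow(omega, m, p) == 1 and all(pow(omega, k, p) != 1 for k in range(1, m))

def dft(x):
    return [sum(x[j] * pow(omega, j * k, p) for j in range(m)) % p
            for k in range(m)]

def idft(X):
    m_inv = pow(m, -1, p)                 # m must be invertible mod p
    w_inv = pow(omega, -1, p)
    return [m_inv * sum(X[k] * pow(w_inv, j * k, p) for k in range(m)) % p
            for j in range(m)]

x = [3, 1, 4, 1, 5, 9, 2, 6]
assert idft(dft(x)) == x                  # the transform round-trips
```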

Zoë Ruha Bell, Shafi Goldwasser, Michael P. Kim, Jean-Luc Watson
ePrint Report
In past years, entire research communities have arisen to address concerns of privacy and fairness in data analysis. At present, however, the public must trust that institutions will re-implement algorithms voluntarily to account for these social concerns. Due to additional cost, widespread adoption is unlikely without effective legal enforcement. A technical challenge for enforcement is that the methods proposed are often probabilistic mechanisms, whose output must be drawn according to precise, and sometimes secret, distributions. The Differential Privacy (DP) case is illustrative: if a cheating curator answers queries according to an overly-accurate mechanism, privacy violations could go undetected. The need for effective enforcement raises the central question of our paper: Can we efficiently certify the output of a probabilistic mechanism enacted by an untrusted party? To this end:

(1) We introduce two new notions: Certified Probabilistic Mechanisms (CPM) and Random Variable Commitment Schemes (RVCS). A CPM is an interactive protocol that forces a prover to enact a given probabilistic mechanism or be caught; importantly, the interaction does not reveal secret parameters of the mechanism. An RVCS—a key primitive for constructing CPMs—is a commitment scheme where the verifier is convinced that the commitment is to an RV sampled according to an agreed-upon distribution, but learns nothing else.

(2) We instantiate the general notion of CPM for the special case of Certifying DP. We build a lightweight, doubly-efficient interactive proof system to certify arbitrary-predicate counting queries released via the DP Binomial mechanism. The construction relies on a commitment scheme with perfect hiding and additive homomorphic properties that can be used to release a broad class of queries about a committed database, which we construct on top of Pedersen commitments (a toy illustration of this additive homomorphism follows item (3) below).

(3) Finally, we demonstrate the immediate feasibility of Certified DP via a highly-efficient and scalable prototype implementation to answer counting queries of arbitrary predicates. The mechanism is composed of an offline and online stage, where the online phase allows for non-interactive certification of queries. For example, we show that CDP queries over a US Census Public Use Microdata Sample (PUMS) ($n=7000$) can be completed in only 1.6 ms and verified in just 38 $\mu \text{s}$. Our implementation is available in open source at https://github.com/jlwatson/certified-dp.
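
As a reminder of the commitment property used in item (2), Pedersen commitments are additively homomorphic: multiplying commitments yields a commitment to the sum of the committed values, which is what allows counting queries over a committed database to be checked. The sketch below uses toy parameters; real deployments use elliptic-curve groups, and the certification protocol itself is not shown.

```python
# Toy Pedersen commitments in a subgroup of Z_p^* of prime order q:
# Com(v; r) = g^v * h^r mod p. Multiplying commitments adds the values and
# the randomness, so a product of per-record commitments commits to a count.
import secrets

p, q, g, h = 23, 11, 4, 9      # toy parameters; in practice dlog_g(h) must be unknown

def commit(value, rand):
    return (pow(g, value, p) * pow(h, rand, p)) % p

bits = [1, 0, 1, 1, 0]                                  # toy per-record predicate values
rands = [secrets.randbelow(q) for _ in bits]
coms = [commit(b, r) for b, r in zip(bits, rands)]

agg = 1
for c in coms:
    agg = (agg * c) % p                                 # homomorphic aggregation

count, rand_sum = sum(bits), sum(rands) % q
assert agg == commit(count, rand_sum)                   # commitment to the count
```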

Keyu Ji, Bingsheng Zhang, Hong-Sheng Zhou, Kui Ren
ePrint Report
Distributed Point Function (DPF) provides a way for a dealer to split a point function $f_{\alpha, \beta}$ into multiple succinctly described function-shares, where the function $f_{\alpha, \beta}$ for a special input $\alpha$, returns a special output value $\beta$, and returns a fixed value $0$ otherwise. As the security requirement, any strict subset of the function-shares reveals nothing about the function $f_{\alpha,\beta}$. However, each function-share can be individually evaluated on the common input $x$, and these evaluation results can then be merged together to reconstruct the value $f_{\alpha, \beta}(x)$.

Recently, Servan-Schreiber et al. (S&P 2023) investigated the access control problem for DPF; namely, the DPF evaluators can ensure that the DPF dealer is authorized to share the given function, with privacy assurance. In this work, we revisit this problem, introducing a new notion called DPF with constraints; meanwhile, we identify a subtle flaw in their privacy definition as well as a soundness issue in one of their proposed schemes due to the lack of validation of the special output value $\beta$. Next, we show how to reduce both the storage size of the constraint representation and the server's computational overhead from $O(N)$ to $O(\log N)$, where $N$ is the number of authorized function sets. In addition, we show how to achieve fine-grained private access control, that is, a wildcard-style constraint for the choice of the special output $\beta$. Our benchmarks show that the amortized running time of our logarithmic storage scheme is $2\times$-$3\times$ faster than the state of the art when $N=2^{15}$. Furthermore, we provide the first impossibility and feasibility results for DPF with constraints where the evaluators do not need to communicate with each other.
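
To fix the correctness semantics described above, here is a deliberately non-succinct toy in which the two function-shares are simply additive shares of the point function's truth table; real DPF keys are succinct, and the constrained variant introduced in this paper additionally restricts which $(\alpha, \beta)$ a dealer may share. All parameters are illustrative.

```python
# Toy 2-party "DPF" by additively sharing the truth table of f_{alpha,beta}
# over Z_Q: correct merging semantics, but keys are O(N), unlike real DPFs.
import secrets

Q = 2**32          # output group Z_Q
N = 16             # domain size

def gen(alpha, beta):
    table = [beta if x == alpha else 0 for x in range(N)]    # truth table of f_{alpha,beta}
    key0 = [secrets.randbelow(Q) for _ in range(N)]          # one share is uniform
    key1 = [(t - k) % Q for t, k in zip(table, key0)]        # the other completes it
    return key0, key1

def eval_share(key, x):
    return key[x]

k0, k1 = gen(alpha=5, beta=42)
for x in range(N):
    merged = (eval_share(k0, x) + eval_share(k1, x)) % Q     # merge the evaluations
    assert merged == (42 if x == 5 else 0)
```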

James Bell-Clark, Adrià Gascón, Baiyu Li, Mariana Raykova, Phillipp Schoppmann
ePrint Report
A common drawback of secure vector summation protocols in the single-server model is that they impose at least one synchronization point between all clients contributing to the aggregation. This results in clients waiting on each other to advance through the rounds of the protocol, leading to large latency even if the protocol is computationally efficient. In this paper we propose protocols in the single-server model where clients contributing data to the aggregation send a single message to the server in an asynchronous fashion, i.e., without the need to synchronize their reporting time with any other clients. Our approach is based on a committee of parties, called decryptors, that aid in the computation. Decryptors run a setup phase before data collection starts, and a decryption phase once it ends. Unlike existing committee-based protocols such as Flamingo (S&P 2023), the cost for committee members can be made sub-linear in the number of clients and does not depend on the size of the input data. Our experimental evaluation shows that our protocol, even while enabling asynchronous client contributions, is competitive in both computation and communication with state-of-the-art protocols that do not have that feature.

Matteo Scarlata, Matilda Backendal, Miro Haller
ePrint Report
Nair and Song (USENIX 2023) introduce the concept of a Multi-Factor Key Derivation Function (MFKDF), along with constructions and a security analysis. MFKDF integrates dynamic authentication factors, such as HOTP and hardware tokens, into password-based key derivation. The aim is to improve the security of password-derived keys, which can then be used for encryption or as an alternative to multi-factor authentication. The authors claim an exponential security improvement compared to traditional password-based key derivation functions (PBKDF).

We show that the MFKDF constructions proposed by Nair and Song fall short of the stated security goals. Underspecified cryptographic primitives and the lack of integrity of the MFKDF state lead to several attacks, ranging from full key recovery when an HOTP factor is compromised, to bypassing factors entirely or severely reducing their entropy. We reflect on the different threat models of key-derivation and authentication, and conclude that MFKDF is always weaker than plain PBKDF and multi-factor authentication in each setting.

Gil Segev, Liat Shapira
ePrint Report
In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability.

Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce.

Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.

Brent Waters, David J. Wu
ePrint Report
We construct an adaptively-sound succinct non-interactive argument (SNARG) for NP in the CRS model from sub-exponentially-secure indistinguishability obfuscation ($i\mathcal{O}$) and sub-exponentially-secure one-way functions. Previously, Waters and Wu (STOC 2024), and subsequently, Waters and Zhandry (CRYPTO 2024) showed how to construct adaptively-sound SNARGs for NP by relying on sub-exponentially-secure indistinguishability obfuscation, one-way functions, and an additional algebraic assumption (i.e., discrete log, factoring, or learning with errors). In this work, we show that no additional algebraic assumption is needed and vanilla (sub-exponentially-secure) one-way functions already suffice in combination with $i\mathcal{O}$.

We first give a direct construction of an adaptively-sound SNARG for NP assuming (sub-exponentially-secure) $i\mathcal{O}$ and an injective one-way function. Then, we show that it suffices to have an injective one-way function that has an inefficient sampler (i.e., sampling a challenge for the one-way function requires super-polynomial time). Because we rely on the existence of injective one-way functions only in the security proof and not in the actual construction, having an inefficient sampling procedure does not impact correctness. We then show that injective one-way functions with an inefficient sampler can be built generically from any vanilla one-way function. Our approach may be independently useful in other settings to replace injective one-way functions with standard one-way functions in applications of $i\mathcal{O}$.

Aruna Jayasena, Richard Bachmann, Prabhat Mishra
ePrint Report
Software-based cryptographic implementations provide flexibility but face performance limitations. In contrast, hardware-based cryptographic accelerators utilize application-specific customization to provide real-time security solutions. Cryptographic instruction-set extensions (CISE) combine the advantages of both hardware- and software-based solutions to provide higher performance combined with the flexibility of atomic-level cryptographic operations. While CISE is widely used to develop security solutions, side-channel analysis of CISE-based devices is in its infancy. Specifically, it is important to evaluate whether the power usage and electromagnetic emissions of CISE-based devices have any correlation with their internal operations, which an adversary can exploit to deduce cryptographic secrets. In this paper, we propose a test vector leakage assessment framework to evaluate pre-silicon prototypes at the early stages of the design life-cycle. Specifically, we first identify functional units with the potential for leaking information through power side-channel signatures and then evaluate them on system prototypes by generating the necessary firmware to maximize the side-channel signature. Our experimental results on two RISC-V based cryptographic extensions, RISCV-CRYPTO and XCRYPTO, demonstrate that seven out of eight prototype AES- and SHA-related functional units are vulnerable to leaking cryptographic secrets through their power side-channel signature, even in full-system mode, with a statistical significance of $\alpha = 0.05$.
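
For readers unfamiliar with test vector leakage assessment, the core statistic is usually Welch's t-test between fixed-input and random-input trace sets, with $|t| > 4.5$ conventionally flagged as leakage; the sketch below runs this test on simulated traces and is only an illustration of the general methodology, not of the proposed framework or its firmware generation.

```python
# Non-specific (fixed-vs-random) TVLA on simulated power traces using Welch's
# t-test per sample point; |t| > 4.5 is the conventional leakage threshold.
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples, threshold = 2000, 100, 4.5

# simulated traces: Gaussian noise everywhere, plus a small data-dependent bump
# at sample 40 in the fixed-input set to emulate a leaking functional unit
fixed = rng.normal(0.0, 1.0, (n_traces, n_samples))
random_ = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed[:, 40] += 0.2

def welch_t(a, b):
    va = a.var(axis=0, ddof=1) / a.shape[0]
    vb = b.var(axis=0, ddof=1) / b.shape[0]
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

t = welch_t(fixed, random_)
leaky_points = np.flatnonzero(np.abs(t) > threshold)
print("leaking sample points:", leaky_points)   # expected to include sample 40
```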

Abtin Afshar, Jiaqi Cheng, Rishab Goyal
ePrint Report
Fully homomorphic signatures are a significant strengthening of digital signatures, enabling computations on \emph{secretly} signed data. Today, we have multiple approaches to design fully homomorphic signatures such as from lattices, or succinct functional commitments, or indistinguishability obfuscation, or mutable batch arguments. Unfortunately, all existing constructions for homomorphic signatures suffer from one or more limitations. We do not have homomorphic signatures with features such as multi-hop evaluation, context hiding, and fast amortized verification, while relying on standard falsifiable assumptions.

In this work, we design homomorphic signatures satisfying all above properties. We construct homomorphic signatures for polynomial-sized circuits from a variety of standard assumptions such as sub-exponential DDH, standard pairing-based assumptions, or learning with errors. We also discuss how our constructions can be easily extended to the multi-key setting.

Yuval Ishai, Mahimna Kelkar, Daniel Lee, Yiping Ma
ePrint Report
We revisit the problem of private information retrieval (PIR) in the shuffle model, where queries can be made anonymously by multiple clients. We present the first single-server PIR protocol in this model that has sublinear per-client communication and information-theoretic security. Moreover, following one-time preprocessing on the server side, our protocol only requires sublinear per-client computation. Concretely, for every $\gamma>0$, the protocol has $O(n^{\gamma})$ communication and computation costs per (stateless) client, with $1/\text{poly}(n)$ statistical security, assuming that a size-$n$ database is simultaneously accessed by $\text{poly}(n)$ clients. This should be contrasted with the recent breakthrough result of Lin, Mook, and Wichs (STOC 2023) on doubly efficient PIR in the standard model, which is (inherently) limited to computational security.

Itai Dinur
ePrint Report
We consider constructions that combine outputs of a single permutation $\pi:\{0,1\}^n \rightarrow \{0,1\}^n$ using a public function. These are popular constructions for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). One of the best-known constructions (denoted SXoP$[2,n]$) XORs the outputs of 2 domain-separated calls to $\pi$.
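
Concretely, SXoP$[2,n]$ is usually defined as $F(x) = \pi(0 \| x) \oplus \pi(1 \| x)$ for an $(n-1)$-bit input $x$. The toy below models $\pi$ as a uniformly random permutation on a small domain, matching the information-theoretic model above; the domain size is an illustrative assumption, and a practical instantiation would use a block cipher as $\pi$.

```python
# SXoP[2, n]: XOR of two domain-separated calls to a single n-bit permutation.
# Here n = 16 and pi is sampled as a uniformly random permutation, matching the
# information-theoretic model; a real instantiation would use a block cipher.
import random

n = 16
rng = random.Random(2024)
pi = list(range(2 ** n))
rng.shuffle(pi)                           # a uniformly random permutation of {0,1}^n

def sxop2(x):
    assert 0 <= x < 2 ** (n - 1)          # x is an (n-1)-bit input
    return pi[x] ^ pi[x | (1 << (n - 1))]   # pi(0||x) XOR pi(1||x)

print(hex(sxop2(0x1234)))
```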

Modeling $\pi$ as a uniformly chosen permutation, several previous works proved a tight information-theoretic indistinguishability bound for SXoP$[2,n]$ of about $q/2^{n}$, where $q$ is the number of queries. On the other hand, tight bounds are unknown for the generalized variant (denoted SXoP$[r,n]$) which XORs the outputs of $r>2$ domain-separated calls to a uniform permutation.

In this paper, we obtain two results. Our first result improves the known bounds for SXoP$[r,n]$ for all (constant) $r \geq 3$ (assuming $q \leq O(2^n/r)$ is not too large) in both the single-user and multi-user settings. In particular, for $r=3$, our bound is about $\sqrt{u}q_{\max}/2^{2.5n}$ (where $u$ is the number of users and $q_{\max}$ is the maximal number of queries per user), improving the best-known previous result by a factor of at least $2^n$.

For odd $r$, our bounds are tight for $q > 2^{n/2}$, as they match known attacks. For even $r$, we prove that our single-user bounds are tight by providing matching attacks.

Our second and main result is divided into two parts. First, we devise a family of constructions that output $n$ bits by efficiently combining outputs of 2 calls to a permutation on $\{0,1\}^n$, and achieve multi-user security of about $\sqrt{u} q_{\max}/2^{1.5n}$. Then, inspired by the CENC construction of Iwata [FSE'06], we further extend this family to output $2n$ bits by efficiently combining outputs of 3 calls to a permutation on $\{0,1\}^n$. The extended construction has similar multi-user security of $\sqrt{u} q_{\max}/2^{1.5n}$.

The new single-user ($u=1$) bounds of $q/2^{1.5n}$ for both families should be contrasted with the previously best-known bounds of $q/2^n$, obtained by the comparable constructions of SXoP$[2,n]$ and CENC.

All of our bounds are proved by Fourier analysis, extending the provable security toolkit in this domain in multiple ways.

Ritam Bhaumik, Bishwajit Chakraborty, Wonseok Choi, Avijit Dutta, Jérôme Govinden, Yaobin Shen
ePrint Report
Message Authentication Codes (MACs) are ubiquitous primitives deployed in multiple flavors through standards such as HMAC, CMAC, GMAC, LightMAC, and many others. Their versatility makes them an essential building block in applications necessitating message authentication and integrity checks, in authentication protocols, in authenticated encryption schemes, or as pseudorandom or key derivation functions. This usage in such a variety of settings makes them susceptible to a broad range of attack scenarios. The latest attack trends leverage a lack of commitment or context-discovery security in AEAD schemes, and these attacks are mainly due to weaknesses in the underlying MAC part. However, these new attack models have scarcely been analyzed for MACs themselves.

This paper provides a thorough treatment of the committing and context-discovery security of MACs. We show that the commitment and context-discovery security of MACs is of interest in its own right by highlighting real-world vulnerable scenarios. We formalize the required security notions for MACs and analyze the security of standardized MACs with respect to these notions. Additionally, as a constructive application, we analyze generic AEAD composition and provide simple and efficient ways to build committing and context-discovery secure AEADs.

Anjali C B
ePrint Report
The current cryptographic frameworks like RSA, ECC, and AES are potentially under quantum threat. Quantum cryptography and post-quantum cryptography are being extensively researched for securing future information. Quantum computers and quantum algorithms are still at an early developmental stage and thus lack scalability for practical application. As a result of these challenges, most researched PQC methods are lattice-based, code-based, isogeny-based, hash-based, and multivariate crypto schemes. In this paper, we explore other mathematical topics such as stereographic projection, Möbius transformation, change of basis, Apollonian circles, binary quadratic form equivalence, the Gauss composition law, and their conjunctions. These fulfill preliminary conditions such as bijectivity, primality, and NP-hardness, and offer the feasibility of one-way functions along with their interconnections. This allows the exploration of new realms of mathematics for the development of secure protocols for future communication.