International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

18 March 2022

Youssef El Housni, Aurore Guillevic, Thomas Piellard
ePrint Report
An important cryptographic operation on elliptic curves is hashing to a point on the curve. When the curve is not of prime order, the point is multiplied by the cofactor so that the result has prime order. This is important to avoid small-subgroup attacks, for example. A second important operation, in the composite-order case, is testing whether a point belongs to the subgroup of prime order. A pairing is a bilinear map e : G1 × G2 → GT, where G1 and G2 are distinct subgroups of prime order r of an elliptic curve, and GT is a multiplicative subgroup of the same prime order r of a finite field extension. Pairing-friendly curves are rarely of prime order. We investigate cofactor clearing and subgroup membership testing on these composite-order curves. First, we generalize a result on faster cofactor clearing for BLS curves to other pairing-friendly families of a polynomial form from the taxonomy of Freeman, Scott and Teske. Second, we investigate subgroup membership testing for G1 and G2. We fix a proof argument for the G2 case that appeared in a preprint by Scott in late 2021 and has recently been implemented in different cryptographic libraries. We then generalize the result to both G1 and G2 and apply it to different pairing-friendly families of curves. This gives a simple and shared framework to prove membership tests for both cryptographic subgroups.
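The two operations discussed above, cofactor clearing and subgroup membership testing, can be illustrated in any finite abelian group. A minimal sketch, using the multiplicative group modulo a small prime instead of an elliptic curve (all parameters are chosen for illustration only, not drawn from the paper):

```python
# Toy illustration of cofactor clearing and subgroup membership testing.
# The multiplicative group mod p = 23 has order n = 22 = h * r with
# cofactor h = 2 and prime subgroup order r = 11 (illustrative values).
p = 23          # prime modulus; (Z/p)* has composite order n = 22
r = 11          # prime order of the target subgroup
h = 2           # cofactor: n = h * r

def clear_cofactor(x: int) -> int:
    # Map an arbitrary group element into the order-r subgroup.
    return pow(x, h, p)

def is_in_subgroup(x: int) -> bool:
    # Naive membership test: x has order dividing r iff x^r = 1.
    return pow(x, r, p) == 1

g = 5                          # arbitrary element of (Z/p)*
assert not is_in_subgroup(g)   # g itself lies outside the subgroup
s = clear_cofactor(g)          # cofactor clearing lands in the subgroup
assert is_in_subgroup(s)
```

On curves, the papers above replace the naive exponentiation-by-r test with cheaper endomorphism-based checks; the toy test here only shows what is being verified.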
Andreas Hülsing, Matthias Meijers, Pierre-Yves Strub
ePrint Report
In this work, we consider the formal verification of the public-key encryption scheme of Saber, one of the selected few post-quantum cipher suites currently considered for potential standardization. We formally verify this public-key encryption scheme's IND-CPA security and $\delta$-correctness properties, i.e., the properties required to transform the public-key encryption scheme into an IND-CCA2 secure and $\delta$-correct key encapsulation mechanism, in EasyCrypt. To this end, we initially devise hand-written proofs for these properties that are significantly more detailed and meticulous than the presently existing proofs. Subsequently, these hand-written proofs serve as a guideline for the formal verification. The results of this endeavor comprise hand-written and computer-verified proofs which demonstrate that Saber's public-key encryption scheme indeed satisfies the desired security and correctness properties.
Bruno Mazorra, Victor Adan, Vanesa Daza
ePrint Report
Uniswap, like other DEXs, has gained much attention this year because it is a non-custodial and publicly verifiable exchange that allows users to trade digital assets without trusted third parties. However, its simplicity and lack of regulation also make it easy to execute initial coin offering scams by listing non-valuable tokens. This method of performing scams is known as a rug pull, a phenomenon that already existed in traditional finance but has become more relevant in DeFi. Various projects such as [34,37] have contributed to detecting rug pulls in EVM-compatible chains. However, the first longitudinal and academic step to detecting and characterizing scam tokens on Uniswap was made in [44]. The authors collected all the transactions related to the Uniswap V2 exchange and proposed a machine learning algorithm to label tokens as scams. However, the algorithm is only valuable for detecting scams accurately after they have been executed. This paper increases their data set by 20K tokens and proposes a new methodology to label tokens as scams. After manually analyzing the data, we devised a theoretical classification of different malicious maneuvers in the Uniswap protocol. We propose various machine-learning-based algorithms with new relevant features related to token propagation and smart contract heuristics to detect potential rug pulls before they occur. In general, the models proposed achieved similar results. The best model obtained an accuracy of 0.9936, recall of 0.9540, and precision of 0.9838 in distinguishing non-malicious tokens from scams prior to the malicious maneuver.
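The reported figures combine three standard classification metrics. As a quick reminder of how they are computed from confusion-matrix counts (the tp/fp/fn/tn values below are made up for illustration, not taken from the paper):

```python
# Standard classification metrics from confusion-matrix counts.
# Counts here are illustrative only, not from the paper's experiments.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction correct overall
    recall = tp / (tp + fn)                      # fraction of scams caught
    precision = tp / (tp + fp)                   # fraction of flags that are scams
    return accuracy, recall, precision

acc, rec, prec = metrics(tp=954, fp=16, fn=46, tn=8984)
print(round(acc, 4), round(rec, 4), round(prec, 4))  # → 0.9938 0.954 0.9835
```

High accuracy alone is uninformative on imbalanced data such as scam detection, which is why the paper reports recall and precision alongside it.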

15 March 2022

Beijing Institute of Technology
Job Posting
The group has a number of post-doc and tenure-track professor openings in the prestigious Advanced Research Institute of Multidisciplinary Sciences at Beijing Institute of Technology, China. We welcome candidates in security, cryptography, blockchains, and systems. The group regularly publishes papers in renowned venues such as CCS, S&P, DSN, OPODIS, ESORICS, FC, FSE, RSA, and SRDS and has designed and implemented many blockchain systems widely used in industry and academia.

Postdoc: Competitive salary. Housing/renting covered. The postdoc position is for two years and has a flexible starting time. After two years, the candidate may be offered a tenure-track position at Beijing Institute of Technology.

Tenure-track professors: housing covered; highly competitive salary; ability to advise PhD students and postdocs; startup package included.

Closing date for applications:

Contact: Please apply with a CV. Person in contact: Prof. Haibin Zhang: haibin at bit dot edu dot cn

More information: https://bchainzhang.github.io/hbzhang/

University of Birmingham, UK
Job Posting

This is a time of significant opportunity for Computer Science at Birmingham, with a growing number of outstanding students, world-leading research, the establishment of new institutes, and growing transnational education and industrial engagement. We are investing in the growth of the senior leadership of the school in a number of key research and education areas, including but not limited to all areas of Cyber Security.

The Centre for Cyber Security and Privacy has 14 permanent academics as well as 21 postdocs/PhD students. Our expertise is built on a historic strength in the analysis of security systems using formal methods, and we have broadened our scope to cover all aspects of cyber security. We have built an international reputation for our expertise in areas such as applied cryptography, automotive security and secure infrastructure, hardware security and the security of IoT devices (https://www.bham.ac.uk/research/centre-for-cyber-security-and-privacy/index.aspx).

We have three distinct academic pathways: Research & Education; Education; and Enterprise, Engagement and Impact. We have opportunities in all of these pathways.

Closing date for applications:

Contact:

For informal information about Cyber Security at Birmingham, please contact Prof David Oswald, d.f.oswald@bham.ac.uk

For a confidential and informal discussion about details of the post, please contact Dr Mark Lee, m.g.lee@bham.ac.uk

More information: https://bham.taleo.net/careersection/external/jobdetail.ftl?job=220000G4&tz=GMT%2B00%3A00&tzname=Europe%2FLondon


14 March 2022

Antoine Leudière, Pierre-Jean Spaenlehauer
ePrint Report
We explore algorithmic aspects of a free and transitive commutative group action coming from the class field theory of imaginary hyperelliptic function fields. Namely, the Jacobian of an imaginary hyperelliptic curve defined over $\mathbb{F}_q$ acts on a subset of isomorphism classes of Drinfeld modules. We describe an algorithm to compute the group action efficiently. This is a function field analog of the Couveignes-Rostovtsev-Stolbunov group action. Our proof-of-concept C++/NTL implementation only requires a fraction of a second on a standard computer. Also, we state a conjecture --- supported by experiments --- which implies that the current fastest algorithm to solve its inverse problem runs in exponential time. This action is therefore a promising candidate for the construction of \textit{Hard Homogeneous Spaces}, which are the building blocks of several post-quantum cryptographic protocols. This demonstrates the relevance of using imaginary hyperelliptic curves and Drinfeld modules as an alternative to the standard setting of imaginary quadratic number fields and elliptic curves for isogeny-based cryptographic applications. Moreover, our function field setting enables the use of Kedlaya's algorithm and its variants for computing the order of the group in polynomial time when $q$ is fixed. No such polynomial-time algorithm for imaginary quadratic number fields is known. For $q=2$ and parameters similar to CSIDH-512, we compute this order more than 8500 times faster than the record computation for CSIDH-512 by Beullens, Kleinjung and Vercauteren.
Yu Dai, Kaizhan Lin, Zijian Zhou, Chang-An Zhao
ePrint Report
Pairing-based cryptographic protocols are typically vulnerable to small-subgroup attacks in the absence of protective measures. To thwart them, one effective measure is to perform subgroup membership testing for the three $r$-order subgroups $\G_1$, $\G_2$ and $\G_T$, which is generally considered expensive. Inspired by the method given by Scott, we revisit this issue and generalize the testing method in this paper. Our method can be applied to a large class of curves, including curves with and without a twist. The resulting implementation shows that for many popular pairing-friendly curves, the proposed technique significantly improves the performance of membership testing for the above three subgroups as compared with the fastest previously known methods. More precisely, for $\G_2$ testing on curves admitting a twist, the new technique is about 1.9, 5.1, and 3.6 times faster than the previous one on \textit{BN-446}, \textit{KSS16-P310} and \textit{KSS18-P348}, respectively. For $\G_2$ testing on curves without a twist, no efficient testing method for $\G_2$ existed in the literature until now. In this situation, the proposed method is about $17.3$ and $20$ times faster than the naive one on \textit{BW13-P310} and \textit{BW9-P286}, respectively.
Taechan Kim, Hyesun Kwak, Dongwon Lee, Jinyeong Seo, Yongsoo Song
ePrint Report
Homomorphic Encryption (HE) is a cryptosystem that allows us to perform an arbitrary computation on encrypted data. The standard HE, however, has a disadvantage in that authority is concentrated in the secret key owner, as computation can be performed only on ciphertexts under the same key. To overcome this problem, research is underway on Multi-Key Homomorphic Encryption (MKHE), which enables operations between encrypted data possibly under different keys. Despite its strength in covering the privacy of multiple parties, the existing MKHE schemes suffer from poor performance: the multiplication cost grows at least quadratically with the number of parties involved.

In this paper, we propose a new notion of gadget decomposition, which enables arithmetic operations to be performed on the decomposed vectors with guarantees on functionality and noise bounds. We redesign the multi-key multiplication algorithm of Chen et al. (ACM CCS 2019) using the homomorphic property of gadget decomposition and thereby reduce the complexity significantly from quadratic to linear in the number of parties involved. Finally, we implement our MKHE schemes and provide benchmarks which outperform the previous results.
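A gadget decomposition pairs a gadget vector $g = (1, B, B^2, \ldots)$ with a digit decomposition $h$ such that $\langle g, h(x)\rangle = x$ while every digit of $h(x)$ stays small. A minimal base-$B$ sketch of this invariant (parameters illustrative; real HE schemes decompose ring elements, not plain integers):

```python
# Base-B gadget decomposition: x = sum(d_i * B^i) with small digits d_i.
B = 4        # decomposition base (illustrative)
ell = 5      # number of digits; enough for any x < B**ell

def gadget():
    # Gadget vector g = (1, B, B^2, ..., B^(ell-1)).
    return [B**i for i in range(ell)]

def decompose(x):
    # Digit decomposition h(x): small digits that recombine to x under g.
    digits = []
    for _ in range(ell):
        digits.append(x % B)
        x //= B
    return digits

x = 789
d = decompose(x)
assert all(0 <= di < B for di in d)                        # digits are small
assert sum(di * gi for di, gi in zip(d, gadget())) == x    # <g, h(x)> = x
```

Keeping the digits small is what controls noise growth in HE multiplications; the paper's contribution is making operations on these decomposed vectors themselves homomorphic.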
Andreas Hülsing, Mikhail Kudinov
ePrint Report
In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw in the tight security proof of the $SPHINCS^{+}$ construction. This work gives a new tight security proof for $SPHINCS^{+}$. The flaw can be traced back to the security proof for the Winternitz one-time signature scheme (WOTS) used within $SPHINCS^{+}$. In this work, we give a standalone description of the WOTS variant used in $SPHINCS^{+}$ that we call WOTS-TW. We provide a security proof for WOTS-TW and multi-instance WOTS-TW against non-adaptive chosen message attacks where the adversary only learns the public key after making its signature query. Afterwards, we show that this is sufficient to give a tight security proof for $SPHINCS^{+}$. We recover almost the same bound for the security of $SPHINCS^{+}$, with only a factor $w$ loss compared to the previously claimed bound, where $w$ is the Winternitz parameter that is commonly set to 16. On a more technical level, we introduce new lower bounds on the quantum query complexity for generic attacks against properties of cryptographic hash functions and analyse the constructions of tweakable hash functions used in $SPHINCS^{+}$ with regard to further security properties.
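The Winternitz idea at the core of WOTS (and the WOTS-TW variant above) signs a base-$w$ digit $d$ by releasing the $d$-th iterate of a hash chain; the verifier walks the remaining steps and compares against the chain end. A stripped-down sketch for a single chain, omitting the checksum, multiple chains, and the tweakable-hash machinery that the actual scheme requires (names and parameters are illustrative):

```python
import hashlib

w = 16  # Winternitz parameter: each chain signs one base-w digit

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    # Apply H iteratively `steps` times.
    for _ in range(steps):
        x = H(x)
    return x

# Key generation for a single chain (secret start fixed for illustration).
sk = b"\x00" * 32
pk = chain(sk, w - 1)          # public key: the end of the chain

def sign(digit: int) -> bytes:
    # Reveal the digit-th chain value; digit must lie in [0, w-1].
    return chain(sk, digit)

def verify(digit: int, sig: bytes) -> bool:
    # Walk the remaining steps and compare against the public chain end.
    return chain(sig, w - 1 - digit) == pk

sig = sign(7)
assert verify(7, sig)
assert not verify(8, sig)
```

Note that from a signature on digit $d$ anyone can compute $H(\cdot)$ forward to forge larger digits; the checksum chains omitted here are what prevent that in the full scheme.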
Wouter Castryck, Marc Houben, Frederik Vercauteren, Benjamin Wesolowski
ePrint Report
We show how the Weil pairing can be used to evaluate the assigned characters of an imaginary quadratic order $\mathcal{O}$ in an unknown ideal class $[\mathfrak{a}] \in \mathrm{Cl}(\mathcal{O})$ that connects two given $\mathcal{O}$-oriented elliptic curves $(E, \iota)$ and $(E', \iota') = [\mathfrak{a}](E, \iota)$. When specialized to ordinary elliptic curves over finite fields, our method is conceptually simpler and often faster than a recent approach due to Castryck, Sot\'akov\'a and Vercauteren, who rely on the Tate pairing instead. The main implication of our work is that it breaks the decisional Diffie--Hellman problem for practically all oriented elliptic curves that are acted upon by an even-order class group. It can also be used to better handle the worst cases in Wesolowski's recent reduction from the vectorization problem for oriented elliptic curves to the endomorphism ring problem, leading to a method that always works in sub-exponential time.
William Wang
ePrint Report
We describe a simple MPC protocol in the preprocessing model for computing multivariate quadratic maps. This yields Mesquite, a KKW-style signature scheme which, to our knowledge, produces the shortest signatures of any scheme based on the MQ assumption. For example, our compact parameter set targeting NIST security level I has an average signature size of 8.8KB and runtimes on par with Picnic3 L1.
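The MQ assumption referenced above concerns the hardness of inverting a system of random multivariate quadratic polynomials. A small sketch of what evaluating such a map over GF(2) looks like (the system is randomly generated and the sizes are toy values, purely illustrative; real parameters are far larger):

```python
import random

random.seed(1)
n, m = 8, 8   # variables and equations (toy sizes for illustration)

# One quadratic polynomial over GF(2): coefficient matrix Q (upper
# triangle used) plus a linear part L; p(x) = x^T Q x + L.x  (mod 2).
def random_poly():
    Q = [[random.randrange(2) for _ in range(n)] for _ in range(n)]
    L = [random.randrange(2) for _ in range(n)]
    return Q, L

system = [random_poly() for _ in range(m)]

def evaluate(system, x):
    out = []
    for Q, L in system:
        acc = 0
        for i in range(n):
            for j in range(i, n):
                acc ^= Q[i][j] & x[i] & x[j]   # quadratic terms mod 2
            acc ^= L[i] & x[i]                 # linear terms mod 2
        out.append(acc)
    return out

x = [1, 0, 1, 1, 0, 0, 1, 0]
y = evaluate(system, x)   # MQ problem: recover x given y and the system
assert len(y) == m and all(b in (0, 1) for b in y)
```

In a KKW-style scheme, the signer proves in zero knowledge (via MPC-in-the-head) that it knows an x mapping to the public y, without revealing x.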
Yuval Ishai, Alexis Korb, Paul Lou, Amit Sahai
ePrint Report
A wiretap coding scheme (Wyner, Bell Syst. Tech. J. 1975) enables Alice to reliably communicate a message m to an honest Bob by sending an encoding c over a noisy channel chB, while at the same time hiding m from Eve who receives c over another noisy channel chE.

Wiretap coding is clearly impossible when chB is a degraded version of chE, in the sense that the output of chB can be simulated using only the output of chE. A classic work of Csiszár and Körner (IEEE Trans. Inf. Theory, 1978) shows that the converse does not hold. This follows from their full characterization of the channel pairs (chB, chE) that enable information-theoretic wiretap coding. In this work, we show that in fact the converse does hold when considering computational security; that is, wiretap coding against a computationally bounded Eve is possible if and only if chB is not a degraded version of chE. Our construction assumes the existence of virtual black-box (VBB) obfuscation of specific classes of ``evasive'' functions that generalize fuzzy point functions, and can be heuristically instantiated using indistinguishability obfuscation. Finally, our solution has the appealing feature of being universal in the sense that Alice's algorithm depends only on chB and not on chE.
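Degradation is easy to see concretely for binary symmetric channels: composing a BSC with crossover probability q with an independent BSC with crossover e yields a BSC with crossover q + e - 2qe, so any noisier BSC can be simulated from a less noisy one. A sketch checking this composition empirically (parameters illustrative):

```python
import random

random.seed(0)

def bsc(bit: int, p: float) -> int:
    # Binary symmetric channel: flip the bit with probability p.
    return bit ^ (random.random() < p)

q, e = 0.1, 0.15
p_composed = q + e - 2 * q * e   # crossover of BSC(q) followed by BSC(e)

trials = 200_000
flips = sum(bsc(bsc(0, q), e) for _ in range(trials))
empirical = flips / trials
assert abs(empirical - p_composed) < 0.01   # ~0.22 up to sampling noise
```

In the wiretap setting, if Bob's channel chB can be obtained from Eve's channel chE by such post-composition, no coding scheme (even computational) can help; the paper's result is that this is the only obstruction in the computational setting.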
Lorenzo Grassi, Morten Øygarden, Markus Schofnegger, Roman Walch
ePrint Report
Ciminion is an MPC-friendly pseudo-random function (PRF) recently proposed at Eurocrypt’21. As in the case of other MPC-friendly constructions proposed in the literature (e.g., MiMC, HadesMiMC, Rescue), it aims to minimize the number of multiplications in large finite fields. While MiMC, HadesMiMC, and Rescue are block ciphers, Ciminion is a (modified) Farfalle-like cryptographic function. At the current state of the art, it achieves the best performance in MPC applications. However, Ciminion has a critical weakness. Its security highly relies on the independence of its subkeys, which is achieved by using an expensive key schedule. Many MPC use cases involving symmetric PRFs rely on secretly shared symmetric keys, and hence the expensive key schedule must also be computed in MPC. As a result, Ciminion’s performance is significantly reduced in these use cases. In this paper, we solve this problem. Following the approach introduced by Ciminion’s designers, we propose Megafono, a modified version of Farfalle designed for achieving a small multiplicative complexity without any key schedule. Following this strategy, we present the PRF Hydra, which utilizes both a Lai-Massey construction and a novel construction we name Amaryllises in its nonlinear layer. Amaryllises can be seen as a generalized variant of a Lai-Massey scheme, which allows us to further decrease the multiplicative complexity of Hydra. Based on an extensive security analysis, we implement Hydra in an MPC framework. The results show that it outperforms all MPC-friendly schemes currently published in the literature.
Nicoleta-Norica Băcuieți, Lejla Batina, Stjepan Picek
ePrint Report
At CRYPTO'19, A. Gohr proposed neural distinguishers for the lightweight block cipher Speck32/64, achieving better results than the state-of-the-art at that point. However, the motivation for using that particular architecture was not very clear, leading us to investigate whether a smaller and/or better performing neural distinguisher exists. This paper studies the depth-10 and depth-1 neural distinguishers proposed by Gohr with the aim of finding out whether smaller or better-performing distinguishers for Speck32/64 exist. We first evaluate whether we can find smaller neural networks that match the accuracy of the proposed distinguishers. We answer this question in the affirmative: the depth-1 distinguisher was successfully pruned, resulting in a network that remained within one percentage point of the unpruned network's performance. Having found a smaller network that achieves the same performance, we examine whether its performance can be improved as well. We also study whether processing the input before giving it to the pruned depth-1 network would improve its performance. To this end, convolutional autoencoders were found that managed to reconstruct the ciphertext pairs successfully, and their trained encoders were used as a preprocessor before training the pruned depth-1 network. We found that, even though the autoencoders achieve a perfect reconstruction, the pruned network no longer had the necessary complexity to extract useful information from the preprocessed input, motivating us to look at the feature importance to get more insights. To achieve this, we used LIME, with results showing that a stronger explainer is needed to assess it correctly.
Azade Rezaeezade, Guilherme Perin, Stjepan Picek
ePrint Report
Profiling side-channel analysis allows evaluators to estimate the worst-case security of a target. When security evaluations relax the assumptions about the adversary's knowledge, profiling models may easily be sub-optimal due to the inability to extract the most informative points of interest from the side-channel measurements. When used for profiling attacks, deep neural networks can learn strong models without feature selection, with the drawback of expensive hyperparameter tuning. Unfortunately, due to very large search spaces, one usually finds very different model behaviors, and a widespread problem is overfitting, with typically poor generalization capacity.

Usually, overfitting or poor generalization would be mitigated by adding more measurements to the profiling phase to reduce estimation errors. This paper provides a detailed analysis of different deep learning model behaviors and shows that adding more profiling traces as a single solution does not necessarily help improve generalization. In fact, we recognize the main problem to be the sub-optimal selection of hyperparameters, which is then difficult to resolve by simply adding more measurements. Instead, we propose to use small hyperparameter tweaks or regularization as techniques to resolve the problem.
Igor Semaev
ePrint Report
Every public-key encryption/decryption algorithm where the set of possible plain-texts is identical to the set of possible cipher-texts may be converted into a digital signature algorithm. The situation is quite different in lattice- and code-based public-key cryptography. There, the decryption algorithm on a random input produces a valid plain-text, that is, a signature, only with negligible probability. That explains why it is so difficult to construct a new secure and efficient lattice-based digital signature system. Though several solutions are known and are taking part in the NIST Post-Quantum Standardisation Process, there is still a need to construct digital signature algorithms based on new principles. In this work, a new and efficient digital signature algorithm is suggested. Its design is simple and transparent. Its security is based on the hardness of an approximate closest vector problem in the maximum norm for some q-ary lattices. The complexity parameters are comparable with those of the round 3 NIST digital signature candidates.
Koji Chida, Koki Hamada, Atsunori Ichikawa, Masanobu Kii, Junichi Tomida
ePrint Report
We propose secure two-party computation for a new functionality that we call Private Intersection-Weighted-Sum (PIW-Sum) and present an efficient semi-honestly secure protocol. In this computation, two parties own integer matrices $\mathbf{X}$ and $\mathbf{Y}$, respectively, where each row of the matrices is associated with an identifier. Let $I=(i_{1} ,..., i_{n})$ be the intersection of the identifier sets for the two parties. The goal is to compute the matrix $\widehat{\mathbf{Y}}^{\top}\widehat{\mathbf{X}}$ together with the cardinality $|I|$, where the $j$-th rows of $\widehat{\mathbf{X}}$ and $\widehat{\mathbf{Y}}$ are the rows of $\mathbf{X}$ and $\mathbf{Y}$ with identifier $i_{j}$, respectively. This functionality is a generalization of Private Intersection-Sum (PI-Sum) proposed by Ion et al.~(EuroS\&P'20) and has important real-world applications such as computing a cross tabulation after the equijoin of two tables owned by different parties.

Our protocol is built on a new variant of oblivious pseudorandom function (OPRF), and we construct the new variant of OPRF from the decisional Diffie-Hellman (DDH) assumption. We implement both our PIW-Sum protocol and the most efficient PI-Sum protocol by Ion et al.~and compare their performance in the same environment. This shows that both the communication cost and the computational cost of our protocol are only about 2 times greater than those of the PI-Sum protocol in the case where $\mathbf{X}$ and $\mathbf{Y}$ are column vectors, i.e., the number of columns of $\mathbf{X}$ and $\mathbf{Y}$ is one.
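In the clear, the PIW-Sum functionality described above amounts to an equijoin on identifiers followed by a matrix product. A plaintext reference computation of $\widehat{\mathbf{Y}}^{\top}\widehat{\mathbf{X}}$ and $|I|$, with no privacy and made-up data, just to pin down the target functionality:

```python
# Plaintext reference for the PIW-Sum functionality (no privacy):
# join the two parties' rows on shared identifiers, then compute Y-hat^T X-hat.
X = {"alice": [1, 2], "bob": [3, 4], "carol": [5, 6]}   # id -> row of X
Y = {"bob": [10], "carol": [20], "dave": [30]}          # id -> row of Y

ids = sorted(set(X) & set(Y))          # intersection I of identifier sets
cardinality = len(ids)                 # |I|

rows_X = [X[i] for i in ids]           # X-hat: rows of X with ids in I
rows_Y = [Y[i] for i in ids]           # Y-hat: rows of Y with ids in I

# Y-hat^T X-hat: a (cols of Y) x (cols of X) matrix.
result = [[sum(y_row[a] * x_row[b] for y_row, x_row in zip(rows_Y, rows_X))
           for b in range(len(rows_X[0]))]
          for a in range(len(rows_Y[0]))]

print(cardinality, result)   # → 2 [[130, 160]]
```

The protocol's job is to compute exactly this output while revealing nothing else about the non-intersecting rows to either party.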
Matthias J. Kannwischer, Peter Schwabe, Douglas Stebila, Thom Wiggers
ePrint Report
The NIST post-quantum cryptography (PQC) standardization project is probably the largest and most ambitious cryptography standardization effort to date, and as such it makes an excellent case study of cryptography standardization projects. It is expected that with the end of round 3 in early 2022, NIST will announce the first set of primitives to advance to standardization, so it seems like a good time to look back and see what lessons can be learned from this effort. In this paper, we take a look at one specific aspect of the NIST PQC project: software implementations. We observe that many implementations included as a mandatory part of the submission packages were of poor quality and ignored decades-old standard techniques from software engineering to guarantee a certain baseline quality level. As a consequence, it was not possible to readily use those implementations in experiments for post-quantum protocol migration and software optimization efforts without first spending a significant amount of time to clean up the submitted reference implementations.

We do not mean to criticize cryptographers who submitted proposals, including software implementations, to NIST PQC: after all, it cannot reasonably be expected from every cryptographer to also have expertise in software engineering. Instead, we suggest how standardization bodies like NIST can improve the software-submission process in future efforts to avoid such issues with submitted software. More specifically, we present PQClean, an extensive (continuous-integration) testing framework for PQC software, which now also contains "clean" implementations of the NIST round 3 candidate schemes. We argue that the availability of such a framework---either in an online continuous-integration setup, or just as an offline testing system---long before the submission deadline would have resulted in much better implementations included in NIST PQC submissions and overall would have saved the community and probably also NIST a lot of time and effort.
Brent Waters, David J. Wu
ePrint Report
A non-interactive batch argument for NP provides a way to amortize the cost of NP verification across multiple instances. It enables a prover to convince a verifier of multiple NP statements with communication much smaller than the total witness length and verification time much smaller than individually checking each instance.

In this work, we give the first construction of a non-interactive batch argument for NP from standard assumptions on groups with bilinear maps (specifically, from either the subgroup decision assumption in composite-order groups or from the $k$-Lin assumption in prime-order groups for any $k \ge 1$). Previously, batch arguments for NP were only known from LWE, or a combination of multiple assumptions, or from non-standard/non-falsifiable assumptions. Moreover, our work introduces a new direct approach for batch verification and avoids heavy tools like correlation-intractable hash functions or probabilistically-checkable proofs common to previous approaches.

As corollaries to our main construction, we also obtain the first publicly-verifiable non-interactive delegation scheme for RAM programs with a CRS of sublinear size (in the running time of the RAM program), as well as the first aggregate signature scheme (supporting bounded aggregation) from standard assumptions on bilinear maps.
Tuan-Hong Chua, Iftekhar Salam
ePrint Report
Cybersecurity has become one of the focuses of organisations. The number of cyberattacks keeps increasing as Internet usage continues to grow. An intrusion detection system (IDS) is an alarm system that helps to detect cyberattacks. As new types of cyberattacks continue to emerge, researchers focus on developing machine learning (ML)-based IDS to detect zero-day attacks. Researchers usually remove some or all attack samples from the training dataset and only include them in the testing dataset when evaluating the performance of an IDS on detecting zero-day attacks. Although this method may show the ability of an IDS to detect unknown attacks, it does not reflect the long-term performance of the IDS, as it only shows the changes in the type of attacks. In this paper, we focus on evaluating the long-term performance of ML-based IDS. To achieve this goal, we propose evaluating the ML-based IDS using a dataset that is created later than the training dataset. The proposed method can better assess the long-term performance of an ML-based IDS, as the testing dataset reflects both the changes in the type of attack and the changes in network infrastructure over time. We have implemented six of the most popular ML models used for IDS, including decision tree (DT), random forest (RF), support vector machine (SVM), naïve Bayes (NB), artificial neural network (ANN) and deep neural network (DNN). Our experiments using the CIC-IDS2017 and the CSE-CIC-IDS2018 datasets show that SVM and ANN are the most resistant to overfitting. Besides that, our experiment results also show that DT and RF suffer the most from overfitting, although they perform well on the training dataset. On the other hand, our experiments using the LUFlow dataset have shown that all models can perform well when the difference between the training and testing datasets is small.