IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
23 April 2022
Kaisei Kajita, Go Ohtake, Kazuto Ogawa, Koji Nuida, Tsuyoshi Takagi
We propose a short signature scheme under the ring-SIS assumption in the standard model.
Specifically, by revisiting an existing construction [Ducas and Micciancio, CRYPTO 2014], we demonstrate lattice-based signatures with improved reduction loss. As far as we know, there is no known way to use multiple tags in the signing simulation of the security proof for lattice tag-based signatures. We address the possibility of tag collisions in the lattice setting, which improves the reduction loss. Our scheme generates tags from messages, by constructing the scheme under a mild security condition: existential unforgeability against random message attack with auxiliary information. Our scheme can therefore reduce the signature size, since it does not need to send tags with the signatures. Our scheme has short signatures of size $O(1)$ and achieves a tighter reduction loss than that of Ducas et al.'s scheme. Our proposed scheme has two variants. The variant with one property has a tighter reduction and the same verification key size of $O(\log n)$ as Ducas et al.'s scheme, where $n$ is the security parameter. The variant with the other property achieves a much tighter reduction loss of $O(Q/n)$ and a verification key size of $O(n)$, where $Q$ is the number of signing queries.
Kazuhiko Minematsu
A property-preserving hash (PPH) function is a hash function that allows a property of the inputs to be evaluated from their hash values alone. PPH was recently introduced by Boyle et al. at ITCS 2019, who considered the robustness of PPH against an adversary with access to the internal randomness of the PPH, and who proposed two robust PPH constructions for a weak form of the Hamming distance predicate. The second construction has received attention for its short hash value, although it relies on an ad-hoc security assumption. The first construction, which is entirely hash-based and rests on the classical collision-resistance assumption, has been largely overlooked.
We study their first construction and discover its close connection to a seemingly different field: hash/MAC-based (adversarial) error detection using the theory of Combinatorial Group Testing (CGT). We show several consequences of this discovery. In particular, we show that some existing proposals in the field of CGT-based error detection can be converted into PPHs for the Hamming distance property, and that they immediately improve and generalize Boyle et al.'s hash-based PPH proposal. We also show that the idea of Boyle et al. is useful in the context of a variant of the CGT problem.
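To convey the flavour of the connection (this is a toy sketch, not Boyle et al.'s construction nor any specific CGT scheme from the literature; the bucketing of positions by residues and the choice of moduli are made up for the example), the following Python snippet hashes groups of input positions chosen in a group-testing style and uses the per-bucket digests for a one-sided Hamming-distance check:

```python
# Toy sketch, NOT Boyle et al.'s construction: per-bucket hashes chosen by a
# simple group-testing-style bucketing (index i mod m) give a one-sided check
# on the Hamming distance (over byte positions) of two equal-length inputs.
import hashlib

MODULI = (7, 11, 13)  # toy "test matrix": position i belongs to bucket i mod m

def pph_hash(x: bytes):
    """For each modulus m, return a list of m per-bucket digests of x."""
    all_digests = []
    for m in MODULI:
        buckets = [hashlib.sha256() for _ in range(m)]
        for i, byte in enumerate(x):
            buckets[i % m].update(i.to_bytes(4, "big") + bytes([byte]))
        all_digests.append([b.digest() for b in buckets])
    return all_digests

def definitely_more_than(hx, hy, d: int) -> bool:
    """True => the inputs differ in more than d byte positions (barring SHA-256
    collisions); False => 'could not rule out distance <= d'."""
    for bx, by in zip(hx, hy):
        if sum(a != b for a, b in zip(bx, by)) > d:
            return True   # more than d buckets changed, so more than d positions changed
    return False

x = bytearray(b"property-preserving hashing via combinatorial group testing")
y = bytearray(x)
for i in (3, 10, 20):
    y[i] ^= 1                                   # Hamming distance 3 (byte positions)
hx, hy = pph_hash(bytes(x)), pph_hash(bytes(y))
print(definitely_more_than(hx, hy, 2), definitely_more_than(hx, hy, 5))  # True False
```

Each differing position changes at most one bucket per modulus, so the check never falsely reports a distance above the threshold; robustness and the two-sided gap guarantee of a real PPH require the stronger CGT structure discussed in the paper.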
Pratyush Ranjan Tiwari, Matthew Green
In this work we study and formalize security notions for algorithm substitution attacks (ASAs) on cryptographic puzzles. Puzzles are difficult problems that require an investment of computation, memory, or some other related resource. They are heavily used as a building block for the consensus networks used by cryptocurrencies, where they include primitives such as proof-of-work, proof-of-space, and verifiable delay functions (VDFs). Due to economies of scale, these networks increasingly rely on a small number of companies to construct opaque hardware or software (e.g., GPU or FPGA images): this dependency raises concerns about cryptographic subversion. Unlike the algorithms considered by previous ASAs, cryptographic puzzles do not rely on secret keys and thus enable a very different set of attacks. We first explore the threat model for these systems and then propose concrete attacks that (1) selectively reduce a victim's solving capability (e.g., hashrate) and (2) exfiltrate puzzle solutions to an attacker. We then propose defenses, several of which can be applied to existing cryptocurrency hardware with minimal changes. Given that these attacks are relevant to all proof-of-work cryptocurrencies, which have a combined market capitalization of around $1 trillion USD (March 2022), we recommend that all vulnerable mining protocols consider making the suggested adaptations today.
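As a toy illustration of the first attack class only (this is not one of the paper's concrete attacks; the hash function, trigger and difficulty target are invented for the example), a subverted proof-of-work loop that silently discards part of its nonce space, reducing the victim's effective hashrate while otherwise behaving normally:

```python
# Toy illustration only, not one of the paper's concrete attacks: a subverted
# proof-of-work miner that silently skips part of its nonce space, reducing
# the victim's effective hashrate while otherwise behaving normally.
import hashlib

TARGET = 2 ** 248          # toy difficulty: a hash below this counts as a solution

def pow_hash(header: bytes, nonce: int) -> int:
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def honest_miner(header: bytes, max_nonce: int):
    return [n for n in range(max_nonce) if pow_hash(header, n) < TARGET]

def subverted_miner(header: bytes, max_nonce: int):
    found = []
    for n in range(max_nonce):
        if n % 4 == 0:                     # attacker-chosen trigger: drop 25% of the work
            continue
        if pow_hash(header, n) < TARGET:
            found.append(n)
    return found

header = b"toy block header"
print(len(honest_miner(header, 200_000)), "solutions for the honest miner")
print(len(subverted_miner(header, 200_000)), "solutions for the subverted miner (about 25% fewer on average)")
```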
Debrup Chakraborty, Samir Kundu
${\sf TrCBC}$ is a variant of CBC-MAC which appeared in {\em Information Processing Letters}, 112(7):302-307, 2012. The authors claimed ${\sf TrCBC}$ to be a secure message authentication code (MAC) with some interesting properties. If ${\sf TrCBC}$ is instantiated with a block cipher with block length $n$, then it requires $\lceil \lambda /n \rceil$ block cipher calls for authenticating a $\lambda$-bit message and requires only a single key, namely the block cipher key. We show that, with high probability, an adversary can forge ${\sf TrCBC}$ with just three queries. The attack we show can be applied to forge a large class of messages.
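For readers unfamiliar with the construction family, here is a minimal sketch of a truncated CBC-MAC in the spirit of ${\sf TrCBC}$; the exact ${\sf TrCBC}$ padding and truncation rules from the paper are not reproduced, and a SHA-256-based PRF stands in for the block cipher:

```python
# Minimal sketch of a truncated CBC-MAC in the spirit of TrCBC; the exact
# TrCBC padding and truncation rules are NOT reproduced here, and a
# SHA-256-based PRF stands in for the block cipher E_K.
import hashlib

BLOCK = 16  # block length n = 128 bits

def E(key: bytes, block: bytes) -> bytes:
    # Stand-in for one block cipher call E_K(block); not a real permutation.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def truncated_cbc_mac(key: bytes, msg: bytes, taglen: int = 8) -> bytes:
    # Simplified 10* padding to a multiple of the block length.
    padded = msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % BLOCK)
    state = bytes(BLOCK)                       # zero IV
    for i in range(0, len(padded), BLOCK):
        state = E(key, xor(state, padded[i:i + BLOCK]))
    return state[:taglen]                      # the truncation step TrCBC relies on

key = b"0123456789abcdef"
print(truncated_cbc_mac(key, b"attack at dawn").hex())
```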
Jesús-Javier Chi-Domínguez, Víctor Mateu, Lucas Pandolfo Perin
We analyze and implement the SIDH PoK-based construction of De Feo, Dobson, Galbraith, and Zobernig. We improve the SIDH-PoK built-in functions to allow an efficient constant-time implementation. We then combine it with the Fiat-Shamir transform to obtain an SIDH PoK-based signature scheme, which we call SIDH-sign for short. We suggest SIDH-sign-p377, SIDH-sign-p546, and SIDH-sign-p697 as instances targeting security comparable to NIST levels L1, L3, and L5. To the best of our knowledge, the three proposed instances provide the best performance among digital signature schemes based on isogenies.
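As background on the Fiat-Shamir step only (the SIDH proof of knowledge itself is far more involved; a toy Schnorr-style identification protocol over placeholder parameters stands in for it here, and the modulus, generator and message are made up for the example):

```python
# Background sketch of the Fiat-Shamir transform: a toy Schnorr-style
# identification protocol over a placeholder prime-modulus group stands in
# for the (far more involved) SIDH proof of knowledge.
import hashlib
import secrets

P = 2 ** 127 - 1      # placeholder modulus (a Mersenne prime); NOT a secure group choice
G = 3
Q = P - 1             # exponent range used by the toy protocol

def fs_challenge(commitment: int, msg: bytes) -> int:
    # Fiat-Shamir: the verifier's random challenge is replaced by a hash.
    h = hashlib.sha256(commitment.to_bytes(16, "big") + msg).digest()
    return int.from_bytes(h, "big") % Q

def keygen():
    sk = secrets.randbelow(Q)
    return sk, pow(G, sk, P)

def sign(sk: int, msg: bytes):
    r = secrets.randbelow(Q)
    commitment = pow(G, r, P)                 # sigma-protocol first message
    c = fs_challenge(commitment, msg)
    response = (r + c * sk) % Q
    return commitment, response

def verify(pk: int, msg: bytes, sig) -> bool:
    commitment, response = sig
    c = fs_challenge(commitment, msg)
    return pow(G, response, P) == (commitment * pow(pk, c, P)) % P

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig), verify(pk, b"goodbye", sig))   # True False
```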
22 April 2022
Austin, TX, USA, 25 September - 28 September 2022
Event date: 25 September to 28 September 2022
Submission deadline: 20 May 2022
Notification: 15 July 2022
Virtual event, Anywhere on Earth, 23 September 2022
Event date: 23 September 2022
Submission deadline: 10 June 2022
Notification: 15 July 2022
Institute for Advancing Intelligence, TCG-CREST
The Institute for Advancing Intelligence (IAI) of TCG-CREST is offering a Ph.D. program in broad areas of Computer Science and Mathematics, with a special focus on Cryptography and Security, Quantum Information and Quantum Cryptography, Mathematical Applications, and Artificial Intelligence and Machine Learning.
Students who have completed, or are in the final year of, an M.Tech / M.E. / M.Sc. / M.Stat. / M.Math. or equivalent degree in Computer Science, Information Technology, Electronics, Mathematics, Statistics, Data Science, Physics, or other areas of the quantitative sciences may apply. The degree will be awarded in collaboration with either Chennai Mathematical Institute (CMI), Ramakrishna Mission Vivekananda Educational and Research Institute (RKMVERI), or NIT Durgapur.
Candidates will be selected based on their performance in a written test followed by an interview. The written test will be held on 28 May 2022, and the session will start in August 2022. The online application portal is open until 15 May 2022. Interested candidates are requested to apply online by filling out the application form provided on the admission page (https://www.tcgcrest.org/iai-admission-2022/). All necessary details regarding the admissions process are available there.
Closing date for applications:
Contact: Dr. Avijit Dutta, +91 70035 59134, avijit.dutta@tcgcrest.org, https://www.tcgcrest.org/people/avijit-dutta/
More information: https://www.tcgcrest.org/iai-admission-2022/
Apple
Passionate about cryptography? Want to work on designing, reviewing and implementing cryptography to solve impactful security and privacy problems?
Follow the link or contact me directly!
Closing date for applications:
Contact: Yannick Sierra at apple.com
More information: https://jobs.apple.com/en-us/details/200312812/cryptographic-engineer
Catinca Mujdei, Arthur Beckers, Jose Bermundo, Angshuman Karmakar, Lennert Wouters, Ingrid Verbauwhede
Polynomial multiplication algorithms such as Toom-Cook and the Number Theoretic Transform are fundamental building blocks for lattice-based post-quantum cryptography. In this work, we present correlation power analysis-based side-channel analysis methodologies targeting every polynomial multiplication strategy for all lattice-based post-quantum key encapsulation mechanisms in the final round of the NIST post-quantum standardization procedure. We perform practical experiments on real side-channel measurements, demonstrating that our method allows us to extract the secret key from all lattice-based post-quantum key encapsulation mechanisms. Our analysis demonstrates that the polynomial multiplication strategy used can significantly impact the time complexity of the attack.
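To illustrate the general principle (a generic, textbook-style correlation power analysis on simulated Hamming-weight leakage of a single coefficient product, not the paper's attack on any specific scheme or multiplication strategy; the modulus, secret, noise level and trace count are made up for the example):

```python
# Generic correlation power analysis (CPA) on simulated traces: recover one
# secret coefficient s from Hamming-weight leakage of the product (a * s) mod q.
# Illustrates the principle only, not the paper's attacks.
import numpy as np

rng = np.random.default_rng(1)
q, s_true, n_traces = 3329, 1234, 2000                   # Kyber-like modulus, toy secret
HW = np.array([bin(v).count("1") for v in range(4096)])  # Hamming weights of values < q

# Simulated measurements: public operands a_i, leakage = HW((a_i * s) mod q) + noise.
a = rng.integers(0, q, n_traces)
traces = HW[(a * s_true) % q] + rng.normal(0.0, 2.0, n_traces)

# CPA: correlate the measured leakage against the model under every guess of s.
best_guess, best_corr = None, -1.0
for guess in range(q):
    model = HW[(a * guess) % q]
    if model.std() == 0:                                 # degenerate hypothesis (guess 0)
        continue
    corr = abs(np.corrcoef(model, traces)[0, 1])
    if corr > best_corr:
        best_guess, best_corr = guess, corr

print(best_guess == s_true, round(best_corr, 3))         # correct guess ranks first
```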
Daniel J. Bernstein
This paper reviews, from bottom to top, a polynomial-time algorithm to correct t errors in classical binary Goppa codes defined by squarefree degree-t polynomials. The proof is factored through a proof of a simple Reed--Solomon decoder, and the algorithm is simpler than Patterson's algorithm. All algorithm layers are expressed as Sage scripts. The paper also covers the use of decoding inside the Classic McEliece cryptosystem, including reliable recognition of valid inputs.
Katharina Boudgoust, Corentin Jeudy, Adeline Roux-Langlois, Weiqiang Wen
The Module Learning With Errors problem (M-LWE) is a core computational assumption of lattice-based cryptography which offers an interesting trade-off between guaranteed security and concrete efficiency. The problem is parameterized by a secret distribution as well as an error distribution. There is a gap between the choices of those distributions for theoretical hardness results (standard formulation of M-LWE, i.e., uniform secret modulo $q$ and Gaussian error) and practical schemes (small bounded secret and error). In this work, we make progress towards narrowing this gap. More precisely, we prove that M-LWE with $\eta$-bounded secret for any $2 \leq \eta \ll q$ and Gaussian error, in both its search and decision variants, is at least as hard as the standard formulation of M-LWE, provided that the module rank $d$ is at least logarithmic in the ring degree $n$. We also prove that the search version of M-LWE with large uniform secret and uniform $\eta$-bounded error is at least as hard as the standard M-LWE problem, if the number of samples $m$ is close to the module rank $d$ and with further restrictions on $\eta$. The latter result can be extended to provide the hardness of M-LWE with uniform $\eta$-bounded secret and error under specific parameter conditions.
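For reference, an informal reminder of the distribution involved (the notation here is generic and may differ from the paper): writing $R_q = \mathbb{Z}_q[X]/(X^n+1)$ for a ring of degree $n$ (e.g., a power-of-two cyclotomic) and $d$ for the module rank, an M-LWE sample is a pair $(\mathbf{a}, b = \langle \mathbf{a}, \mathbf{s} \rangle + e) \in R_q^d \times R_q$ with $\mathbf{a}$ uniform, where the secret $\mathbf{s} \in R_q^d$ and the error $e \in R_q$ follow the secret and error distributions. The standard formulation takes $\mathbf{s}$ uniform modulo $q$ and $e$ Gaussian, whereas the variants above restrict the coefficients of $\mathbf{s}$ (respectively $e$) to be bounded by $\eta$.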
Aron Gohr, Friederike Laus, Werner Schindler
In this paper we present our solution to the CHES Challenge 2020, whose task was to break masked hardware and software implementations of the lightweight cipher Clyde by means of side-channel analysis. We target the secret cipher state after processing of the first S-box layer. Using the provided trace data, we obtain a strongly biased posterior distribution for the secret-shared cipher state at the targeted point; this enables us to see exploitable biases even before the secret-sharing-based masking. These biases on the unshared state can be evaluated one $S$-box at a time and combined across traces, which enables us to recover likely key hypotheses $S$-box by $S$-box.
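Schematically (with a made-up 4-bit S-box and artificial classifier outputs, not the authors' network, data or the Clyde S-box), the combination step amounts to accumulating per-trace log-likelihoods for each key hypothesis:

```python
# Schematic only: combining per-trace posterior probabilities over one S-box's
# output into a ranking of key hypotheses by accumulating log-likelihoods.
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(16)                 # toy 4-bit S-box
key_nibble, n_traces = 11, 300

plaintexts = rng.integers(0, 16, n_traces)
true_outputs = SBOX[plaintexts ^ key_nibble]

# Pretend a classifier produced a mildly biased posterior over the 16 possible
# S-box outputs for each trace (0.2 on the true value, the rest spread evenly).
posteriors = np.full((n_traces, 16), 0.8 / 15)
posteriors[np.arange(n_traces), true_outputs] = 0.2

scores = np.zeros(16)
for k in range(16):                        # key hypothesis for this S-box
    predicted = SBOX[plaintexts ^ k]       # output the hypothesis implies per trace
    scores[k] = np.log(posteriors[np.arange(n_traces), predicted]).sum()

print(int(np.argmax(scores)) == key_nibble)   # the correct key nibble ranks first
```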
In order to see the shared cipher state, we employ a deep neural network similar to the one used by Gohr, Jacob and Schindler to solve the CHES 2018 AES challenge. We modify their architecture to predict the exact bit sequence of the secret-shared cipher state. We find that training on this task converges unsatisfactorily with the standard encoding of the shared cipher state, and we therefore introduce a different encoding of the prediction target, which we call the scattershot encoding. To further investigate how exactly the scattershot encoding helps to solve the task at hand, we construct a simple synthetic task in which convergence problems very similar to those observed in our side-channel task appear with the naive target encoding but disappear with the scattershot encoding.
We complete our analysis with results obtained using a classical method (as opposed to an AI-based one), namely the stochastic approach, which we first generalize to the setting of shared keys for this purpose. We show that the neural network draws on a much broader set of features, which may partially explain why the neural-network-based approach massively outperforms the stochastic approach. On the other hand, the stochastic approach provides insights into properties of the implementation, in particular the observation that the $S$-boxes differ considerably in how easy or hard they are to predict.
Pourandokht Behrouz, Panagiotis Grontas, Vangelis Konstantakatos, Aris Pagourtzis, Marianna Spyrakou
We introduce Designated-Verifier Linkable Ring Signatures (DVLRS), a novel cryptographic primitive which combines designated-verifier and linkable ring signatures. Our goal is to guarantee signer ambiguity and give the designated verifier the capability to add ‘noise’ using simulated signatures that are publicly verifiable. This increases the privacy of the participants, as it does not allow an adversary to bypass the anonymity provided by ring signatures by using the content of a message to identify the signer. We model unforgeability, anonymity, linkability and non-transferability for DVLRS and provide a secure construction in the Random Oracle model. Finally, we explore some first applications of our primitive, which revolve around the use case of an anonymous assessment system that also protects the subject of the evaluation, even if the private key is compromised.
Daniel Fallnich, Shutao Zhang, Tobias Gemmeke
Post-quantum cryptography addresses the increasing threat that quantum computing poses to modern communication systems. Among the available "quantum-resistant" systems, the Niederreiter cryptosystem is positioned as a conservative choice with strong security guarantees. As a code-based cryptosystem, the Niederreiter system enables high-performance operations and is thus ideally suited for applications such as the acceleration of server workloads. However, until now, no ASIC architecture has been available for low-latency computation of Niederreiter operations. The present work therefore targets the design, implementation and optimization of tailored architectures for low-latency Niederreiter decryption. Two architectures utilizing different decoding algorithms are proposed and implemented using a 22nm FDSOI CMOS technology node. One of these optimized architectures improves the decryption latency by 27% compared to a state-of-the-art reference while requiring only 25% of the area.
Leizhang Wang, Wenwen Xia, Geng Wang, Baocang Wang, Dawu Gu
The General Sieve Kernel (G6K) implements a variety of lattice reduction algorithms based on sieving. One representative of these lattice reduction algorithms is the Pump and jump-BKZ (pnj-BKZ) algorithm, which is currently considered the fastest lattice reduction algorithm. pnj-BKZ is a BKZ-type lattice reduction algorithm that includes a jump strategy and uses Pump as its SVP oracle. Pump, also proposed in G6K, is an SVP solving algorithm that combines progressive sieving with the dimensions-for-free technique. However, unlike classical BKZ, there is no simulator for predicting the behavior of pnj-BKZ when the jump is greater than 1, which would be helpful for finding better lattice reduction strategies. There are two main differences between pnj-BKZ and the classical BKZ algorithm. First, after pnj-BKZ runs the SVP oracle on a projected sublattice, it does not call the SVP oracle on the next adjacent projected sublattice; instead, it jumps ahead by J indices to the corresponding projected sublattice and solves SVP there. With this jump technique, the number of SVP oracle calls per round of pnj-BKZ is reduced to about 1/J of the original. Second, pnj-BKZ uses Pump as the SVP oracle on the projected sublattices. Based on the BKZ 2.0 simulator, we propose a pnj-BKZ simulator that uses properties of HKZ-reduced bases. Experiments show that our simulator accurately predicts the behavior of pnj-BKZ with jump greater than 1. We further use this simulator to derive an optimized strategy for choosing the jump value, which improves the reduction efficiency of pnj-BKZ. Our optimized pnj-BKZ is 2.9 and 2.6 times faster in solving the TU LWE challenge (n=75, alpha=0.005) and the TU LWE challenge (n=60, alpha=0.010) than G6K's default LWE solving strategy.
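For background on the kind of quantity such a simulator predicts (the sketch below is the standard Geometric Series Assumption estimate based on the root Hermite factor, not the pnj-BKZ simulator proposed here; the dimension, block size and lattice volume are made-up example parameters):

```python
# Background sketch (standard GSA / root-Hermite-factor estimate, NOT the
# pnj-BKZ simulator proposed above): predict the Gram-Schmidt log-norm profile
# of a BKZ-beta reduced basis of an n-dimensional lattice of given log-volume.
import math

def root_hermite_factor(beta: int) -> float:
    # Common asymptotic estimate for delta(beta); reasonable for beta of roughly 50 or more.
    return ((beta / (2 * math.pi * math.e)) * (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def gsa_log_profile(n: int, log_vol: float, beta: int):
    # Geometric Series Assumption: log||b*_i|| decreases linearly with slope
    # -2*log(delta), normalised so that the log-norms sum to log_vol.
    slope = -2.0 * math.log(root_hermite_factor(beta))
    first = log_vol / n - slope * (n - 1) / 2.0
    return [first + slope * i for i in range(n)]

# Example: a 180-dimensional q-ary-style lattice with log-volume 90*log(3329).
profile = gsa_log_profile(n=180, log_vol=90 * math.log(3329), beta=60)
print(round(profile[0], 3), round(sum(profile) - 90 * math.log(3329), 9))  # first log-norm, ~0
```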
Arnaud de Grandmaison, Karine Heydemann, Quentin L. Meunier
Side-channel attacks are powerful attacks for retrieving secret data by exploiting physical measurements such as power consumption or electromagnetic emissions. Masking is a popular countermeasure, as it can be proven secure against a given attacker model. In practice, software masked implementations suffer from a security reduction due to a mismatch between the leakage sources considered in the security proof and the real ones, which depend on the micro-architecture.
We present a model of a system comprising an Arm Cortex-M3, obtained from its RTL description and test vectors, as well as a model of the memory of an STM32F1 board built exclusively from test vectors. Based on these models, we propose Armistice, a framework for formally verifying the absence of leakage in first-order masked implementations while taking into account the modelled micro-architectural sources of leakage. We show that Armistice makes it possible to pinpoint vulnerable instructions in real-world masked implementations and helps design masked software implementations that are practically secure.
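To make the proof/practice gap concrete, here is a toy model (not Armistice, and not tied to the Cortex-M3 or STM32F1 models above; the byte width and sample count are arbitrary) of the kind of micro-architectural recombination such a tool must capture:

```python
# Toy model of the micro-architectural effect a masking verifier must capture:
# each share of a first-order Boolean-masked secret leaks nothing on its own,
# but a register transition between the two shares leaks their XOR, i.e. the secret.
import secrets

def hw(x: int) -> int:
    return bin(x).count("1")

def average_leakage(secret: int, n: int = 20000):
    value_leak = transition_leak = 0
    for _ in range(n):
        mask = secrets.randbelow(256)            # fresh random mask per execution
        share0, share1 = secret ^ mask, mask     # Boolean sharing of the secret byte
        value_leak += hw(share0)                 # value leakage of one share
        transition_leak += hw(share0 ^ share1)   # HD leakage if share1 overwrites share0
    return value_leak / n, transition_leak / n

for secret in (0x00, 0xFF):
    v, t = average_leakage(secret)
    print(f"secret={secret:#04x}: value leak {v:.2f}, transition leak {t:.2f}")
# The value-leakage averages are ~4.0 for both secrets, while the
# transition-leakage averages reveal HW(secret) and thus distinguish them.
```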