International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed
via Twitter
via Weibo
via Facebook

18 January 2022

Nimrod Aviram, Benjamin Dowling, Ilan Komargodski, Kenneth G. Paterson, Eyal Ronen, Eylon Yogev
ePrint Report
The task of combining cryptographic keys, some of which may be maliciously formed, into one key, which is (pseudo)random is a central task in cryptographic systems. For example, it is a crucial component in the widely used TLS and Signal protocols. From an analytical standpoint, current security proofs model such key combiners as dual-PRFs -- a function which is a PRF when keyed by either of its two inputs -- guaranteeing pseudo-randomness if one of the keys is compromised or even maliciously chosen by an adversary.

However, in practice, implementations of key combiners significantly depart from the "dual-PRF" standard set by provable schemes. Specifically, existing implementations typically use heuristic approaches and rely on strong assumptions that are only known to be satisfied in ideal models (such as modelling the underlying hash functions as random oracles), or that are not justified at all by known security results. We describe several cases of deployed protocols where this is the case, focusing on the use of HKDF as a dual-PRF. Unfortunately, such strong assumptions on cryptographic hash functions tend not to withstand the test of time, often leading to deployed systems that eventually become completely insecure; experience also shows that upgrading already-deployed cryptography from deprecated hash functions to newer ones is a slow process that can take many years. Finally, we consider it sub-optimal that the hybrid key exchange protocols currently under development, which combine classical and post-quantum key exchange, risk embedding this improper use of key combiners even more deeply.

In this work, we narrow the gap between theory and practice for key combiners. In particular, we give a construction of a dual-PRF that can be used as a drop-in replacement for current heuristic key combiners in a range of protocols. Our construction follows a theoretical construction by Bellare and Lysyanskaya, and is based on concrete hardness assumptions, phrased in the spirit of one-wayness. Therefore, our construction provides security unless extremely strong attacks against the underlying cryptographic hash function are discovered. Moreover, since these assumptions are considered post-quantum secure, our constructions can safely be used in new hybrid protocols. From a practical perspective, our dual-PRF construction is highly efficient, adding only a negligible overhead of a few microseconds compared to currently used (heuristic) approaches. We believe that our approach exemplifies a perfect middle-ground for practically efficient constructions that are supported by realistic hardness assumptions.
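The gap described above can be illustrated with a short sketch (ours, not the paper's construction). The heuristic combiner keys HMAC through one input only, as in HKDF-Extract; a dual-PRF-style combiner must stay pseudorandom when either key is good. The XOR-of-both-directions combiner below is a classical toy example of the latter; the "L"/"R" labels are our own choice.

```python
import hashlib
import hmac

def combine_heuristic(k1: bytes, k2: bytes) -> bytes:
    """HKDF-Extract-style combiner as deployed in practice: k1 sits in
    HMAC's key ("salt") slot and k2 in the message ("IKM") slot, so
    pseudorandomness under a malicious k1 needs assumptions on the hash
    beyond the standard PRF security of HMAC."""
    return hmac.new(k1, k2, hashlib.sha256).digest()

def combine_dual(k1: bytes, k2: bytes) -> bytes:
    """Toy dual-PRF-style combiner (a classical XOR construction, NOT
    the paper's): HMAC is applied in both key positions and the results
    are XORed, so the output stays pseudorandom as long as either key
    is honestly random. The labels avoid cancellation when k1 == k2."""
    a = hmac.new(k1, b"L" + k2, hashlib.sha256).digest()
    b = hmac.new(k2, b"R" + k1, hashlib.sha256).digest()
    return bytes(x ^ y for x, y in zip(a, b))
```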
Françoise Levy-dit-Vehel, Maxime Roméas
ePrint Report
Proofs of Retrievability (PoR) protocols ensure that a client can fully retrieve a large outsourced file from an untrusted server. Good PoRs should have low communication complexity, small storage overhead and clear security guarantees with tight security bounds. The focus of this work is to design good PoR schemes with simple security proofs. To this end, we use the Constructive Cryptography (CC) setting of Maurer [13]. We propose a framework for the design of secure and efficient PoR schemes based on Locally Correctable Codes (LCC). We give a first instantiation of our framework using the high-rate lifted codes introduced by Guo et al. [5]. This yields an infinite family of good PoRs. We establish their security by solving a finite geometry problem, giving an explicit formula for the probability that an adversary fools the client. Using the local correctability properties of Tanner codes, we obtain another instantiation of our framework and derive an analogous formula for the success probability of the audit.
Kang Yang, Xiao Wang
ePrint Report
In this paper, we study zero-knowledge (ZK) proofs for circuit satisfiability that can be efficiently proven to $n$ verifiers at a time. The proofs are secure against the collusion of a prover and a subset of $t$ verifiers. We refer to such ZK proofs as multi-verifier zero-knowledge (MVZK) proofs, and focus on the case where a majority of verifiers are honest (i.e., $t < n/2$). We construct efficient MVZK protocols in which the prover sends one message to each verifier, while the verifiers only exchange one round of messages (or, to achieve information-theoretic security, a number of rounds logarithmic in the circuit size).

In addition to their low round complexity, our MVZK protocols in the computational setting are highly practical. These protocols are streamable and only require memory proportional to what is needed to evaluate the circuit in the clear. In terms of communication cost, when $t < n/2$, the prover sends $1/2 + o(1)$ field elements per multiplication gate to every verifier. When the threshold of corrupted verifiers is $t < n(1/2 - \epsilon)$ for any $0 < \epsilon < 1/2$, we can further reduce the communication to $O(1/n)$ field elements per multiplication gate per verifier.
Daniel Escudero
ePrint Report
This text serves as a general guide to secure multiparty computation based on secret sharing, focusing more on the practical aspects of the techniques and constructions than on their theoretical grounds.

This work in progress currently includes an introduction to several core concepts in secure multiparty computation, an overview of simulation-based security, and detailed constructions for MPC with an honest majority and a two-thirds honest majority, as well as for a dishonest majority in the preprocessing model.
Nicu Neculache, Vlad-Andrei Petcu, Emil Simion
ePrint Report
Statistical testing is used across many domains, providing a method for making quantitative decisions about a particular sample. It plays a major role in selecting and validating random and pseudorandom generators whose output may be used in cryptography, specifically for encryption, decryption and the generation of keys or sub-keys. In this paper we study one of the NIST SP 800-22 random number generation tests. We give an overview of statistical testing and its importance for cryptography, then focus on one of the tests, the Binary Matrix Rank Test. We provide a logical schema and a new implementation in Python 3. We then evaluate the test by running it on a collection of well-chosen test samples and analyzing the results. More precisely, we determine whether a binary input sequence can be classified as random depending on the density of its bits.
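As a rough illustration of the test the paper implements, the sketch below (ours, not the authors' code) cuts a bit sequence into 32x32 matrices, computes each matrix's rank over GF(2), and derives a p-value from a chi-square statistic using the asymptotic reference probabilities given in Section 2.5 of SP 800-22.

```python
import math

def gf2_rank(rows) -> int:
    """Rank over GF(2); each row is an int bitmask. Each nonzero pivot
    clears its lowest set bit from all remaining rows."""
    rows, rank = list(rows), 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # pivot column = lowest set bit
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def binary_matrix_rank_test(bits, m: int = 32) -> float:
    """Sketch of the NIST SP 800-22 Binary Matrix Rank Test: classify
    each m x m matrix by rank (full, full-1, lower) and compare counts
    against the reference probabilities with a chi-square statistic."""
    n = len(bits) // (m * m)
    counts = {"m": 0, "m-1": 0, "low": 0}
    for i in range(n):
        block = bits[i * m * m:(i + 1) * m * m]
        rows = [int("".join(map(str, block[r * m:(r + 1) * m])), 2)
                for r in range(m)]
        rk = gf2_rank(rows)
        counts["m" if rk == m else "m-1" if rk == m - 1 else "low"] += 1
    p = {"m": 0.2888, "m-1": 0.5776, "low": 0.1336}  # SP 800-22, Sec. 2.5
    chi2 = sum((counts[c] - p[c] * n) ** 2 / (p[c] * n) for c in p)
    return math.exp(-chi2 / 2)  # p-value for 2 degrees of freedom
```

A random sequence should yield a p-value well above the usual 0.01 threshold, while a degenerate input (e.g. all zeros, every matrix rank 0) drives the p-value to essentially zero.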
Paul Frixons, María Naya-Plasencia, André Schrottenloher
ePrint Report
In this paper, we study quantum key-recovery attacks on block ciphers. While it is well known that a quantum adversary can generically speed up an exhaustive search of the key, much less is known on how to use specific vulnerabilities of the cipher to accelerate this procedure. In this context, we show how to convert classical boomerang and mixing boomerang attacks into efficient quantum key-recovery attacks. In some cases, we can even obtain a quadratic speedup, the same as simple differential attacks. We apply this technique to a 5-round attack on SAFER++.
Kaiyi Zhang, Hongrui Cui, Yu Yu
ePrint Report
Hash-based signatures are a conservative choice among post-quantum signatures, with arguably better-understood security than other post-quantum candidates. Nevertheless, major drawbacks that make them less favorable to deploy in practice are the (relatively) large signature size and the long signing and verification times.

In this paper, we introduce SPHINCS-$\alpha$, a stateless hash-based signature scheme, which benefits from a twofold improvement. First, we provide an improved Winternitz one-time signature with an efficient size-optimal encoding, which might be of independent interest. Second, we give a variant of the few-time signature scheme, FORC, by applying the Winternitz method. Plugging the two improved components into the framework of the state-of-the-art (stateless) hash-based SPHINCS$^+$, with carefully chosen parameters, yields a certain degree of performance improvement. In particular, under the "small" series parameter set aiming for compact signatures, our scheme reduces signature size and signing time by 8-11% and 3-15% respectively, compared to SPHINCS$^+$ at all security levels. For the "fast" series that prioritizes computation time, our scheme exhibits better performance in general. E.g., when instantiating the simple tweakable hash function with SHA-256, our scheme reduces the signing and verification time by 7-10% and up to 10% respectively, while keeping roughly the same signature size. The security proofs/estimates follow the framework of SPHINCS$^+$. To facilitate a fair comparison, we give an implementation of SPHINCS-$\alpha$ by adapting that of SPHINCS$^+$, and we provide a theoretical estimate of the number of hash function calls.
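The Winternitz idea that the first improvement builds on can be sketched as follows (a plain textbook WOTS over SHA-256, not SPHINCS-$\alpha$'s size-optimal encoding; the parameters here are our own). Each base-16 digit of the message digest selects how far to walk a hash chain, and a checksum prevents an attacker from walking any chain further.

```python
import hashlib
import os

W = 16           # Winternitz parameter: chains are walked 0..15 steps
MSG_DIGITS = 64  # base-16 digits of a SHA-256 digest
CS_DIGITS = 3    # checksum digits: max checksum 64 * 15 = 960 < 16**3

def chain(x: bytes, steps: int) -> bytes:
    """Apply SHA-256 `steps` times."""
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def digits(msg: bytes):
    """Base-16 digits of H(msg), plus a checksum: increasing any
    message digit decreases the checksum, so no signature can be
    mauled by advancing chains."""
    d = [int(c, 16) for c in hashlib.sha256(msg).hexdigest()]
    csum = sum(W - 1 - v for v in d)
    return d + [(csum >> (4 * i)) & 0xF for i in range(CS_DIGITS)]

def keygen():
    sk = [os.urandom(32) for _ in range(MSG_DIGITS + CS_DIGITS)]
    pk = [chain(s, W - 1) for s in sk]  # end of every chain
    return sk, pk

def sign(sk, msg: bytes):
    return [chain(s, v) for s, v in zip(sk, digits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(chain(s, W - 1 - v) == p
               for s, v, p in zip(sig, digits(msg), pk))
```

As a one-time scheme, each key pair must sign only a single message; the scheme's size/speed trade-off is governed by W, which is exactly the knob the improved encoding optimizes.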
Daniel Heinz, Matthias J. Kannwischer, Georg Land, Thomas Pöppelmann, Peter Schwabe, Daan Sprenkels
ePrint Report
In this work, we present a fast and first-order secure Kyber implementation optimized for ARM Cortex-M4. Most notably, to our knowledge this is the first liberally-licensed open-source Cortex-M4 implementation of masked Kyber. The ongoing NIST standardization process for post-quantum cryptography and newly proposed side-channel attacks have increased the demand for side-channel analysis and countermeasures for the finalists. On the foundation of the commonly used PQM4 project, we make use of the previously presented optimizations for Kyber on a Cortex-M4 and further combine different ideas from various recent works to achieve a better performance and improve the security in comparison to the original implementations. We show our performance results for first-order secure implementations. Our masked Kyber768 decapsulation on the ARM Cortex-M4 requires only 2 978 441 cycles, including randomness generation from the internal RNG. We then practically verify our implementation by using the t-test methodology with 100 000 traces.
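The core idea of first-order masking, as applied to Kyber's arithmetic, is to split every sensitive value into two shares whose sum mod q equals the secret, so that any single probed intermediate is uniformly random. A minimal sketch (ours, not the paper's code; q = 3329 is Kyber's modulus):

```python
import secrets

Q = 3329  # Kyber's modulus

def mask(x: int):
    """Split x into two shares summing to x mod Q; each share alone is
    uniformly distributed, so one first-order probe reveals nothing."""
    r = secrets.randbelow(Q)
    return ((x - r) % Q, r)

def unmask(shares) -> int:
    return sum(shares) % Q

def masked_add(a, b):
    """Linear operations act share-wise, without ever recombining."""
    return ((a[0] + b[0]) % Q, (a[1] + b[1]) % Q)
```

Linear layers (NTT, additions) are essentially free under this scheme; the cost the paper's cycle counts reflect comes from the nonlinear steps, which require conversions between arithmetic and Boolean sharings and fresh randomness from the RNG.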
Morgane Guerreau, Ange Martinelli, Thomas Ricosset, Mélissa Rossi
ePrint Report
Falcon is a very efficient and compact lattice-based signature finalist of the NIST's Post-Quantum standardization campaign. This work assesses Falcon's side-channel resistance by analyzing two vulnerabilities, namely the pre-image computation and the trapdoor sampling. The first attack is an improvement of Karabulut and Aysu (DAC 2021). It overcomes several difficulties inherent to the structure of the stored key like the Fourier representation and directly recovers the key with a limited number of traces and a reduced complexity. The main part of this paper is dedicated to our second attack: we show that a simple power analysis during the signature execution could provide the exact value of the output of a subroutine called the base sampler. This intermediate value does not directly lead to the secret and we had to adapt the so-called hidden parallelepiped attack initially introduced by Nguyen and Regev in Eurocrypt 2006 and reused by Ducas and Nguyen in Asiacrypt 2012. We extensively quantify the resources for our attacks and experimentally demonstrate them with Falcon's reference implementation on the ELMO simulator (McCann, Oswald and Whitnall USENIX 2017) and on a ChipWhisperer Lite with STM32F3 target (ARM Cortex M4). While the success of these attacks may be unsurprising because the reference implementation does not claim any side-channel protection, these new attacks highlight the need for side-channel protection for one of the three finalists of NIST's standardization campaign by pointing out the vulnerable parts and quantifying the resources of the attacks.
Itay Tsabary, Alex Manuskin, Ittay Eyal
ePrint Report
Smart-contract ledger platforms, like Ethereum, rate-limit their workload with incentives. Users issue orders, called transactions, with assigned fees, and system operators, called miners, confirm them and receive the fees. The combination of limited throughput and varying demand results in a volatile fee market, where under-paying transactions are not confirmed. However, the security of prominent smart contracts, securing billions of dollars, critically relies on their transactions being confirmed in specific, future time frames. Despite theoretical and practical active efforts, guaranteeing timely confirmation remained an open problem.

We present LedgerHedger, a mechanism for assuring that a miner will confirm a user's transaction in a target time frame. As the name implies, LedgerHedger employs hedging -- the user pays for her transaction in advance and the miner commits to confirm it even if the required fee rises. But unlike regulated markets, there are no external enforcers, and miners unilaterally choose which transactions to confirm. Due to the amounts at stake, relying on miner altruism does not suffice.

Therefore, LedgerHedger uses a combination of collateral deposits to incentivize correct behavior. The contract requires the issuer to deposit her payment and the miner to deposit a collateral. During the target time frame, the miner is incentivized to confirm the issuer's transaction if it exists, but is also capable of withdrawing the payment and the collateral if not.

LedgerHedger gives rise to a game, where the parties can only take specific actions. For a wide range of parameter values there is a subgame perfect equilibrium where both parties act as desired. We implement LedgerHedger and deploy it on an Ethereum test network, showing its efficacy and minor overhead.
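The incentive structure can be caricatured in a few lines (a toy payoff model with our own parameter names, not the paper's formal game): the collateral widens the range of market fees for which confirming remains the miner's best response, even after the market fee has risen above the escrowed payment.

```python
def miner_prefers_confirm(payment: float, collateral: float,
                          market_fee: float) -> bool:
    """Toy payoff comparison (our simplification). Confirming the
    hedged transaction yields the escrowed payment plus the collateral
    back, at the opportunity cost of block space that could otherwise
    carry a transaction paying the current market fee; defaulting
    forfeits the collateral."""
    return payment + collateral >= market_fee
```

For example, with a pre-paid fee of 10 and a market fee that has risen to 40, a collateral of 50 keeps confirmation rational, while a collateral of 5 does not; the paper's analysis identifies the parameter ranges where this holds as a subgame perfect equilibrium.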
Xiaokang Dai, Wenyuan Wu, Yong Feng
ePrint Report
For multi-key fully homomorphic encryption (MKFHE) schemes based on the Learning With Errors (LWE) problem, ciphertext expansion is required to enable multi-key operation. To achieve ciphertext expansion, the random matrix used in encryption must itself be encrypted. For a Boolean circuit with input length $N$, multiplication depth $L$ and security parameter $\lambda$, the number of additional encryptions introduced to achieve ciphertext expansion is $O(N\lambda^6L^4)$, which is a large overhead for computationally limited local users. To alleviate this overhead, we propose a weak version of MKFHE: using the leakage-resilience property of the Leftover Hash Lemma (LHL), we construct the first weak MKFHE scheme in the plain model. The overall secret key is the sum of all participants' keys. We note that previous MKFHE schemes with this key structure are all based on the Common Reference String (CRS) model. Our construction is simpler and more efficient: we do not need to encrypt the random matrix, so the extra overhead is reduced from $O(N\lambda^6L^4)$ to $O(N)$.

For MKFHE based on the Ring Learning With Errors (RLWE) problem, since the Regularity Lemma on rings does not have a corresponding leakage-resilience property, we can only construct the weak MKFHE scheme in the random oracle model.
Luca De Feo, Nadia El Mrabet, Aymeric Genêt, Novak Kaluđerović, Natacha Linard de Guertechin, Simon Pontié, Élise Tasso
ePrint Report
We present new side-channel attacks on SIKE, the isogeny-based candidate in the NIST PQC competition. Previous works had shown that SIKE is vulnerable to differential power analysis and pointed to coordinate randomization as an effective countermeasure. We show that coordinate randomization alone is not sufficient, as SIKE is vulnerable to a class of attacks similar to refined power analysis in elliptic curve cryptography, named zero-value attacks. We describe and confirm in the lab two such attacks leading to full key recovery, and analyze their countermeasures.
Aron Gohr
ePrint Report
The topic of this contribution is the cryptanalytic use of spurious keys, i.e. non-target keys returned by exhaustive key search. We show that the counting of spurious keys allows the construction of distinguishing attacks against block ciphers that are generically expected to start working at (marginally) lower computational cost than is required to find the target key by exhaustive search. We further show that if a brute force distinguisher does return a strong distinguishing signal, fairly generic optimizations to random key sampling will in many circumstances render the cost of detecting the signal massively lower than the cost of exhaustive search.

We then use our techniques to quantitatively characterize various non-Markov properties of round-reduced Speck32/64. We fully compute, for the first time, the ciphertext pair distribution of 3-round Speck32/64 with one input difference $\Delta$ without any approximations and show that it differs markedly from Markov model predictions; we design a perfect distinguisher for the output distribution induced by the same input difference for 5-round Speck32/64 that is efficient enough to process millions of samples on an ordinary PC in a few days; we design a generic two-block known-plaintext distinguisher against Speck32/64 and show that it achieves 58 percent accuracy against 5-round Speck, equivalent e.g. to a linear distinguisher with $\approx 50$ percent bias.

Turning our attention back to differential cryptanalysis, we show that our known-plaintext distinguisher automatically handles the 5-round output distribution induced by input difference $\Delta$ as well as the perfect differential distinguisher, but that no significant additional signal is obtained from knowing the plaintexts. We then apply the known-plaintext brute force distinguisher to 7-round Speck32/64 with fixed input difference $\Delta$, finding that it achieves essentially the same distinguishing advantage as state-of-the-art techniques (neural networks with key averaging). We also show that our techniques can precisely characterize non-Markov properties in longer differential trails for Speck32/64.
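The brute-force counting idea can be illustrated on Speck32/64 itself. The sketch below (ours, not the paper's code) implements the standard Speck32/64 round function and key schedule, then counts, over a restricted 16-bit key subspace, how many candidate keys are consistent with one known plaintext and a partial ciphertext; with a 16-bit filter, roughly one spurious key is expected alongside the target.

```python
MASK = 0xFFFF  # Speck32/64 works on 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck_round_keys(l2, l1, l0, k0, rounds):
    """Speck32/64 key schedule (the full cipher uses 22 rounds)."""
    l, k = [l0, l1, l2], [k0]
    for i in range(rounds - 1):
        l.append(((k[i] + ror(l[i], 7)) & MASK) ^ i)
        k.append(rol(k[i], 2) ^ l[i + 3])
    return k

def speck_encrypt(x, y, key, rounds=22):
    """Speck round: x = ((x >>> 7) + y) ^ rk; y = (y <<< 2) ^ x."""
    for rk in speck_round_keys(*key, rounds):
        x = ((ror(x, 7) + y) & MASK) ^ rk
        y = rol(y, 2) ^ x
    return x, y

def count_key_candidates(pt, ct_word, rounds):
    """Brute-force a restricted 16-bit subspace of the 64-bit key
    (l2 = l1 = l0 = 0, k0 free) and count keys whose encryption of pt
    matches the first ciphertext word; any key other than the target
    that survives this 16-bit filter is spurious."""
    return sum(1 for k0 in range(1 << 16)
               if speck_encrypt(pt[0], pt[1], (0, 0, 0, k0),
                                rounds)[0] == ct_word)
```

Counting how this survivor statistic deviates from its expectation under a random permutation is exactly the kind of signal the brute-force distinguisher exploits; scaling the idea to the full 64-bit key space is, of course, the expensive part.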
Sourav Das, Zhuolun Xiang, Ling Ren
ePrint Report
In this paper, we present an asynchronous Byzantine reliable broadcast (RBC) protocol with balanced costs and an improved asynchronous verifiable information dispersal (AVID) protocol. Our RBC protocol broadcasts a message $M$ among $n$ nodes with total communication cost $O(n|M|+\kappa n^2)$ and per-node communication cost $O(|M|+\kappa n)$. In contrast, the state-of-the-art reliable broadcast protocol has imbalanced costs where the broadcaster incurs $O(n|M|)$ while other nodes incur a communication cost of $O(|M|+\kappa n)$. We then use our new RBC protocol and additional techniques to design an asynchronous verifiable information dispersal (AVID) protocol with total dispersal cost $O(|M|+\kappa n^2)$ and retrieval cost $O(|M|+\kappa n)$. In our AVID protocol, the clients incur a communication cost of $O(|M|+\kappa n)$ in comparison to $O(|M|+\kappa n\log n)$ cost of prior best AVID protocol that does not require any trusted setup. Moreover, each node in our AVID protocol incurs a storage cost of $O(|M|/n+\kappa)$ bits, whereas in prior best AVID protocol each node incurs a storage cost of $O(|M|/n+\kappa \log n)$ bits. Finally, we present lower bound results on per-node communication cost of any deterministic \rbc\ protocol and the total communication cost of dispersal and retrieval phase in any deterministic \avid\ protocol. Both our balanced RBC and AVID protocols have near-optimal communication costs.
Weikeng Chen, Thang Hoang, Jorge Guajardo, Attila A. Yavuz
ePrint Report
End-to-end encrypted file-sharing systems enable users to share files without revealing the file contents to the storage servers. However, the servers still learn metadata, including user identities and access patterns. Prior work tried to remove such leakage but relied on strong assumptions: Metal (NDSS '20) is not secure against malicious servers, while MCORAM (ASIACRYPT '20) provides confidentiality against malicious servers, but not integrity.

We present Titanium, a metadata-hiding file-sharing system that offers confidentiality and integrity against malicious users and servers. Compared with MCORAM, which offers only confidentiality against malicious servers, Titanium also offers integrity. Experiments show that Titanium is 5x-200x or more faster than MCORAM.
Asep Muhamad Awaludin, Harashta Tatimma Larasati, Howon Kim
ePrint Report
In this paper, we present a high-speed, unified elliptic curve cryptography (ECC) processor for arbitrary Weierstrass curves over GF(p) which, to the best of our knowledge, outperforms other similar works in terms of execution time. Our approach combines schoolbook long multiplication with the Karatsuba algorithm for the elliptic curve point multiplication (ECPM) to achieve better parallelization while retaining low complexity. In the hardware implementation, the substantial gain in speed also comes from our n-bit pipelined Montgomery Modular Multiplier (pMMM), which is constructed from our n-bit pipelined multiplier-accumulators that utilize digital signal processor (DSP) primitives as digit multipliers. Additionally, we introduce a unified, pipelined modular adder-subtractor (pMAS) for the underlying field arithmetic, and leverage a more efficient yet compact scheduling of the Montgomery ladder algorithm. Implementations for a 256-bit modulus on the 7-series FPGAs Virtex-7, Kintex-7, and XC7Z020 yield execution times of 0.139, 0.138, and 0.206 ms, respectively. Furthermore, since our pMMM module is generic for any curve in Weierstrass form, we support multi-curve parameters, resulting in a unified ECC architecture. Lastly, our design runs in constant time, making it suitable for applications requiring both high speed and SCA resistance.
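The schoolbook/Karatsuba combination above is a hardware design choice, but the recursion itself is easy to sketch in software (our illustration, not the paper's ECPM datapath): recurse with three half-size products instead of four, and fall back to the schoolbook method (here, the native multiply) below a threshold.

```python
def karatsuba(x: int, y: int, threshold: int = 64) -> int:
    """Karatsuba multiplication:
    (a*2^m + b)(c*2^m + d)
      = ac*2^(2m) + ((a+b)(c+d) - ac - bd)*2^m + bd,
    i.e. three recursive multiplies instead of four."""
    if x < (1 << threshold) or y < (1 << threshold):
        return x * y  # schoolbook/native multiply below the threshold
    m = max(x.bit_length(), y.bit_length()) // 2
    a, b = x >> m, x & ((1 << m) - 1)
    c, d = y >> m, y & ((1 << m) - 1)
    ac = karatsuba(a, c, threshold)
    bd = karatsuba(b, d, threshold)
    mid = karatsuba(a + b, c + d, threshold) - ac - bd
    return (ac << (2 * m)) + (mid << m) + bd
```

In the processor described above the analogous split is realized in hardware, with DSP digit multipliers playing the role of the base case; the recursion trades one multiplication for a few cheap additions and shifts, which is what enables the parallelization the authors exploit.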
Maria Eichlseder, Ahmet Can Mert, Christian Rechberger, Markus Schofnegger
ePrint Report
The concept of lightweight cryptography has gained in popularity recently, also due to various competitions and standardization efforts specifically targeting more efficient algorithms, which are also easier to implement.

One of the important properties of lightweight constructions is the area of a hardware implementation, or in other words, the size of the implementation in a particular environment. Reducing the area usually has multiple advantages like decreased production cost or lower power consumption.

In this paper, we focus on MAC functions and on ASIC implementations in hardware, and our goal is to minimize the area requirements in this setting. For this purpose, we design a new MAC scheme based on the well-known Pelican MAC function. However, in an effort to reduce the size of the implementation, we make use of smaller internal permutations. While this certainly leads to a higher internal collision probability, effectively reducing the allowed data, we show that the full security is still maintained with respect to other attacks, in particular forgery and key recovery attacks. This is useful in scenarios which do not require large amounts of data.

Our detailed estimates, comparisons, and concrete benchmark results show that our new MAC scheme has the lowest area requirements and offers competitive performance. Indeed, we observe an area advantage of up to 30% in our estimated comparisons, and an advantage of around 13% compared to the closest competitor in a concrete implementation.

17 January 2022

University of Cape Town
Job Posting
The School of Economics in the Faculty of Commerce at the University of Cape Town is inviting applications for PhD students and Postdoctoral Research Fellows. The overarching theme of the fellowships is "the data economy", and students and recent graduates with a background in economics, computer science, mathematics, finance, and related disciplines are invited to apply. Any of the following research areas are of particular interest:

- The economics of privacy
- Privacy-preserving technology
- Data marketplaces
- The economics of innovation

Successful applicants will work within a research group that strives for academic excellence and is interested in various aspects of data ownership, from analyzing its theoretical foundation in the economics of property rights and privacy, to developing practical tools and studying existing applications. In this context, open source software, artistic work, and scholarly research are three examples where at least a weak notion of data ownership can be studied.

Applications are open now and are considered on a rolling basis until 30 April or until the positions are filled. Appointments are for as soon as is feasible. The tenure of the Postdoctoral Fellowship is up to three years; the tenure of the PhD positions is usually three years, but can be extended to up to five years. Due to the ongoing pandemic, successful applicants can choose to work remotely in 2022.

Postdoctoral Fellows receive a fellowship of R350,000 per annum; no benefits are included in the value of the fellowship. An additional travel allowance of R30,000 p.a. is available for successful applicants. The successful applicant will be required to comply with the University's approved policies, procedures and practices for the postdoctoral sector. PhD students receive a fellowship of R210,000 p.a. and an additional travel allowance of R20,000 p.a., in addition to access to university-wide funding for conference travel.
We typically arrange for PhD students to spend at least one semester at a leading international university and help organize an internship at a leading international policy institution or fintech company.

Closing date for applications:

Contact: anda.ngcaba@uct.ac.za, co-pierre.georg@uct.ac.za

University of Edinburgh
Job Posting
The position is with the Security and Privacy group, part of the School of Informatics at The University of Edinburgh. The Security and Privacy group carries out research on all aspects of security and privacy, including applied and theoretical cryptography, software security, systems and hardware security, human factors, protocol analysis, verification and quantum/post-quantum cryptography. The successful candidate will have a PhD (or be near to completion); experience as an established researcher in one or more areas of security and privacy including but not limited to cryptography, database security, distributed systems security, hardware security, Internet of Things (IoT) security, network security, systems security; enthusiasm to undertake original research including leading a research group; the ability to engage with undergraduate and postgraduate teaching and academic supervision; and the ability to be involved in enhancing eLearning security initiatives.

Closing date for applications:

Contact: Aggelos Kiayias

More information: https://elxw.fa.em3.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1001/job/2980/?utm_medium=jobshare


16 January 2022

P. Dusart, G. Letourneux, O. Vivolo
ePrint Report
We explain how differential fault analysis (DFA) works on AES with 128-, 192- or 256-bit keys.