IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
03 September 2021
Daniel R. L. Brown
Some key agreement schemes, such as Diffie--Hellman key agreement, reduce to Rabi--Sherman key agreement, in which Alice sends $ab$ to Charlie, Charlie sends $bc$ to Alice, and they agree on the key $a(bc) = (ab)c$, where the multiplicative notation indicates some specialized associative binary operation.
All non-interactive key agreement schemes, where each peer independently determines a single delivery to the other, reduce to this case, because the ability to agree implies the existence of an associative operation. By extending the associative operation's domain, the key agreement scheme can be enveloped into a mathematical ring, such that all cryptographic values are ring elements, and all key agreement computations are ring multiplications. (A smaller envelope, a semigroup instead of a ring, is also possible.)
Security relies on the difficulty of division: here, meaning an operator $/$ such that $((ab)/b)b = ab$. Security also relies on the difficulty of the less familiar wedge operation $[ab, b, bc] \mapsto abc$.
When Rabi--Sherman key agreement is instantiated as Diffie--Hellman key agreement: its multiplication amounts to modular exponentiation; its division amounts to the discrete logarithm problem; the wedge operation amounts to the computational Diffie--Hellman problem.
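To make the Diffie--Hellman instantiation concrete, here is a minimal Python sketch, with toy parameters chosen for readability rather than security, in which modular exponentiation plays the role of the associative multiplication:

```python
import secrets

# Toy Diffie--Hellman seen through the Rabi--Sherman lens: the public generator g
# plays the role of the middle value b, and "multiplying" by a secret exponent is
# modular exponentiation. Parameters are insecure and chosen only for readability.
p = 2**127 - 1                      # a Mersenne prime (toy example only)
g = 3

a = secrets.randbelow(p - 2) + 1    # Alice's secret
c = secrets.randbelow(p - 2) + 1    # Charlie's secret

ab = pow(g, a, p)                   # Alice's delivery, playing the role of "ab"
bc = pow(g, c, p)                   # Charlie's delivery, playing the role of "bc"

key_alice   = pow(bc, a, p)         # a(bc) = g^(ac) mod p
key_charlie = pow(ab, c, p)         # (ab)c = g^(ac) mod p
assert key_alice == key_charlie
```

Here recovering $a$ from $ab$ is the division problem (discrete logarithm), and computing the shared key from the two deliveries alone is the wedge problem (computational Diffie--Hellman).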
Ring theory is well-developed and implies efficient division algorithms in some specific rings, such as matrix rings over fields. Semigroup theory, though less widely-known, also implies efficient division in specific semigroups, such as group-like semigroups.
The rarity of key agreement schemes with well-established security suggests that easy multiplication with difficult division (and wedges) is elusive.
Reduction of key agreement to ring or semigroup multiplication is not a panacea for cryptanalysis. Nonetheless, novel proposals for key agreement perhaps ought to run the gauntlet of a checklist for vulnerability to well-known division strategies that generalize across several forms of multiplication. Ambitiously applying this process of elimination to a plethora of diverse rings or semigroups might also, if only by a fluke, leave standing a few promising schemes, which might then deserve a more focused cryptanalysis.
31 August 2021
Tim Beyne, Siemen Dhooghe, Adrián Ranea, Danilo Sijačić
We propose a second-order masking of the AES in hardware that requires an order of magnitude fewer random bits per encryption compared to previous work.
The design and its security analysis are based on recent results by Beyne et al. from Asiacrypt 2020. Applying these results to the AES required overcoming significant engineering challenges by introducing new design techniques. Since the security analysis is based on linear cryptanalysis, the masked cipher needs to have sufficient diffusion and the S-box sharing must be highly nonlinear. Hence, in order to apply the changing of the guards technique, a detailed study of its effect on the diffusion of the linear layer becomes important. The security analysis is automated using an SMT solver. Furthermore, we propose a sharpening of the glitch-extended probing model that results in improvements to our concrete security bounds. Finally, it is shown how to amortize randomness costs over multiple evaluations of the masked cipher.
Barbara Gigerl, Robert Primas, Stefan Mangard
Physical side-channel attacks like power analysis pose a serious threat to cryptographic devices in real-world applications. Consequently, devices implement algorithmic countermeasures like masking.
In the past, works on the design and verification of masked software implementations have mostly focused on simple microprocessors of the kind found on smart cards. However, many other applications, such as those in the automotive industry, require side-channel-protected cryptographic computations on much more powerful CPUs. In such situations, the security loss due to complex architectural side effects, the corresponding performance degradation, as well as discussions of suitable probing models and verification techniques are still vastly unexplored research questions.
We answer these questions and perform a comprehensive analysis of more complex processor architectures in the context of masking-related side effects. First, we analyze the RISC-V SweRV core, featuring a 9-stage pipeline, two execution units, and load/store buffers, and point out a significant gap between security in a simple software probing model and practical security on such CPUs. More concretely, we show that architectural side effects of complex CPU architectures can significantly reduce the protection order of masked software, both via formal analysis in the hardware probing model and empirically via gate-level timing simulations. We then discuss the options of fixing these problems in hardware or leaving them as constraints to software. Based on these software constraints, we formulate general rules for the design of masked software on more complex CPUs. Finally, we compare several implementation strategies for masking schemes and show in a case study that designing secure masked software for complex CPUs is still possible with overhead as low as 13%.
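For readers less familiar with masking, the following minimal Python sketch shows first-order Boolean masking of a toy linear computation; it is only an illustration of share splitting, not the higher-order, hardened constructions discussed in the paper, and precisely this kind of code can silently lose its protection order on complex CPUs through effects such as register transitions:

```python
import secrets

def mask(x):
    """Split an 8-bit secret x into two Boolean shares (x0, x1) with x0 ^ x1 == x."""
    x0 = secrets.randbits(8)
    return x0, x0 ^ x

def masked_xor(a, b):
    """XOR is linear, so it is computed share-wise and needs no fresh randomness."""
    return a[0] ^ b[0], a[1] ^ b[1]

def unmask(shares):
    return shares[0] ^ shares[1]

k = mask(0x2B)                     # masked key byte
m = mask(0x55)                     # masked message byte
assert unmask(masked_xor(k, m)) == 0x2B ^ 0x55
```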
Philipp Muth, Fabio Campos
We present an actively secure threshold scheme in the setting of Hard Homogeneous Spaces (HHS) which allows fine-grained access structures. More precisely, we elevate a given passively secure isogeny-based threshold scheme to an actively secure setting. We prove the active security and simulatability of our advanced schemes. By defining some characterising properties, we are able to expand the range of secret sharing schemes that support the given scheme. Furthermore, we show that Shamir's scheme satisfies our generalised properties, so that our approach truly represents a less restrictive generalisation.
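Since the generalisation is benchmarked against Shamir's scheme, the following short Python sketch recalls textbook Shamir secret sharing over a prime field (toy parameters, independent of the isogeny/HHS setting of the paper):

```python
import secrets

P = 2**61 - 1          # prime field modulus (toy choice)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```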
Marcel Hollenstein, David Naccache, Peter B. Roenne, Peter Y A Ryan, Robert Weil, Ofer Yifrach-Stav
As humanity struggles to contain the global COVID pandemic, privacy concerns are emerging regarding confinement, tracing and testing.
The scientific debate concerning privacy of the COVID tracing efforts has been intense, especially focusing on the choice between centralised and decentralised tracing apps. The privacy concerns regarding COVID \underline{testing}, however, have not received as much attention even though the privacy at stake is arguably even higher. COVID tests require the collection of samples. Those samples possibly contain viral material but inevitably also human DNA. Patient DNA is not necessary for the test but it is technically impossible to avoid collecting it. The unlawful preservation, or misuse, of such samples at a massive scale may hence disclose patient DNA information with far-reaching privacy consequences.
Inspired by the cryptographic concept of ``Indistinguishability under Chosen Plaintext Attack'', this paper presents a blueprint for novel types of tests that detect viral presence without leaving persistent traces of the patient's DNA.
Fanliang Hu, Huanyu Wang, Junnian Wang
Deep Learning Side-Channel Attacks (DLSCAs) have become a realistic threat to implementations of cryptographic algorithms, such as the Advanced Encryption Standard (AES). By utilizing deep-learning models to analyze side-channel measurements, the attacker is able to derive the secret key of the cryptographic algorithm. However, when traces have multiple leakage intervals for a specific attack point, the majority of existing works train neural networks on these traces directly, without a preprocessing step for each leakage interval. This degrades the quality of profiling traces due to noise and non-primary components. In this paper, we first divide the multi-leaky traces into leakage intervals and train models on different intervals separately. Afterwards, we concatenate these neural networks to build the final network, which we call the multi-input model. We test the proposed multi-input model on traces captured from STM32F3 microcontroller implementations of AES-128 and show a 2-fold improvement over previous single-input attacks.
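A minimal sketch of the multi-input idea, assuming a Keras/TensorFlow setup with hypothetical interval lengths and layer sizes (the actual architecture and hyperparameters in the paper will differ):

```python
import tensorflow as tf

# Hypothetical lengths of two leakage intervals cut from each trace.
LEN_A, LEN_B = 200, 150

def branch(length, name):
    """One small sub-network per leakage interval."""
    inp = tf.keras.Input(shape=(length,), name=name)
    x = tf.keras.layers.Dense(64, activation="relu")(inp)
    return inp, x

inp_a, feat_a = branch(LEN_A, "interval_a")
inp_b, feat_b = branch(LEN_B, "interval_b")

merged = tf.keras.layers.Concatenate()([feat_a, feat_b])
merged = tf.keras.layers.Dense(128, activation="relu")(merged)
out = tf.keras.layers.Dense(256, activation="softmax")(merged)  # 256 key-byte hypotheses

model = tf.keras.Model(inputs=[inp_a, inp_b], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit([traces_a, traces_b], labels, ...) on the profiling traces
```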
Eric Brier, Rémi Géraud-Stewart, Marc Joye, David Naccache
Higher-order power residues have enabled the construction of numerous public-key encryption schemes, authentication schemes, and digital signatures. Their explicit characterization is however challenging; an algorithm of Caranay and Scheidler computes $p$-th power residue symbols, with $p \le 13$ an odd prime, provided that primary elements in the corresponding cyclotomic field can be efficiently found.
In this paper, we describe a new, generic algorithm to compute primary elements in cyclotomic fields; which we apply for $p=3,5,7,11,13$. A key insight is a careful selection of fundamental units as put forward by D\'enes.
This solves an essential step in the Caranay--Scheidler algorithm. We give a unified view of the problem. Finally, we provide the first efficient deterministic algorithm for the computation of the 9-th and 16-th power residue symbols.
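For intuition only: over a rational prime $q \equiv 1 \pmod{p}$, the $p$-th power residue character of $a$ can be read off via Euler's criterion, as in the toy Python check below; the paper's algorithms instead work with residue symbols and primary elements in cyclotomic fields, which this check does not capture.

```python
def pth_power_residue(a, p, q):
    """Return a^((q-1)/p) mod q for a prime q with q % p == 1.
    The result is 1 exactly when a is a p-th power residue mod q;
    otherwise it is a nontrivial p-th root of unity mod q."""
    assert (q - 1) % p == 0
    return pow(a, (q - 1) // p, q)

# Example: p = 3, q = 7 (7 = 1 mod 3). The cubic residues mod 7 are {1, 6}.
print([x for x in range(1, 7) if pth_power_residue(x, 3, 7) == 1])  # [1, 6]
```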
Zhen Shi, Chenhui Jin, Yu Jin
In this paper, we improve the linear approximations of SNOW-V and SNOW-Vi. We optimize the automatic search program by replacing the S-box part with accurate characterizations of the Walsh spectra of the S-boxes, which yields a series of trails with higher correlations. On the basis of existing results, we investigate the common features of linear approximation trails with high correlation, and search for more trails by exhausting free masks. By summing the correlations of trails with the same input and output masks, we get closer to the real correlation. As a result, we obtain a linear approximation with correlation $-2^{-47.76}$, which leads to a correlation attack on SNOW-V and SNOW-Vi with time complexity $2^{246.53}$, data complexity $2^{237.5}$, and memory complexity $2^{238.77}$.
Fukang Liu, Willi Meier, Santanu Sarkar, Gaoli Wang, Ryoma Ito, Takanori Isobe
ZUC-256 is a stream cipher designed for 5G applications and is currently under evaluation for standardization in 5G mobile telecommunications by the Security Algorithms Group of Experts (SAGE). A notable feature of the round update function of ZUC-256 is that many operations are defined over different fields, which significantly increases the difficulty of analyzing the algorithm. In this paper, we develop new techniques to carefully control the interactions between these operations defined over different fields. Moreover, while the designers expect that only simple input differences can be exploited to mount a practical attack on 27 initialization rounds, as indeed implied by the 28-round practical attack discovered by Babbage and Maximov, we demonstrate that much more complex input differences can be utilized to achieve practical attacks on more rounds of ZUC-256. At first glance, our techniques are somewhat similar to those developed by Wang et al. for the MD-SHA hash family. However, as ZUC-256 is quite different from the MD-SHA hash family, we are in fact dealing with different problems and overcoming new obstacles. With the discovered complex input differences, we are able to present the first practical distinguishing attacks on 31 out of 33 rounds of ZUC-256 and on 30 out of 33 rounds of the new version of ZUC-256 called ZUC-256-v2, respectively. It remains unclear whether our attacks can be further extended to more rounds with more advanced techniques. Based on the current attacks, we believe that the full 33 initialization rounds are marginal.
David Gerault, Thomas Peyrin, Quan Quan Tan
Automated methods have become crucial components when searching for distinguishers against symmetric-key cryptographic primitives. While MILP and SAT solvers are among the most popular tools to model ciphers and perform cryptanalysis, other methods with different performance profiles are appearing. In this article, we explore the use of Constraint Programming (CP) for differential cryptanalysis on the ASCON authenticated encryption family (first choice of the CAESAR lightweight applications portfolio and current finalist of the NIST LWC competition) and its internal permutation. We first present a search methodology for finding differential characteristics for ASCON with CP, which can easily find the best differential characteristics already reported by the ASCON designers. This shows the capability of CP to easily generate good differential results compared to dedicated search heuristics. Based on our tool, we also parametrize the search strategies in CP to generate other differential characteristics with the goal of forming limited-birthday distinguishers for 4, 5, 6 and 7 rounds and rectangle attacks for 4 and 5 rounds of the ASCON internal permutation. We propose a categorization of the distinguishers into black-box and non-black-box to better differentiate them, as they are often useful in different contexts. We also obtain limited-birthday distinguishers which currently represent the best known distinguishers for 4, 5 and 6 rounds in the category of non-black-box distinguishers. Leveraging our tool again, we generate forgery attacks against both round-reduced ASCON-128 and ASCON-128a, improving over the best reported results at the time of writing. Finally, using the best differential characteristic we have found for 2 rounds, we also improve a recent attack on round-reduced ASCON-Hash.
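As background, the starting point of any automated differential search, whether with MILP, SAT, or CP, is the difference distribution table (DDT) of the S-box. The toy Python sketch below computes the DDT of a 4-bit S-box (the PRESENT S-box, used only as a familiar example; ASCON's S-box is 5-bit), on top of which the CP models of the paper impose much richer constraints:

```python
# Difference distribution table of a 4-bit S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            dout = sbox[x] ^ sbox[x ^ din]
            table[din][dout] += 1
    return table

T = ddt(SBOX)
# The probability of the single-S-box transition din -> dout is T[din][dout] / 16;
# a differential characteristic multiplies such probabilities along the trail.
best = max((T[i][j], i, j) for i in range(1, 16) for j in range(16))
print(best)   # highest-probability nontrivial transition
```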
Lin You, Wang Cheng, Gengran Hu
Among the various authentication methods, biometrics provide good user friendliness. However, the non-renewability of biometrics leads to the problem that they might be stolen. The emergence of fuzzy extractors is a promising solution to this problem. Fuzzy extractors can extract uniformly distributed keys from various noisy random sources (such as biometrics, physical unclonable functions and quantum bits). However, research on fuzzy extractors has mainly focused on the theoretical level and does not consider how the extracted biometrics should be encoded and implemented. This paper first introduces a method of feature selection and encoding for fingerprints, together with a secure sketch based on the Chebyshev distance in a rectangular coordinate system. Then we present a construction of reusable and robust fuzzy extractors (rrFE). Meanwhile, we prove that our secure sketch scheme provides sufficient security. Finally, we present the complete experimental process and a demo program, and test the performance of our proposed fuzzy extractors. Compared with other schemes, our scheme has lower storage overhead.
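A minimal sketch of the quantization idea behind an infinity-norm (Chebyshev-distance) secure sketch, with hypothetical integer features and tolerance; this is only the textbook shift-based construction, not the concrete scheme proposed in the paper:

```python
def sketch(w, t):
    """Public helper data for feature vector w, tolerating Chebyshev distance t."""
    m = 2 * t + 1
    return [wi % m for wi in w]

def recover(w_noisy, helper, t):
    """Recover w exactly whenever max_i |w_noisy[i] - w[i]| <= t."""
    m = 2 * t + 1
    out = []
    for wi, si in zip(w_noisy, helper):
        d = (wi - si) % m
        if d > t:              # map the difference to the centred range [-t, t]
            d -= m
        out.append(wi - d)
    return out

w = [103, 87, 250, 66]          # enrolled fingerprint features (hypothetical)
helper = sketch(w, t=3)
w_noisy = [101, 90, 252, 64]    # fresh reading, each coordinate off by at most 3
assert recover(w_noisy, helper, t=3) == w
```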
26 August 2021
Tarun Chitra, Guillermo Angeris, Alex Evans
Constant function market makers (CFMMs) are the most popular mechanism for facilitating decentralized trading. While these mechanisms have facilitated hundreds of billions of dollars of trades, they provide users with little to no privacy. Recent work illustrates that privacy cannot be achieved in CFMMs without forcing worse pricing and/or latency on end users. This paper more precisely quantifies the trade-off between pricing and privacy in CFMMs. We analyze a simple privacy-enhancing mechanism called Uniform Random Execution and prove that it provides $(\epsilon, \delta)$-differential privacy. The privacy parameter $\epsilon$ depends on the curvature of the CFMM trading function and the number of trades executed. This mechanism can be implemented in any blockchain system that allows smart contracts to access a verifiable random function. We also investigate the worst case complexity over all private CFMM mechanisms using recent results from private PAC learning. These results suggest that one cannot do much better than Uniform Random Execution in CFMMs with non-zero curvature. Our results provide an optimistic outlook on providing partial privacy in CFMMs.
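As background, the best-known CFMM is the constant-product market maker. The toy Python sketch below shows how such a pool prices a trade and how the trade moves the public price, which is exactly the leakage that privacy mechanisms such as Uniform Random Execution (not modeled here) aim to blunt:

```python
# Constant-product market maker: reserves satisfy x * y = k before and after a trade.
class ConstantProductCFMM:
    def __init__(self, reserve_x, reserve_y):
        self.x = reserve_x
        self.y = reserve_y

    def spot_price(self):
        """Marginal price of asset X in units of Y."""
        return self.y / self.x

    def swap_x_for_y(self, dx):
        """Trade dx of X into the pool and receive dy of Y, keeping x*y constant."""
        k = self.x * self.y
        new_x = self.x + dx
        dy = self.y - k / new_x
        self.x, self.y = new_x, k / new_x
        return dy

pool = ConstantProductCFMM(1_000.0, 1_000.0)
print(pool.spot_price())         # 1.0
print(pool.swap_x_for_y(10.0))   # slightly less than 10 due to curvature (price impact)
print(pool.spot_price())         # price moved: observers can infer the trade size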
Lars Folkerts, Charles Gouert, Nektarios Georgios Tsoutsos
Machine learning as a service (MLaaS) has risen to become a prominent technology due to the large development time, amount of data, hardware costs, and level of expertise required to develop a machine learning model. However, privacy concerns prevent the adoption of MLaaS for applications with sensitive data. One solution to preserve privacy is to use fully homomorphic encryption (FHE) to perform the ML computations. FHE has great power to protect sensitive inputs, and recent advancements have lowered computational costs by several orders of magnitude, allowing practical applications to be developed. This work optimizes FHE-based private machine learning inference by leveraging ternary neural networks. Such neural networks, whose weights are constrained to {-1,0,1}, have special properties that we exploit to operate efficiently in the homomorphic domain. We introduce a general framework that takes an input model, performs plaintext training, and efficiently evaluates private inference leveraging FHE. We perform inference experiments with the MNIST, CIFAR-10, and ImageNet datasets and achieve private inference that is only 1.7 to 2.7 orders of magnitude slower than the plaintext baseline.
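A minimal sketch of the ternary-weight idea, assuming a simple hypothetical thresholding rule (the paper's training procedure and homomorphic evaluation are not shown): with weights in {-1, 0, 1}, every dot product reduces to additions and subtractions, which is what makes evaluation under FHE cheaper.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize real weights to {-1, 0, +1} by thresholding (toy rule)."""
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

def ternary_matvec(W_t, x):
    """Dot products with ternary weights need only additions and subtractions."""
    pos = (W_t == 1).astype(x.dtype)
    neg = (W_t == -1).astype(x.dtype)
    return pos @ x - neg @ x

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))
x = rng.normal(size=8)
W_t = ternarize(W)
print(W_t)
print(np.allclose(ternary_matvec(W_t, x), W_t @ x))   # True: same result, cheaper ops
```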
Aleksei Udovenko
This work presents techniques for modeling Boolean functions by mixed-integer linear inequalities (MILP) on binary variables in-place (without auxiliary variables), reaching the minimum possible number of inequalities for small functions and providing meaningful lower bounds on the number of inequalities when reaching the minimum is infeasible. While the minimum number of inequalities does not directly translate to best performance in MILP applications, it nonetheless provides a useful benchmark. We remark that our framework is heuristic and relies on SAT solvers and MILP optimization, so its feasibility is limited.
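To make the modeling task concrete: a set of inequalities models a Boolean function's point set if every point satisfies all inequalities and every other binary point violates at least one. A small brute-force checker in Python, on a toy example unrelated to the paper's minimization machinery:

```python
from itertools import product

def models_exactly(support, inequalities, n):
    """Check that the binary points satisfying all inequalities are exactly `support`.
    Each inequality is (coeffs, const) and encodes sum(coeffs[i]*x[i]) + const >= 0."""
    satisfied = {
        x for x in product((0, 1), repeat=n)
        if all(sum(c * xi for c, xi in zip(coeffs, x)) + const >= 0
               for coeffs, const in inequalities)
    }
    return satisfied == set(support)

# Toy target: the graph of AND, i.e. the points (x1, x2, x3) with x3 = x1 AND x2.
support = {(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)}
ineqs = [
    (( 1,  1, -2), 0),   #  x1 + x2 - 2*x3      >= 0
    ((-1, -1,  2), 1),   # -x1 - x2 + 2*x3 + 1  >= 0
]
print(models_exactly(support, ineqs, 3))   # True
```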
Olivier Pereira
Individual verifiability remains one of the main practical challenges in e-voting systems and, despite the central importance of this property, countries that sought to implement it faced repeated security problems.
In this note, we revisit this property in the context of the IVXV version of the Estonian voting system, which has been in use for the Estonian municipal elections of 2017 and for the Estonian and European parliamentary elections of 2019.
We show that a compromised voter device can defeat the individual verifiability mechanism of the current Estonian voting system. Our attack takes advantage of the revoting option that is available in the Estonian voting system, and only requires compromise of the voting client application: it does not require compromising the mobile device verification app, or any server side component.
Ivan Chizhov, Alexandra Davletshina
The paper is devoted to the Hadamard square of concatenated linear codes. Such codes consist of codewords that are obtained by concatenating codewords from other codes. It is proved that if the sum of the dimensions of the Hadamard squares of the codes used in the concatenation is slightly less than the dimension of the entire space, then the Hadamard square of the concatenated code is equal to the Cartesian product of the Hadamard squares of the code-components.
This means that cryptanalysis of many code-based post-quantum cryptographic mechanisms built on concatenated codes is equivalent to cryptanalysis of these mechanisms built on the code-components. So using a concatenation of codes from different classes, instead of a single class of codes, does not, generally speaking, increase the cryptographic strength of the mechanisms.
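A minimal sketch, assuming binary codes given by generator matrices: the Hadamard (Schur) square of a code is spanned by the componentwise products of pairs of generator rows, and its dimension can be computed by Gaussian elimination over GF(2):

```python
from itertools import combinations_with_replacement

def rank_gf2(rows):
    """Rank over GF(2) of a list of equal-length 0/1 vectors."""
    rows = [list(r) for r in rows]
    n = len(rows[0]) if rows else 0
    rank = 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def hadamard_square_generators(G):
    """Componentwise products of all pairs of generator rows span the Schur square."""
    return [[a & b for a, b in zip(u, v)]
            for u, v in combinations_with_replacement(G, 2)]

# Toy generator matrix of a [7, 3] binary code.
G = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]
print(rank_gf2(G), rank_gf2(hadamard_square_generators(G)))  # dim of code vs. its square
```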
Ignacio Cascudo, Bernardo David, Omer Shlomovits, Denis Varlakov
Many decentralized applications require a common source of randomness that cannot be biased by any single party. Randomness beacons provide such a functionality, allowing any (third) party to periodically obtain random values and verify their validity (i.e. check that they are indeed produced by the beacon and consequently random). Protocols implementing randomness beacons have been constructed via a number of different techniques. In particular, several beacons based on time-based cryptography, Publicly Verifiable Secret Sharing (PVSS), Verifiable Random Functions (VRF) and their threshold variant (TVRF) have been proposed. These protocols provide a range of efficiency/randomness quality trade-offs but guarantee security under different setups, assumptions and adversarial models.
In this work, we propose Mt. Random, a multi-tiered randomness beacon that combines PVSS and (T)VRF techniques in order to provide an optimal efficiency/quality trade-off without sacrificing security guarantees. Each tier is based on a different technique and provides a constant stream of random outputs offering progressive efficiency vs. quality trade-offs: true uniform randomness is refreshed less frequently than pseudorandomness, which in turn is refreshed less frequently than (bounded) biased randomness. This wide span of efficiency/quality allows applications to consume random outputs from an optimal point in this trade-off spectrum. In order to achieve these results, we construct two new building blocks of independent interest: GULL, a PVSS-based beacon that preprocesses a large batch of random outputs but allows for gradual release of smaller ``sub-batches'', which is a first in the literature of randomness beacons; and a publicly verifiable and unbiasable Distributed Key Generation (DKG) protocol, which is significantly more efficient than most previous DKGs secure under standard assumptions and closely matches the efficiency of the currently most efficient biasable DKG protocol.
Mt. Random (and all of its building blocks) can be proven secure under the standard DDH assumption (in the random oracle model) using only a bulletin board as setup, which is a requirement for the vast majority of beacons. We showcase the efficiency of our novel building blocks and of the Mt. Random beacon via benchmarks made with a prototype implementation. Our experimental results confirm the benefits of our multi-tiered approach, showing that even though higher tiers provide fresh random outputs more often, lower tiers can be executed fast enough to keep higher tiers freshly seeded.
Siemen Dhooghe
This paper discusses how to analyze the probing security of masked symmetric primitives against the leakage effects from CHES 2018: glitches, transitions, and coupling effects. This is illustrated on several architectures of ciphers such as PRESENT, AES, and ASCON, where we transform glitch-extended probing secure maskings into transition- and/or coupling-secure ones. The analysis uses linear cryptanalytic methods and the diffusion layers of the cipher to efficiently protect against the advanced leakage effects.
Siemen Dhooghe, Svetla Nikova
Threshold Implementations are known countermeasures defending against side-channel attacks via the use of masking techniques. While sufficient properties are known to defend against first-order side-channel attacks, it is not known how to achieve higher-order security. This work generalizes the Threshold Implementation notion of uniformity and proves it achieves second-order protection. The notion is applied to create a second-order masking of the PRESENT cipher with a low randomness cost.
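For intuition, uniformity can be checked by brute force on small examples: for every unshared input, each valid output sharing must occur equally often over all input sharings. The toy Python checker below verifies this for a share-wise XOR (a linear and hence uniform sharing); the paper's contribution concerns nonlinear S-box sharings and second-order security:

```python
from itertools import product
from collections import Counter

def sharings(value, n_shares, bits=1):
    """Yield all n-share Boolean sharings of `value` (XOR of shares equals value)."""
    space = 1 << bits
    for first in product(range(space), repeat=n_shares - 1):
        last = value
        for s in first:
            last ^= s
        yield (*first, last)

def is_uniform(shared_func, n_in, n_shares, bits=1):
    """Brute-force uniformity check: `shared_func` maps a tuple of per-input
    share tuples to a tuple of output shares."""
    expected = (1 << bits) ** (n_shares - 1)   # number of valid sharings of any value
    for x in product(range(1 << bits), repeat=n_in):
        counts = Counter()
        for in_shares in product(*(sharings(v, n_shares, bits) for v in x)):
            counts[shared_func(in_shares)] += 1
        if len(counts) != expected or len(set(counts.values())) != 1:
            return False
    return True

def shared_xor(in_shares):
    """Share-wise XOR of two shared inputs: z_i = x_i ^ y_i."""
    xs, ys = in_shares
    return tuple(a ^ b for a, b in zip(xs, ys))

print(is_uniform(shared_xor, n_in=2, n_shares=3))   # True: a linear sharing is uniform
```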
25 August 2021
Yilei Chen, Qipeng Liu, Mark Zhandry
We show polynomial-time quantum algorithms for the following problems:
(*) Short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the bound of infinity norm is set to be half of the modulus minus a constant.
(*) Extrapolated dihedral coset problem (EDCP) with certain parameters.
(*) Learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions.
The SIS, EDCP, and LWE problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were known for those variants.
Our algorithms for variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contributions are introducing a filtering technique and solving LWE given LWE-like quantum states with interesting parameters.
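For reference, the first variant above can be stated concretely; this is the standard formulation of SIS under the infinity norm, with the parameter regime paraphrased from the description above:

```latex
% SIS under the infinity norm, in the wide-matrix regime sketched above.
% Given a uniformly random $A \in \mathbb{Z}_q^{n \times m}$ with $m$ very large
% relative to $n$ and $q$ a polynomially large prime, find a nonzero
% $x \in \mathbb{Z}^m$ such that
\[
  A x \equiv 0 \pmod{q}
  \qquad \text{and} \qquad
  \|x\|_{\infty} \le \lfloor q/2 \rfloor - c \quad \text{for some constant } c .
\]
```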