International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

14 February 2019
ePrint Report Boomerang Connectivity Table Revisited Ling Song, Xianrui Qin, Lei Hu
The boomerang attack is a variant of differential cryptanalysis which regards a block cipher $E$ as the composition of two sub-ciphers, i.e., $E=E_1\circ E_0$, and which constructs distinguishers for $E$ with probability $p^2q^2$ by combining differential trails for $E_0$ and $E_1$ with probability $p$ and $q$ respectively. However, the validity of this attack relies on the dependency between the two differential trails. Murphy has shown cases where probabilities calculated by $p^2q^2$ turn out to be zero, while techniques such as boomerang switches proposed by Biryukov and Khovratovich give rise to probabilities greater than $p^2q^2$. To formalize such dependency and obtain a more accurate estimation of the probability of the distinguisher, Dunkelman et al. proposed the sandwich framework that regards $E$ as $\tilde{E_1}\circ E_m \circ \tilde{E_0}$, where the dependency between the two differential trails is handled by a careful analysis of the probability of the middle part $E_m$. Recently, Cid et al. proposed the Boomerang Connectivity Table (BCT), which unifies the previous switch techniques and incompatibility together and evaluates the probability of $E_m$ theoretically when $E_m$ is composed of a single S-box layer. In this paper, we revisit the BCT and propose a generalized framework which is able to identify the actual boundaries of $E_m$, which contains the dependency of the two differential trails, and to systematically evaluate the probability of $E_m$ with any number of rounds. To demonstrate the power of this new framework, we apply it to two block ciphers, SKINNY and AES. In the application to SKINNY, the probabilities of four boomerang distinguishers are re-evaluated. It turns out that $E_m$ involves 5 or 6 rounds and the probabilities of the full distinguishers are much higher than previously evaluated. In the application to AES, the new framework is used to exclude incompatibility and find high probability distinguishers of AES-128 under the related-subkey setting. As a result, a 6-round distinguisher with probability $2^{-109.42}$ is constructed. Lastly, we discuss the relation between the dependency of two differential trails in boomerang distinguishers and the properties of components of the cipher.
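For concreteness, the BCT of an S-box $S$ has entries $\mathrm{BCT}(\Delta_i,\nabla_o)=\#\{x : S^{-1}(S(x)\oplus\nabla_o)\oplus S^{-1}(S(x\oplus\Delta_i)\oplus\nabla_o)=\Delta_i\}$, as defined by Cid et al. A minimal Python sketch, using the PRESENT S-box purely as a toy example:

```python
# Minimal sketch: compute the Boomerang Connectivity Table of a 4-bit S-box.
# The PRESENT S-box is used purely as a toy example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV = [SBOX.index(y) for y in range(16)]

def bct(sbox, inv):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for di in range(n):          # input difference
        for do in range(n):      # output difference
            for x in range(n):
                # boomerang condition: the returning pair again has
                # input difference di
                if inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di:
                    table[di][do] += 1
    return table

T = bct(SBOX, INV)
print(T[0][0])   # trivial row/column entries always equal 16
print(T[1][2])   # a nontrivial BCT entry
```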
ePrint Report Achieving GWAS with Homomorphic Encryption Jun Jie Sim, Fook Mun Chan, Shibin Chen, Benjamin Hong Meng Tan, Khin Mi Mi Aung
One way of investigating how genes affect human traits is a genome-wide association study (GWAS). Genetic markers, known as single-nucleotide polymorphisms (SNPs), are used in GWAS. This raises privacy and security concerns, as these genetic markers can be used to uniquely identify individuals. The problem is further exacerbated by the large number of SNPs needed to produce reliable results, which comes at a higher risk of compromising the privacy of participants.

We describe a method using homomorphic encryption (HE) to perform GWAS in a secure and private setting. This work is based on a semi-parallel logistic regression algorithm proposed to accelerate GWAS computations. Our solution involves homomorphically encrypted matrices and suitable approximations that adapt the original algorithm to be HE-friendly. Our best implementation took $24.70$ minutes for a dataset with $245$ samples, $4$ covariates and $10643$ SNPs.

We demonstrate that it is possible to achieve GWAS with homomorphic encryption with suitable approximations.
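The abstract does not spell out the approximations used; a standard HE-friendly trick for logistic regression, shown here only as an illustrative sketch, is to replace the sigmoid by a low-degree least-squares polynomial, since HE schemes evaluate polynomials natively:

```python
# Illustrative sketch (not necessarily the paper's exact approximation):
# fit a low-degree polynomial to the sigmoid over a bounded interval.
import numpy as np

xs = np.linspace(-8, 8, 1001)
sigmoid = 1.0 / (1.0 + np.exp(-xs))

coeffs = np.polyfit(xs, sigmoid, deg=3)   # degree-3 least-squares fit
approx = np.polyval(coeffs, xs)

print("max abs error on [-8, 8]:", np.max(np.abs(approx - sigmoid)))
```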
ePrint Report Modeling Power Efficiency of S-boxes Using Machine Learning Rajat Sadhukhan, Nilanjan Datta, Debdeep Mukhopadhyay
In the era of lightweight cryptography, designing cryptographically good and power-efficient 4x4 S-boxes is a challenging problem. While the optimal cryptographic properties are easy to determine, verifying the power efficiency of an S-box is non-trivial. The conventional approach of determining the power consumption using commercially available CAD tools is highly time-consuming, which becomes formidable when dealing with a large pool of S-boxes. This mandates the development of an automated method that can quickly characterize the power efficiency from the Boolean function representation of an S-box. In this paper, we present a supervised machine learning assisted automated framework to resolve this problem for 4x4 S-boxes, which turns out to be 14 times faster than the traditional approach. The key idea is to extrapolate the knowledge of literal counts and AND-OR-NOT gate counts in the SOP form of the underlying Boolean functions to predict the dynamic power efficiency. The experimental results and the performance of our technique demonstrate its superiority, with high efficiency and low time overhead. We demonstrate the effectiveness of our framework by reporting a set of power-efficient optimal S-boxes from a large set of S-boxes. We also develop a deterministic model, using results obtained from supervised learning, to predict the dynamic power of an S-box; this model can be used in an evolutionary algorithm to generate cryptographically strong and low-power S-boxes.
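As a flavor of the feature extraction involved (the feature definitions below are illustrative, not necessarily the paper's exact ones), one can derive the SOP form of each coordinate Boolean function of a 4x4 S-box with sympy and count terms and literals:

```python
# Sketch: SOP-based features (term/literal counts) of the coordinate
# Boolean functions of a 4x4 S-box; the feature set here is illustrative.
from sympy import symbols
from sympy.logic import SOPform
from sympy.logic.boolalg import And, Or

x3, x2, x1, x0 = symbols("x3 x2 x1 x0")
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def features(bit):
    # minterms where output bit `bit` of S(x) equals 1
    minterms = [[(x >> i) & 1 for i in (3, 2, 1, 0)]
                for x in range(16) if (SBOX[x] >> bit) & 1]
    expr = SOPform([x3, x2, x1, x0], minterms)
    terms = expr.args if isinstance(expr, Or) else [expr]
    literals = sum(len(t.args) if isinstance(t, And) else 1 for t in terms)
    return {"terms": len(terms), "literals": literals}

for b in range(4):
    print(b, features(b))
```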
Deep Neural Networks (DNNs) have recently received significant attention in the side-channel community due to their state-of-the-art performance in security testing of embedded systems. However, research on the subject mostly focused on techniques to improve the attack efficiency in terms of the number of traces required to extract secret parameters. What has not been investigated in detail is a constructive approach of DNNs as a tool to evaluate and improve the effectiveness of countermeasures against side-channel attacks. In this work, we try to close this gap by applying attribution methods that aim for interpreting DNN decisions, in order to identify leaking operations in cryptographic implementations. In particular, we investigate three different approaches that have been proposed for feature visualization in image classification tasks and compare them regarding their suitability to reveal Points of Interest (POIs) in side-channel traces. We show by experiments with three separate data sets that Layer-wise Relevance Propagation (LRP) proposed by Bach et al. provides the best result in most cases. Finally, we demonstrate that attribution can also serve as a powerful side-channel distinguisher in DNN-based attack setups.
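LRP itself requires a layer-by-layer relevance backward pass; as a simpler stand-in for the general idea of attributing a classifier's decision to individual trace samples, here is a hypothetical gradient-times-input sketch on a toy logistic model (illustrative only, not the paper's LRP implementation):

```python
# Hypothetical gradient-times-input attribution on a toy logistic
# "classifier" p(bit | trace) = sigmoid(w . t + b); large |attribution|
# marks candidate Points of Interest (POIs). Toy data throughout.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)       # stands in for trained weights
b = 0.1
trace = rng.normal(size=100)   # stands in for one measured trace

z = w @ trace + b
p = 1.0 / (1.0 + np.exp(-z))
grad = p * (1.0 - p) * w       # exact gradient of p w.r.t. the trace
attribution = grad * trace     # gradient x input

print("top-5 candidate POIs:", np.argsort(-np.abs(attribution))[:5])
```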
We study the problem of building SNARKs modularly by linking small specialized "proof gadget" SNARKs in a lightweight manner. Our motivation is both theoretical and practical. On the theoretical side, modular SNARK designs would be flexible and reusable. In practice, specialized SNARKs have the potential to be more efficient than general-purpose schemes, on which most existing works have focused. If a computation naturally presents different "components" (e.g. one arithmetic circuit and one boolean circuit), a general-purpose scheme would homogenize them to a single representation with a subsequent cost in performance. Through a modular approach one could instead exploit the nuances of a computation and choose the best gadget for each component.

Our contribution is LegoSNARK, a "toolbox" (or framework) for commit-and-prove zkSNARKs (CP-SNARKs) that includes:

1) General composition tools: build new CP-SNARKs from proof gadgets for basic relations simply.
2) A "lifting" tool: add commit-and-prove capabilities to a broad class of existing zkSNARKs efficiently. This makes them interoperable (linkable) within the same computation. For example, one QAP-based scheme can be used to prove one component; another GKR-based scheme can be used to prove another.
3) A collection of succinct proof gadgets for a variety of relations.

Additionally, through our framework and gadgets, we are able to obtain new succinct proof systems. Notably:

– LegoGro16, a commit-and-prove version of the Groth16 zkSNARK, that operates over data committed with a classical Pedersen vector commitment, and that achieves a 5000X speed-up in proving time.
– LegoUAC, a pairing-based SNARK for arithmetic circuits that has a universal, circuit-independent CRS, and proving time linear in the number of circuit gates (vs. the recent scheme of Groth et al. (CRYPTO'18) with quadratic CRS and quasilinear proving time).
ePrint Report A General Proof Framework for Recent AES Distinguishers Christina Boura, Anne Canteaut, Daniel Coggia
In this paper, a new framework is developed for proving and adapting the recently proposed multiple-of-8 property and mixture-differential distinguishers. The above properties are formulated as immediate consequences of an equivalence relation on the input pairs, under which the difference at the output of the round function is invariant. This approach provides a further understanding of these newly developed distinguishers. For example, it clearly shows that the branch number of the linear layer does not influence the validity of the property, on the contrary of what was previously believed. We further provide an extension of the mixture-differential distinguishers and multiple-of-8 property to any SPN and to a larger class of subspaces. These adapted properties can then be exhibited in a systematic way for other ciphers than the AES. We illustrate this with the examples of Midori, Klein, LED and Skinny.
How to train a machine learning model while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem. CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers. We characterize CodedPrivateML's privacy threshold and prove its convergence for logistic (and linear) regression. Furthermore, via experiments over Amazon EC2, we demonstrate that CodedPrivateML can provide an order of magnitude speedup (up to $\sim 34\times$) over the state-of-the-art cryptographic approaches.
Stripped Function Logic Locking (SFLL), the most advanced logic locking technique, is robust against both SAT-based and removal attacks under the assumption of a thorough resynthesis of the stripped function. In this paper, we propose a bit-coloring attack based on our discovery of a critical vulnerability in SFLL. In fact, we show that if only one protected input pattern is discovered, then the scheme can be unlocked with a polynomial number of queries to an activated circuit. As a remedy to this vulnerability, we also propose a provably secure general function that deregularizes the relation between the protected input patterns and the secret key. Mathematical proofs as well as experiments confirm both the polynomiality of the bit-coloring attack on standard SFLL and the exponentiality of similar attacks on SFLL with the general function.
13 February 2019
ePrint Report Unifying Leakage Model on a Rényi Day Dahmun Goudarzi, Ange Martinelli, Alain Passelègue, Thomas Prest
In the last decade, several works have focused on finding the best way to model the leakage in order to obtain provably secure implementations. One of the most realistic models is the noisy leakage model, introduced in [PR13,DDF14] together with secure constructions. These works suffer from various limitations, in particular the use of ideal leak-free gates in [PR13] and an important loss (in the size of the field) in the reduction in [DDF14].

In this work, we provide new strategies to prove the security of masked implementations and start by unifying the different noisiness metrics used in prior works by relating all of them to a standard notion in information theory: the pointwise mutual information. Based on this new interpretation, we define two new natural metrics and analyze the security of known compilers with respect to these metrics. In particular, we prove (1) a tighter bound for reducing the noisy leakage models to the probing model using our first new metric, (2) better bounds for amplification-based security proofs using the second metric.
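For reference, the pointwise mutual information of outcomes $x$ and $y$ is the standard quantity $\mathrm{pmi}(x;y)=\log\frac{\Pr[X=x,Y=y]}{\Pr[X=x]\Pr[Y=y]}$, whose expectation over the joint distribution is the mutual information $I(X;Y)$.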

To support the claim that the improvements we obtain are not only a consequence of the use of alternative metrics, we show that for a concrete representation of leakage (e.g., "Hamming weight + Gaussian noise"), our approach significantly improves the parameters compared to prior works. Finally, using the Rényi divergence, we quantify concretely the advantage of an adversary in attacking a block cipher depending on the number of leakage acquisitions available to it.
ePrint Report TEDT, a Leakage-Resilient AEAD mode for High (Physical) Security Applications Francesco Berti, Chun Guo, Olivier Pereira, Thomas Peters, François-Xavier Standaert
We propose TEDT, a new Authenticated Encryption with Associated Data (AEAD) mode leveraging Tweakable Block Ciphers (TBCs). TEDT provides the following features: (i) It offers asymptotically optimal security in the multi-user setting. (ii) It offers nonce misuse-resilience, that is, the repetition of nonces does not impact the security of ciphertexts produced with fresh nonces. (iii) It offers KDM security in the multi-user setting, that is, its security is maintained even if key-dependent messages are encrypted. (iv) It offers full leakage-resilience, that is, it limits the exploitability of physical leakages via side-channel attacks, even if these leakages happen during every message encryption and decryption operation. (v) It can be implemented with a remarkably low energy cost when strong resistance to side-channel attacks is needed, supports online encryption, and handles static & incremental associated data efficiently. Concretely, TEDT encourages leveled implementations, in which two TBCs are implemented: one needs strong and energy demanding protections against side-channel attacks but is used in a limited way, while the other only requires weak and energy efficient protections and performs the bulk of the computation. As a result, TEDT leads to considerably more energy efficient implementations compared to traditional AEAD schemes, whose side-channel security requires uniformly protecting every (T)BC execution.
Electronic cash (e-cash) is the digital analogue of regular cash which aims at preserving users' privacy. Following Chaum's seminal work, several new features were proposed for e-cash to address the practical issues of the original primitive. Among them, divisibility has proved very useful to enable efficient storage and spendings. Unfortunately, it is also very difficult to achieve and, to date, only few constructions exist, all of them relying on complex mechanisms that can only be instantiated in one specific setting.

In this work, we study the links between divisible e-cash and constrained pseudo-random functions (PRFs), a primitive recently formalized. We show that one can construct divisible e-cash systems from constrained PRFs achieving some specific properties that we identify. Actually, we provide two frameworks for divisible e-cash that essentially differ in the kind of properties expected from the PRFs. We prove the security of our generic frameworks and provide examples of constrained PRFs satisfying our requirements. Finally, we exhibit a problem in many e-cash systems that invalidates some of their security proofs.
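The abstract keeps the PRFs abstract; as a reminder of the kind of object involved (an illustrative sketch only; the paper's frameworks need further specific properties), the classical GGM tree PRF supports prefix constraints: revealing the intermediate node value for a prefix allows evaluation on exactly the inputs with that prefix, matching the tree-based spending structure common in divisible e-cash:

```python
# Illustrative GGM-style prefix-constrained PRF (the paper's frameworks
# require additional properties beyond this plain construction).
import hashlib

def G(seed: bytes, bit: int) -> bytes:
    # one half of a length-doubling PRG, modeled with SHA-256
    return hashlib.sha256(seed + bytes([bit])).digest()

def prf(key: bytes, x: str) -> bytes:
    v = key
    for ch in x:               # x is a bitstring such as "0110"
        v = G(v, int(ch))
    return v

def constrain(key: bytes, prefix: str) -> bytes:
    # subkey enabling evaluation on all inputs starting with `prefix`
    return prf(key, prefix)

key = b"\x00" * 32
ck = constrain(key, "01")
# the constrained key evaluates suffixes consistently with the full PRF
assert prf(ck, "10") == prf(key, "0110")
print(prf(key, "0110").hex())
```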
Ring signatures, introduced by [RST01], are a variant of digital signatures which certify that one among a particular set of parties has endorsed a message while hiding which party in the set was the signer. Ring signatures are designed to allow anyone to attach anyone else's name to a signature, as long as the signer's own name is also attached.

But what guarantee do ring signatures provide if a purported signatory wishes to denounce a signed message---or alternatively, if a signatory wishes to later come forward and claim ownership of a signature? Prior security definitions for ring signatures do not give a conclusive answer to this question: under most existing definitions, the guarantees could go either way. That is, it is consistent with some standard definitions that a non-signer might be able to repudiate a signature that he did not produce, or that this might be impossible. Similarly, a signer might be able to later convincingly claim that a signature he produced is indeed his own, or not. Any of these guarantees might be desirable. For instance, a whistleblower might have reason to want to later claim an anonymously released signature, or a person falsely implicated in a crime associated with a ring signature might wish to denounce the signature that is framing them and damaging their reputation. In other circumstances, it might be desirable that even under duress, a member of a ring cannot produce proof that he did or did not sign a particular signature. In any case, a guarantee one way or the other seems highly desirable.

In this work, we formalize definitions and give constructions of the new notions of repudiable, unrepudiable, claimable, and unclaimable ring signatures. Our repudiable construction is based on VRFs, which are implied by several number-theoretic assumptions (including strong RSA or bilinear maps); our claimable construction is a black-box transformation from any standard ring signature scheme to a claimable one; and our unclaimable construction is derived from the lattice-based ring signatures of [BK10], which rely on hardness of SIS. Our repudiable construction also provides a new construction of standard ring signatures.
At TCC 2017, Hofheinz, Hoevelmanns and Kiltz provided a fine-grained and modular toolkit of generic key encapsulation mechanism (KEM) constructions, which were widely used among KEM submissions to the NIST Post-Quantum Cryptography Standardization project. The security of these generic constructions in the quantum random oracle model (QROM) has been analyzed by Hofheinz, Hoevelmanns and Kiltz (TCC 2017), Saito, Xagawa and Yamakawa (Eurocrypt 2018), and Jiang et al. (Crypto 2018). However, the security proofs from standard assumptions are far from tight. In particular, the factor of security loss is $q$ and the degree of security loss is 2, where $q$ is the total number of adversarial queries to various oracles.

In this paper, using the semi-classical oracle technique recently introduced by Ambainis, Hamburg and Unruh (ePrint 2018/904), we improve the results in (Eurocrypt 2018, Crypto 2018) and provide tighter security proofs for generic KEM constructions from standard assumptions. More precisely, the factor of security loss is reduced from $q$ to $\sqrt{q}$. In addition, for the transformation T, which turns a probabilistic public-key encryption (PKE) scheme into a deterministic one by derandomization and re-encryption, the degree of security loss is reduced from 2 to 1. Our tighter security proofs can give more confidence to NIST KEM submissions where these generic transformations are used, e.g., CRYSTALS-Kyber.
Noncommutative cryptography is based on applications of algebraic structures like noncommutative groups, semigroups and noncommutative rings. Its intersection with multivariate cryptography contains studies of cryptographic applications of subsemigroups and subgroups of affine Cremona semigroups defined over a finite commutative ring $K$. We consider special semigroups of transformations of the variety $(K^*)^n$, with $K=F_q$ or $K=Z_m$, defined via multiplications of variables. Efficiently computable homomorphisms between such subsemigroups can be used in post-quantum protocols, and in their inverse versions, where correspondents elaborate mutually inverse transformations of $(K^*)^n$. The security of these schemes is based on the complexity of the decomposition problem of a semigroup element into a product of given generators. The proposed algorithms are thus strong candidates for use in post-quantum technologies.
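To make the "multiplication of variables" concrete (a toy sketch under our own choice of parameters): a monomial map sends $x$ to $(\prod_j x_j^{A_{ij}})_i$, composition corresponds to multiplying exponent matrices mod $q-1$, and the map is invertible iff $A$ is invertible mod $q-1$:

```python
# Toy sketch of monomial ("multiplication of variables") maps on (F_q^*)^n:
# x -> (prod_j x_j^{A[i][j]})_i. Composition multiplies exponent matrices
# mod q-1; the map is invertible iff A is invertible mod q-1.
from sympy import Matrix

q = 11                          # F_11; exponents live mod q - 1 = 10
A = Matrix([[3, 2], [1, 3]])    # toy exponent matrix, det = 7, coprime to 10

def apply_map(A, x, q):
    n = len(x)
    out = []
    for i in range(n):
        v = 1
        for j in range(n):
            v = v * pow(x[j], int(A[i, j]) % (q - 1), q) % q
        out.append(v)
    return out

x = [2, 7]                      # a point of (F_11^*)^2
y = apply_map(A, x, q)
A_inv = A.inv_mod(q - 1)        # inverse exponent matrix mod q - 1
assert apply_map(A_inv, y, q) == x
print(y)
```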
ePrint Report Leakage Certification Revisited: Bounding Model Errors in Side-Channel Security Evaluations Olivier Bronchain, Julien M. Hendrickx, Clément Massart, Alex Olshevsky, François-Xavier Standaert
Leakage certification aims at guaranteeing that the statistical models used in side-channel security evaluations are close to the true statistical distribution of the leakages, hence can be used to approximate a worst-case security level. Previous works in this direction were only qualitative: for a given number of measurements available to an evaluation laboratory, they rated a model as "good enough" if the model assumption errors (i.e., the errors due to an incorrect choice of model family) were small with respect to the model estimation errors. We revisit this problem by providing the first quantitative tools for leakage certification. For this purpose, we provide bounds for the (unknown) Mutual Information metric that corresponds to the true statistical distribution of the leakages based on two easy-to-compute information theoretic quantities: the Perceived Information, which is the amount of information that can be extracted from a leaking device thanks to an estimated statistical model, possibly biased due to estimation and assumption errors, and the Hypothetical Information, which is the amount of information that would be extracted from a hypothetical device exactly following the model distribution. This positive outcome derives from the observation that while the estimation of the Mutual Information is in general a hard problem (i.e., estimators are biased and their convergence is distribution-dependent), it is significantly simplified in the case of statistical inference attacks where a target random variable (e.g., a key in a cryptographic setting) has a constant (e.g., uniform) probability. Our results therefore provide a general and principled path to bound the worst-case security level of an implementation. They also significantly speed up the evaluation of any profiled side-channel attack, since they imply that the estimation of the Perceived Information, which embeds an expensive cross-validation step, can be bounded by the computation of a cheaper Hypothetical Information, for any estimated statistical model.
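For orientation, in the sampled form used in prior evaluation works (our rendering here; the paper's notation may differ in details), the two quantities can be written as $\mathrm{PI}(K;L)=H(K)+\sum_{k}\Pr[k]\sum_{l}\Pr_{\mathrm{chip}}[l|k]\,\log_2 \Pr_{\mathrm{model}}[k|l]$ and $\mathrm{HI}(K;L)=H(K)+\sum_{k}\Pr[k]\sum_{l}\Pr_{\mathrm{model}}[l|k]\,\log_2 \Pr_{\mathrm{model}}[k|l]$; they differ only in whether the leakage samples follow the true chip distribution or the model itself.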
ePrint Report Secure Evaluation of Quantized Neural Networks Assi Barak, Daniel Escudero, Anders Dalskov, Marcel Keller
Machine Learning models, and especially convolutional neural networks (CNNs), are at the heart of many day-to-day applications like image classification and speech recognition. The need to evaluate such models while preserving the privacy of the input increases as the models are used for more information-sensitive tasks like DNA analysis or facial recognition. Research on evaluating CNNs securely has been very active during the last couple of years, e.g.~Mohassel \& Zhang (S\&P'17) and Liu et al.~(CCS'17), leading to very efficient frameworks like SecureNN (ePrint:2018:442), which can perform evaluation of some CNNs with a multiplicative overhead of only $17$--$33$ with respect to evaluation in the clear.

We contribute to this line of research by introducing a technique from the Machine Learning domain, namely quantization, which allows us to scale secure evaluation of CNNs to much larger networks without the accuracy loss that could happen by adapting the network to the MPC setting. Quantization is motivated by the deployment of ML models in resource-constrained devices, and we show it to be useful in the MPC setting as well. Our results show that it is possible to evaluate realistic models---specifically Google's MobileNets line of models for image recognition---within seconds.

Our performance gain can be mainly attributed to two key ingredients. The first is the use of the three-party MPC protocol based on replicated secret sharing by Araki et al. (S\&P'17), whose multiplication only requires sending one number per party. Moreover, it allows evaluating arbitrarily long dot products at the same communication cost as a single multiplication, which facilitates matrix multiplications considerably. The second main ingredient is the use of arithmetic modulo $2^{64}$, for which we develop a set of primitives of independent interest that are necessary for the quantization, like comparison and truncation by a secret shift.
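For orientation (standard affine quantization in the style of Jacob et al., as used by MobileNets; not code from the paper), a quantized dot product accumulates integer products exactly and rescales once at the end, which is why a single truncation per output entry suffices:

```python
# Illustrative affine quantization: real value x ~ s * (q - z) with scale s
# and zero point z. Products of int8 values accumulate exactly in wide
# integers; one rescale/truncation happens at the end of the dot product.
import numpy as np

def quantize(x, s, z):
    return np.clip(np.round(x / s) + z, -128, 127).astype(np.int64)

s_a, z_a = 0.05, 3
s_b, z_b = 0.02, -7
a = np.array([0.5, -1.2, 0.9])
b = np.array([0.3, 0.8, -0.4])

qa, qb = quantize(a, s_a, z_a), quantize(b, s_b, z_b)
acc = np.sum((qa - z_a) * (qb - z_b))   # exact integer accumulation
approx = s_a * s_b * acc                # single rescale at the end
print(approx, float(a @ b))             # close to the true dot product
```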
A certificate thumbprint is a hash of a certificate, computed over all certificate data and its signature. Thumbprints are used as unique identifiers for certificates in applications: when making trust decisions, in configuration files, and when displayed in interfaces. In this paper we show that thumbprints are not unique in two cases. First, we demonstrate that creating two X.509 certificates with the same thumbprint is possible when the hash function is weak, in particular when chosen-prefix collision attacks are possible. This type of collision attack is now practical for MD5, and expected to be practical for SHA-1 in the near future. Second, we show that certificates may be mauled in a way that they remain valid, but have different thumbprints. While these properties may be unexpected, we believe the scenarios where this could lead to a practical attack are limited and require very sophisticated attackers. We also checked the thumbprints of a large dataset of certificates used on the Internet, and found no evidence that would indicate thumbprints of certificates in use today are not unique.
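For concreteness (standard practice, not specific to this paper), a thumbprint is simply a hash over the certificate's full DER encoding, so any byte-level mauling that keeps the certificate valid changes the thumbprint:

```python
# A thumbprint is a hash over the certificate's full DER encoding
# (mirroring how, e.g., OpenSSL and Windows display "fingerprints").
import hashlib

def thumbprint(cert_der: bytes, algo: str = "sha1") -> str:
    return hashlib.new(algo, cert_der).hexdigest()

cert_der = b"..."  # placeholder for the DER bytes of an X.509 certificate
print(thumbprint(cert_der))            # SHA-1, as commonly displayed
print(thumbprint(cert_der, "sha256"))  # a stronger choice
```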
Homomorphic secret sharing (HSS) is an analog of somewhat- or fully homomorphic encryption (S/FHE) to the setting of secret sharing, with applications including succinct secure computation, private manipulation of remote databases, and more. While HSS can be viewed as a relaxation of S/FHE, the only constructions from lattice-based assumptions to date build atop specific forms of threshold or multi-key S/FHE. In this work, we present new techniques directly yielding efficient 2-party HSS for polynomial-size branching programs from a range of lattice-based encryption schemes, without S/FHE. More concretely, we avoid the costly key-switching and modulus-reduction steps used in S/FHE ciphertext multiplication, replacing them with a new distributed decryption procedure for performing "restricted" multiplications of an input with a partial computation value. Doing so requires new methods for handling the blowup of "noise" in ciphertexts in a distributed setting, and leverages several properties of lattice-based encryption schemes together with new tricks in share conversion. The resulting schemes support a superpolynomial-size plaintext space and negligible correctness error, with share sizes comparable to SHE ciphertexts, but with homomorphic multiplication roughly one order of magnitude faster. Over certain rings, our HSS can further support some level of packed SIMD homomorphic operations. We demonstrate the practical efficiency of our schemes within two application settings, where we compare favorably with current best approaches: 2-server private database pattern-match queries, and secure 2-party computation of low-degree polynomials.
Tightly secure cryptographic schemes have been extensively studied in the fields of chosen-ciphertext secure public-key encryption (CCA-secure PKE), identity-based encryption (IBE), signature and more. We extend tightly secure cryptography to inner product functional encryption (IPFE) and present the first tightly secure schemes related to IPFE.

We first construct a new IPFE scheme that is tightly secure in the multi-user and multi-challenge setting. In other words, the security of our scheme does not degrade even if an adversary obtains many ciphertexts generated by many users. Our scheme is constructed over pairing-free groups and is secure under the matrix decisional Diffie-Hellman (MDDH) assumption, a generalization of the decisional Diffie-Hellman (DDH) assumption. Applying the known conversions by Lin (CRYPTO 2017) and Abdalla et al. (CRYPTO 2018) to our scheme, we can obtain the first tightly secure function-hiding IPFE scheme and multi-input IPFE (MIPFE) scheme, respectively.

Our second main contribution is the proposal of a new generic conversion from function-hiding IPFE to function-hiding MIPFE, which was left as an open problem by Abdalla et al. (CRYPTO 2018). We can obtain the first tightly secure function-hiding MIPFE scheme by applying our conversion to the tightly secure function-hiding IPFE scheme described above.

Finally, the security reductions of all our schemes are fully tight, which means that the security of our schemes is reduced to the MDDH assumption with a constant security loss.
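As background on the IPFE functionality itself (a toy sketch of the simple DDH-based scheme of Abdalla et al. (PKC'15), not the tightly secure construction of this paper): a secret key for $y$ decrypts exactly the inner product $\langle x,y\rangle$ from an encryption of $x$:

```python
# Toy DDH-style IPFE in the spirit of Abdalla et al. (PKC'15) -- NOT the
# tightly secure scheme of this paper. sk_y decrypts <x, y> and nothing more.
import random

p = 2**127 - 1                  # Mersenne prime; exponents live mod p - 1
g = 3                           # fixed base (toy choice)
n = 3

s = [random.randrange(p - 1) for _ in range(n)]   # master secret key
h = [pow(g, si, p) for si in s]                   # public key

def enc(x):
    r = random.randrange(p - 1)
    return pow(g, r, p), [pow(g, xi, p) * pow(h[i], r, p) % p
                          for i, xi in enumerate(x)]

def keygen(y):
    return sum(si * yi for si, yi in zip(s, y)) % (p - 1)   # sk_y = <s, y>

def dec(ct, y, sk_y, bound=1000):
    ct0, cts = ct
    num = 1
    for cti, yi in zip(cts, y):
        num = num * pow(cti, yi, p) % p
    target = num * pow(ct0, p - 1 - sk_y, p) % p   # divide out ct0^{sk_y}
    for v in range(bound):                          # brute-force small dlog
        if pow(g, v, p) == target:
            return v

x, y = [1, 2, 3], [4, 5, 6]
print(dec(enc(x), y, keygen(y)))   # 1*4 + 2*5 + 3*6 = 32
```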
ePrint Report Beyond Birthday Bound Secure MAC in Faulty Nonce Model Avijit Dutta, Mridul Nandi, Suprita Talnikar
Encrypt-then-MAC (EtM) is a popular mode for authenticated encryption (AE). Unfortunately, almost all designs following the EtM paradigm, including the AE suites for TLS, are vulnerable to nonce misuse. A single repetition of the nonce value reveals the hash key, leading to a universal forgery attack. There are only two authenticated encryption schemes following the EtM paradigm which can resist nonce misuse attacks: GCM-RUP (CRYPTO-17) and GCM/2+ (INSCRYPT-12). However, they are secure only up to the birthday bound in the nonce-respecting setting, resulting in a restriction on the data limit for a single key. In this paper we show that nEHtM, a nonce-based variant of EHtM (FSE-10) constructed using a block cipher, has beyond-birthday-bound (BBB) unforgeability that gracefully degrades under nonce misuse. We combine nEHtM with the CENC (FSE-06) mode of encryption using the EtM paradigm to realize a nonce-based AE scheme, CWC+. CWC+ is very close (requiring only a few more xor operations) to the CWC AE scheme (FSE-04), and it not only provides BBB security but also security that degrades gracefully under nonce misuse.
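For orientation, generic EtM encrypts first and then MACs the nonce, associated data and ciphertext; the abstract's point is that in most such designs a single nonce repetition leaks the (polynomial) hash key of the MAC layer. A toy sketch of the generic composition (illustrative; CWC+ instantiates both layers with BBB-secure components):

```python
# Toy Encrypt-then-MAC composition. Keystream: SHA-256 in counter mode;
# tag: HMAC over nonce || associated data || ciphertext.
import hashlib, hmac

def ctr_xor(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(msg), 32):
        block = hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
        out += bytes(a ^ b for a, b in zip(msg[i:i + 32], block))
    return bytes(out)

def seal(enc_key, mac_key, nonce, ad, msg):
    ct = ctr_xor(enc_key, nonce, msg)
    tag = hmac.new(mac_key, nonce + ad + ct, hashlib.sha256).digest()
    return ct, tag

def open_(enc_key, mac_key, nonce, ad, ct, tag):
    exp = hmac.new(mac_key, nonce + ad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(exp, tag):
        raise ValueError("authentication failed")
    return ctr_xor(enc_key, nonce, ct)

ct, tag = seal(b"k1" * 16, b"k2" * 16, b"nonce123", b"hdr", b"hello")
print(open_(b"k1" * 16, b"k2" * 16, b"nonce123", b"hdr", ct, tag))
```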
In this paper, a new automatic search tool for truncated differential characteristics, based on Mixed Integer Linear Programming (MILP), is presented. While previous MILP models for truncated differential characteristics were used merely as facilitators for finding maximal-probability bit-wise differential characteristics, ours treats the truncated differential characteristic as an independent distinguisher. Our method models the problem of finding a maximal-probability truncated differential characteristic that is able to distinguish the cipher from a pseudo-random permutation. Our model enjoys word-wise variable definitions, which makes it much simpler and more easily solvable than its bit-wise counterpart. Using this method, we analyse the Midori64 and SKINNY64/64,128 block ciphers, improving the existing results for both. In both cases, the truncated differential characteristic is much more efficient than the upper bound on the (bit-wise) differential characteristic proven by the designers, for all numbers of rounds. More specifically, the highest numbers of rounds for which a differential characteristic can exist for Midori64 and SKINNY64/64,128 are 6 and 7 rounds respectively, for which differential characteristics with maximum probabilities of $2^{-60}$ and $2^{-52}$ may exist. However, we present a new truncated differential characteristic for 6-round Midori64 with probability $2^{-54}$. In the case of SKINNY64/64,128, the gap is much wider: for 7 rounds we find a truncated characteristic with probability $2^{-4}$, and even a 10-round truncated characteristic can be found with probability $2^{-40}$. Moreover, our result outperforms the only existing truncated differential analysis of Midori64. This method can be used as a new tool for differential analysis of SPN block ciphers.
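As a flavor of word-wise MILP modeling (the paper's full model is more elaborate), the standard word-oriented constraints for XOR, in the style of Mouha et al., forbid exactly one active word among the input and output activity bits; a sketch with PuLP:

```python
# Word-wise MILP constraints for truncated XOR. Activity bits x, y, z for
# z = x XOR y: having exactly one active word among {x, y, z} is impossible.
from pulp import LpProblem, LpVariable, LpMinimize

prob = LpProblem("truncated_xor_toy", LpMinimize)
x = LpVariable("x", cat="Binary")
y = LpVariable("y", cat="Binary")
z = LpVariable("z", cat="Binary")
d = LpVariable("d", cat="Binary")   # dummy branch-activity variable

prob += x + y + z                   # toy objective: minimize active words
prob += x + y + z >= 2 * d
prob += x <= d
prob += y <= d
prob += z <= d
prob += x >= 1                      # force a nontrivial solution

prob.solve()
print(x.value(), y.value(), z.value())
```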
12 February 2019
This paper provides proofs of the results of Laisant–Beaujeux: (1) if an integer of the form $n=4k+1$, $k>0$, is prime, then $\binom{n-1}{m}\equiv 1 \pmod{n}$, where $m=\frac{n-1}{2}$, and (2) if an integer of the form $n=4k+3$, $k\geq0$, is prime, then $\binom{n-1}{m}\equiv -1 \pmod{n}$, where $m=\frac{n-1}{2}$. In addition, the author proposes important conjectures based on the converses of the above theorems which aim to establish the primality of $n$. These conjectures are scrutinized by the given combinatorial primality test algorithm, which can also distinguish whether a prime $n$ is of the form $4k+1$ or $4k+3$.
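A direct implementation of the test suggested by these theorems (the primality-establishing converse remains conjectural, as the author notes):

```python
# Combinatorial test: for odd n, check C(n-1, (n-1)/2) = +1 or -1 (mod n)
# with the sign predicted by n mod 4. Failure implying compositeness is
# the conjectural direction.
from math import comb

def combinatorial_test(n: int) -> bool:
    if n < 3 or n % 2 == 0:
        return False
    m = (n - 1) // 2
    r = comb(n - 1, m) % n
    return r == 1 if n % 4 == 1 else r == n - 1

print([k for k in range(3, 60, 2) if combinatorial_test(k)])
```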
We observe that if a party breaks one cryptographic assumption, construction, or system, then it can reduce the trust in any other. This highlights a shortcoming in the common interpretation of the provable security paradigm that may lead to unwarranted trust. This may have practical implications.

Then we argue that the provable security paradigm remains sound in applications provided that assumptions are made with care. We also strengthen the argument for the study of combiners and constructions based on generic assumptions, and transparent standardization processes in applied cryptography.
ePrint Report Security of Multilinear Galois Mode (MGM) Liliya Akhmetzyanova, Evgeny Alekseev, Grigory Karpunin, Vladislav Nozdrunov
In this paper we analyze the new AEAD mode called the Multilinear Galois Mode (MGM), originally proposed at CTCrypt 2017. This mode is currently considered in the Russian standardization system as the main contender to be adopted as a standard AEAD mode. The analysis of the MGM mode was carried out in the paradigm of provable security; in other words, lower security bounds were obtained for the Privacy and Authenticity notions. These bounds show that the privacy and authenticity of this mode are provably guaranteed (under the security of the used block cipher) up to the birthday paradox bound.
Internet-of-Things (IoT) applications often require constrained devices to be deployed in the field for several years, even decades. Protection of these tiny motes is crucial for end-to-end IoT security. Secure boot and attestation techniques are critical requirements in such devices, and they rely on public key Sign/Verify operations. In a not-so-distant future, quantum computers are expected to break traditional public key Sign/Verify functions (e.g. RSA and ECC signatures). Hash Based Signature (HBS) schemes, on the other hand, are promising quantum-resistant alternatives. Their security is based on the security of cryptographic hash functions, which are known to be secure against quantum computers. The XMSS signature scheme is a modern HBS construction with several advantages, but it requires thousands of hash operations per Sign/Verify operation, which can be challenging in resource-constrained IoT motes. In this work, we investigated the use of the XMSS scheme targeting constrained IoT devices. We propose a latency-area optimized XMSS Sign/Verify scheme with 128-bit post-quantum security. An appropriate HW-SW architecture has been designed and implemented in FPGA and silicon, spanning 1521 ALMs and 13.5k gates respectively. In total, each XMSS Sign/Verify operation takes 4.8 million clock cycles in our proposed HW-SW hybrid design, which is 5.35 times faster than its pure SW execution latency on a 32-bit microcontroller.
ePrint Report Anonymous Attestation for IoT Santosh Ghosh, Andrew H. Reinders, Rafael Misoczki, Manoj R. Sastry
Internet of Things (IoT) devices have seen tremendous growth and are being deployed pervasively in areas such as home, surveillance, health-care and transportation. These devices collect and process data that is sensitive with respect to user privacy. Protecting the privacy of the user is an essential aspect of security, and anonymous attestation of IoT devices is critical to enable privacy-preserving mechanisms. Enhanced Privacy ID (EPID) is an industry-standard cryptographic scheme that offers anonymous attestation. It is based on a group signature scheme constructed from bilinear pairings, and provides anonymity and sophisticated revocation capabilities (private-key based revocation and signature-based revocation). Despite these attractive privacy-preserving features, EPID operations are very computation and memory intensive. In this paper, we present a small-footprint anonymous attestation solution based on EPID that can meet the stringent resource requirements of IoT devices. A specific modular-reduction technique targeting the EPID prime number has been developed, resulting in a 50% latency reduction compared to conventional reduction techniques. Furthermore, we developed a multi-exponentiation technique that significantly reduces the runtime memory requirements. Our proposed design can be implemented as SW-only, or it can utilize an integrated Elliptic Curve and Galois Field HW accelerator. The EPID SW stack has a small object code footprint of 22kB. We developed a prototype on a 32-bit microcontroller that computes EPID signature generation in 17.9s at 32MHz.
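The paper's reduction is specialized to the shape of the EPID prime; for orientation only, here is the generic Barrett reduction that such specialized techniques are typically compared against (toy modulus, not EPID's):

```python
# Generic Barrett reduction sketch (toy modulus; the paper's specialized
# technique for the EPID prime is not reproduced here).
def barrett_reduce(x: int, m: int, mu: int, k: int) -> int:
    q = (x * mu) >> (2 * k)   # approximate quotient via precomputed mu
    r = x - q * m
    while r >= m:             # at most a couple of corrections
        r -= m
    return r

m = (1 << 255) - 19           # toy modulus (the curve25519 prime, not EPID's)
k = m.bit_length()
mu = (1 << (2 * k)) // m      # precomputation, done once per modulus
x = 1234567 ** 31 % (m * m)   # some input below m^2
assert barrett_reduce(x, m, mu, k) == x % m
print(hex(barrett_reduce(x, m, mu, k))[:18])
```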
Song, Huang, Mu, and Wu proposed a new code-based signature scheme, the Rank Quasi-Cyclic Signature (RQCS) scheme (PKC 2019, Cryptology ePrint Archive 2019/053), which is based on RQC, an IND-CCA2 KEM scheme proposed by Aguilar Melchor et al. (NIST PQC Standardization Round 1). Their scheme is an analogue of the Schnorr signature scheme.

In this short note, we investigate the security of the RQCS scheme. We report a key-recovery known-message attack, following the discussion in Aragon, Blazy, Gaborit, Hauteville, and Zémor (Cryptology ePrint Archive 2018/1192), and an experimental result. The key-recovery attack requires only one signature to retrieve the secret key, and recovers it in less than 10 seconds.
The main result of this note is a severe flaw in the description of the zk-SNARK in [BCTV14]. The flaw stems from including redundant elements in the CRS, as compared to that of the original Pinocchio protocol [PHGR16], which are vital not to expose. The flaw enables creating a proof of knowledge for \emph{any} public input given a valid proof for \emph{some} public input. We also provide a proof of security for the [BCTV14] zk-SNARK in the generic group model, when these elements are excluded from the CRS, provided a certain linear algebraic condition is satisfied by the QAP polynomials.
The Walnut Digital Signature Algorithm (WalnutDSA) is a group-theoretic, public-key method that is part of the NIST Post-Quantum Cryptography standardization process. Prior to its submission to NIST, Hart et al. published an attack whose signature forgeries are orders of magnitude longer than a valid signature, making them invalid due to their length. In addition to such forgeries being identified by our current method, we show that with a modest parameter-only increase we can block this attack at the desired security level, without a significant impact on performance, making WalnutDSA completely secure against this attack.
7 February 2019
Anonymous credential (AC) schemes are protocols which allow for authentication of authorized users without compromising their privacy. Of particular interest are non-interactive anonymous credential (NIAC) schemes, where the authentication process only requires the user to send a single message that still conceals their identity. Unfortunately, all known NIAC schemes in the standard model require pairing-based cryptography, which limits them to a restricted set of specific assumptions and requires expensive pairing computations. The notion of keyed-verification anonymous credential (KVAC) was introduced in (Chase et al., CCS'14) as an alternative to standard anonymous credential schemes, allowing for more efficient instantiations; yet, making existing KVAC schemes non-interactive either requires pairing-based cryptography or the Fiat-Shamir heuristic.

In this work, we construct the first non-interactive keyed-verification anonymous credential (NIKVAC) system in the standard model, without pairings. Our scheme is efficient, attribute-based, and supports multi-show unlinkability and anonymity revocation. We achieve this by building upon a combination of algebraic MACs with the recent designated-verifier non-interactive zero-knowledge (DVNIZK) proof of knowledge of Couteau and Chaidos (Eurocrypt'18). Toward our goal of building NIKVAC, we revisit the security analysis of a MAC scheme introduced in (Chase et al., CCS'14), strengthening its guarantees, and we introduce the notion of an oblivious non-interactive zero-knowledge proof system, where the prover can generate non-interactive proofs for statements that he cannot check by himself, having only a part of the corresponding witness, and where the proof can be checked efficiently given the missing part of the witness. We provide an efficient construction of an oblivious DVNIZK, building upon the specific properties of the DVNIZK proof system of Couteau and Chaidos (Eurocrypt'18).
