International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

14 March 2022

Yuval Ishai, Alexis Korb, Paul Lou, Amit Sahai
ePrint Report
A wiretap coding scheme (Wyner, Bell Syst. Tech. J. 1975) enables Alice to reliably communicate a message m to an honest Bob by sending an encoding c over a noisy channel chB, while at the same time hiding m from Eve who receives c over another noisy channel chE.

Wiretap coding is clearly impossible when chB is a degraded version of chE, in the sense that the output of chB can be simulated using only the output of chE. A classic work of Csiszár and Körner (IEEE Trans. Inf. Theory, 1978) shows that the converse does not hold. This follows from their full characterization of the channel pairs (chB, chE) that enable information-theoretic wiretap coding. In this work, we show that in fact the converse does hold when considering computational security; that is, wiretap coding against a computationally bounded Eve is possible if and only if chB is not a degraded version of chE. Our construction assumes the existence of virtual black-box (VBB) obfuscation of specific classes of "evasive" functions that generalize fuzzy point functions, and can be heuristically instantiated using indistinguishability obfuscation. Finally, our solution has the appealing feature of being universal in the sense that Alice's algorithm depends only on chB and not on chE.
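The degradation notion above can be made concrete for binary symmetric channels. The following sketch (not from the paper; channel parameters are illustrative) uses the fact that feeding a BSC(e) output into a BSC(q) yields a BSC with crossover e + q - 2eq, so BSC(p) is a degraded version of BSC(e) exactly when a valid post-processing noise q exists:

```python
# Illustrative sketch: chB = BSC(p) is a degraded version of chE = BSC(e)
# iff Eve's output can be post-processed into Bob's, i.e. there is a q with
# BSC(e) followed by BSC(q) equal to BSC(p). Composed crossover probability:
# e*(1-q) + (1-e)*q = e + q - 2*e*q.

def bsc(p):
    """2x2 transition matrix of a binary symmetric channel."""
    return [[1 - p, p], [p, 1 - p]]

def compose(ch1, ch2):
    """Channel obtained by feeding ch1's output into ch2 (matrix product)."""
    return [[sum(ch1[i][k] * ch2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bsc_degradation_noise(e, p):
    """Crossover q such that BSC(e) composed with BSC(q) equals BSC(p),
    or None if no valid q in [0, 1] exists (i.e. BSC(p) is not degraded)."""
    if e == 0.5:
        return 0.0 if p == 0.5 else None
    q = (p - e) / (1 - 2 * e)
    return q if 0.0 <= q <= 1.0 else None

e, p = 0.1, 0.2  # Bob's channel is noisier than Eve's, hence degraded
q = bsc_degradation_noise(e, p)
assert q is not None
composed = compose(bsc(e), bsc(q))
target = bsc(p)
assert all(abs(composed[i][j] - target[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

In the reverse direction (e.g. e = 0.1, p = 0.05), no valid q exists, matching the intuition that a noisier Eve cannot simulate a cleaner Bob by post-processing alone.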
Lorenzo Grassi, Morten Øygarden, Markus Schofnegger, Roman Walch
ePrint Report
Ciminion is an MPC-friendly pseudo-random function (PRF) recently proposed at Eurocrypt’21. As in the case of other MPC-friendly constructions proposed in the literature (e.g., MiMC, HadesMiMC, Rescue), it aims to minimize the number of multiplications in large finite fields. While MiMC, HadesMiMC, and Rescue are block ciphers, Ciminion is a (modified) Farfalle-like cryptographic function. At the current state of the art, it achieves the best performance in MPC applications. However, Ciminion has a critical weakness. Its security highly relies on the independence of its subkeys, which is achieved by using an expensive key schedule. Many MPC use cases involving symmetric PRFs rely on secretly shared symmetric keys, and hence the expensive key schedule must also be computed in MPC. As a result, Ciminion’s performance is significantly reduced in these use cases. In this paper, we solve this problem. Following the approach introduced by Ciminion’s designers, we propose Megafono, a modified version of Farfalle designed for achieving a small multiplicative complexity without any key schedule. Following this strategy, we present the PRF Hydra, which utilizes both a Lai-Massey construction and a novel construction we name Amaryllises in its nonlinear layer. Amaryllises can be seen as a generalized variant of a Lai-Massey scheme, which allows us to further decrease the multiplicative complexity of Hydra. Based on an extensive security analysis, we implement Hydra in an MPC framework. The results show that it outperforms all MPC-friendly schemes currently published in the literature.
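For readers unfamiliar with the Lai-Massey construction underlying Hydra's nonlinear layer, the following is a generic textbook sketch over a prime field (not Hydra's actual round function or the Amaryllises variant; F, sigma and the field size are illustrative choices). Its appeal for MPC-friendly designs is that one round costs a single evaluation of the nonlinear function F:

```python
# Generic one-round Lai-Massey scheme over GF(P): both branches are updated
# with the same value F(x - y), so only one nonlinear evaluation is needed.
# All concrete choices below are illustrative, not taken from Hydra.

P = 2**31 - 1  # a prime field size (illustrative)

def F(v):
    # toy nonlinear function: a cube in GF(P), one of the cheapest
    # low-multiplication nonlinearities used in MPC-friendly designs
    return pow(v, 3, P)

def sigma(v):
    # orthomorphism: needed because x - y is invariant under the plain
    # Lai-Massey step ((x+t) - (y+t) = x - y); both sigma and sigma - id
    # are permutations of GF(P), which breaks that invariance
    return (2 * v + 1) % P

def lai_massey_round(x, y):
    t = F((x - y) % P)
    return sigma((x + t) % P), (y + t) % P

# Without sigma, the branch difference never changes, round after round:
x, y = 123456789, 987654321
t = F((x - y) % P)
assert ((x + t) - (y + t)) % P == (x - y) % P
```

Equal inputs map through F(0) unchanged on the right branch, e.g. `lai_massey_round(5, 5)` returns `(sigma(5), 5)`.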
Nicoleta-Norica Băcuieți, Lejla Batina, Stjepan Picek
ePrint Report
At CRYPTO'19, A. Gohr proposed neural distinguishers for the lightweight block cipher Speck32/64, achieving better results than the state-of-the-art at that point. However, the motivation for using that particular architecture was not very clear, leading us to investigate whether a smaller and/or better-performing neural distinguisher exists. This paper studies the depth-10 and depth-1 neural distinguishers proposed by Gohr with the aim of finding out whether smaller or better-performing distinguishers for Speck32/64 exist. We first evaluate whether we can find smaller neural networks that match the accuracy of the proposed distinguishers. We answer this question in the affirmative: the depth-1 distinguisher was successfully pruned, resulting in a network that remained within one percentage point of the unpruned network's performance. Having found a smaller network that achieves the same performance, we examine whether its performance can be improved as well. We also study whether processing the input before giving it to the pruned depth-1 network would improve its performance. To this end, convolutional autoencoders were found that managed to reconstruct the ciphertext pairs successfully, and their trained encoders were used as a preprocessor before training the pruned depth-1 network. We found that, even though the autoencoders achieve perfect reconstruction, the pruned network no longer had the necessary complexity to extract useful information from the preprocessed input, motivating us to look at feature importance to get more insights. For this, we used LIME, with results showing that a stronger explainer is needed to assess it correctly.
Azade Rezaeezade, Guilherme Perin, Stjepan Picek
ePrint Report
Profiling side-channel analysis allows evaluators to estimate the worst-case security of a target. When security evaluations relax the assumptions about the adversary's knowledge, profiling models may easily be sub-optimal due to the inability to extract the most informative points of interest from the side-channel measurements. When used for profiling attacks, deep neural networks can learn strong models without feature selection, at the cost of expensive hyperparameter tuning. Unfortunately, due to very large search spaces, one usually observes very different model behaviors, and a widespread outcome is overfitting with poor generalization capacity.

Usually, overfitting or poor generalization would be mitigated by adding more measurements to the profiling phase to reduce estimation errors. This paper provides a detailed analysis of different deep learning model behaviors and shows that adding more profiling traces as a single solution does not necessarily help improve generalization. In fact, we recognize the main problem to be the sub-optimal selection of hyperparameters, which is then difficult to resolve by simply adding more measurements. Instead, we propose to use small hyperparameter tweaks or regularization as techniques to resolve the problem.
Igor Semaev
ePrint Report
Every public-key encryption/decryption algorithm where the set of possible plain-texts is identical to the set of possible cipher-texts may be converted into a digital signature algorithm. The situation is quite different in lattice- and code-based public-key cryptography: on a random input, the decryption algorithm produces a valid plain-text, that is, a signature, only with negligible probability. That explains why it is so difficult to construct a new secure and efficient lattice-based digital signature system. Though several solutions are known and are taking part in the NIST Post-Quantum Standardisation Process, there is still a need to construct digital signature algorithms based on new principles. In this work, a new and efficient digital signature algorithm is suggested. Its design is simple and transparent, and its security is based on the hardness of an approximate closest vector problem in the maximum norm for some q-ary lattices. The complexity parameters are comparable with those of the round-3 NIST digital signature candidates.
Koji Chida, Koki Hamada, Atsunori Ichikawa, Masanobu Kii, Junichi Tomida
ePrint Report
We propose secure two-party computation for a new functionality that we call Private Intersection-Weighted-Sum (PIW-Sum) and present an efficient semi-honestly secure protocol. In this computation, two parties own integer matrices $\mathbf{X}$ and $\mathbf{Y}$, respectively, where each row of the matrices is associated with an identifier. Let $I=(i_{1} ,..., i_{n})$ be the intersection of the identifier sets for the two parties. The goal is to compute the matrix $\widehat{\mathbf{Y}}^{\top}\widehat{\mathbf{X}}$ together with the cardinality $|I|$, where the $j$-th rows of $\widehat{\mathbf{X}}$ and $\widehat{\mathbf{Y}}$ are the rows of $\mathbf{X}$ and $\mathbf{Y}$ with identifier $i_{j}$, respectively. This functionality is a generalization of Private Intersection-Sum (PI-Sum) proposed by Ion et al.~(EuroS\&P'20) and has important real-world applications such as computing a cross tabulation after the equijoin of two tables owned by different parties.

Our protocol is built on a new variant of oblivious pseudorandom function (OPRF), which we construct from the decisional Diffie-Hellman (DDH) assumption. We implement both our PIW-Sum protocol and the most efficient PI-Sum protocol by Ion et al.~and compare their performance in the same environment. This shows that both the communication cost and the computational cost of our protocol are only about 2 times greater than those of the PI-Sum protocol in the case where $\mathbf{X}$ and $\mathbf{Y}$ are column vectors, i.e., the number of columns of $\mathbf{X}$ and $\mathbf{Y}$ is one.
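The PIW-Sum functionality defined above can be written out as a plaintext reference computation (what the two parties jointly compute, with no security; the identifiers and matrix values below are made up for the demo):

```python
# Plaintext reference for the PIW-Sum functionality: each party holds rows of
# an integer matrix keyed by an identifier; the output is |I| and Y_hat^T X_hat,
# where rows are matched on the shared identifiers I.

def piw_sum(x_rows, y_rows):
    """x_rows, y_rows: dict mapping identifier -> list of ints (a matrix row).
    Returns (|I|, Y_hat^T @ X_hat) computed over the intersection I."""
    ids = sorted(set(x_rows) & set(y_rows))
    if not ids:
        return 0, []
    n_x = len(next(iter(x_rows.values())))
    n_y = len(next(iter(y_rows.values())))
    # result[r][c] = sum over i in I of Y[i][r] * X[i][c]
    result = [[0] * n_x for _ in range(n_y)]
    for i in ids:
        for r in range(n_y):
            for c in range(n_x):
                result[r][c] += y_rows[i][r] * x_rows[i][c]
    return len(ids), result

# Column-vector case (one column each): Y^T X reduces to the intersection-sum
# of products, generalizing Ion et al.'s PI-Sum.
x = {"alice": [2], "bob": [3], "carol": [5]}
y = {"bob": [10], "carol": [20], "dave": [30]}
card, m = piw_sum(x, y)
assert card == 2 and m == [[3 * 10 + 5 * 20]]  # intersection {bob, carol}
```

With more columns this is exactly the cross tabulation after an equijoin of the two tables, as described in the abstract.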
Matthias J. Kannwischer, Peter Schwabe, Douglas Stebila, Thom Wiggers
ePrint Report
The NIST post-quantum cryptography (PQC) standardization project is probably the largest and most ambitious cryptography standardization effort to date, and as such it makes an excellent case study of cryptography standardization projects. It is expected that with the end of round 3 in early 2022, NIST will announce the first set of primitives to advance to standardization, so it seems like a good time to look back and see what lessons can be learned from this effort. In this paper, we take a look at one specific aspect of the NIST PQC project: software implementations. We observe that many implementations included as a mandatory part of the submission packages were of poor quality and ignored decades-old standard techniques from software engineering to guarantee a certain baseline quality level. As a consequence, it was not possible to readily use those implementations in experiments for post-quantum protocol migration and software optimization efforts without first spending a significant amount of time to clean up the submitted reference implementations.

We do not mean to criticize cryptographers who submitted proposals, including software implementations, to NIST PQC: after all, it cannot reasonably be expected from every cryptographer to also have expertise in software engineering. Instead, we suggest how standardization bodies like NIST can improve the software-submission process in future efforts to avoid such issues with submitted software. More specifically, we present PQClean, an extensive (continuous-integration) testing framework for PQC software, which now also contains "clean" implementations of the NIST round 3 candidate schemes. We argue that the availability of such a framework---either in an online continuous-integration setup, or just as an offline testing system---long before the submission deadline would have resulted in much better implementations included in NIST PQC submissions and overall would have saved the community and probably also NIST a lot of time and effort.
Brent Waters, David J. Wu
ePrint Report
A non-interactive batch argument for NP provides a way to amortize the cost of NP verification across multiple instances. Such arguments enable a prover to convince a verifier of multiple NP statements with communication much smaller than the total witness length and verification time much smaller than individually checking each instance.

In this work, we give the first construction of a non-interactive batch argument for NP from standard assumptions on groups with bilinear maps (specifically, from either the subgroup decision assumption in composite-order groups or from the $k$-Lin assumption in prime-order groups for any $k \ge 1$). Previously, batch arguments for NP were only known from LWE, or a combination of multiple assumptions, or from non-standard/non-falsifiable assumptions. Moreover, our work introduces a new direct approach for batch verification and avoids heavy tools like correlation-intractable hash functions or probabilistically-checkable proofs common to previous approaches.

As corollaries to our main construction, we also obtain the first publicly-verifiable non-interactive delegation scheme for RAM programs with a CRS of sublinear size (in the running time of the RAM program), as well as the first aggregate signature scheme (supporting bounded aggregation) from standard assumptions on bilinear maps.
Tuan-Hong Chua, Iftekhar Salam
ePrint Report
Cybersecurity has become one of the focuses of organisations. The number of cyberattacks keeps increasing as Internet usage continues to grow. An intrusion detection system (IDS) is an alarm system that helps to detect cyberattacks. As new types of cyberattacks continue to emerge, researchers focus on developing machine learning (ML)-based IDS to detect zero-day attacks. When evaluating the performance of an IDS on detecting zero-day attacks, researchers usually remove some or all attack samples from the training dataset and only include them in the testing dataset. Although this method may show the ability of an IDS to detect unknown attacks, it does not reflect the long-term performance of the IDS, as it only captures changes in the types of attacks. In this paper, we focus on evaluating the long-term performance of ML-based IDS. To achieve this goal, we propose evaluating the ML-based IDS using a dataset that is created later than the training dataset. The proposed method can better assess the long-term performance of an ML-based IDS, as the testing dataset reflects both the changes in the types of attacks and the changes in network infrastructure over time. We have implemented six of the most popular ML models used for IDS: decision tree (DT), random forest (RF), support vector machine (SVM), naïve Bayes (NB), artificial neural network (ANN) and deep neural network (DNN). Our experiments using the CIC-IDS2017 and the CSE-CIC-IDS2018 datasets show that SVM and ANN are the most resistant to overfitting. Our results also show that DT and RF suffer the most from overfitting, although they perform well on the training dataset. On the other hand, our experiments using the LUFlow dataset show that all models can perform well when the difference between the training and testing datasets is small.
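The proposed evaluation methodology can be sketched as follows: instead of carving the test set out of the same dataset, train on an "older" capture and test on a "newer" one, so that drift in attacks and infrastructure shows up in the measured accuracy. The classifier and the feature vectors below are toy stand-ins, not CIC-IDS2017 features or any of the six models from the paper:

```python
# Toy nearest-centroid classifier used only to illustrate the train-on-older,
# test-on-newer evaluation setup; all data below is synthetic.

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

def accuracy(centroids, samples):
    hits = sum(predict(centroids, vec) == lbl for vec, lbl in samples)
    return hits / len(samples)

# "2017" training capture and a later "2018" capture (synthetic toy data):
train_old = [([1.0, 0.0], "benign"), ([0.9, 0.1], "benign"),
             ([0.0, 1.0], "attack"), ([0.1, 0.9], "attack")]
test_new = [([0.8, 0.2], "benign"), ([0.2, 0.8], "attack"),
            ([0.5, 0.9], "attack")]  # slightly drifted traffic

model = train_centroids(train_old)
acc = accuracy(model, test_new)  # long-term performance estimate
```

The key point is that `test_new` is never drawn from the same capture as `train_old`, so the reported accuracy reflects temporal drift rather than an i.i.d. split.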
Dung Bui, Geoffroy Couteau
ePrint Report
Pseudorandom correlation generators (PCG) allow two parties to generate long correlated pseudorandom strings with minimal communication. Since secure computation applications typically benefit from such protocols, we explore the use of PCG to improve private set intersection (PSI) protocols. We obtain two main results.

In our first result, we construct a new highly optimized semi-honest PSI. Our protocol builds upon the protocol of (Kolesnikov et al., CCS 2016), and significantly improves it using multiple optimizations, including a new oblivious pseudorandom function (built from a PCG for the subfield-VOLE correlation), and a new technique to handle a generalized variant of Cuckoo hashing tailored to our setting. For sets with elements of size $\ell$ bits with $\ell \leq 70$, our protocol outperforms all known PSI protocols, by as much as $42\%$ when $\ell = 32$ and with $n = 2^{20}$ items (compared to the best known protocol of (Rindal and Schoppmann, Eurocrypt 2021), enhanced with recent improvements). For these parameters, the communication of our protocol is extremely small: only $129n$ bits of total communication.
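For context on the hashing technique mentioned above, here is standard two-function Cuckoo hashing with a small stash, the textbook building block behind many PSI protocols (the paper's generalized variant differs in details not sketched here; the hash construction and table size are toy choices):

```python
# Textbook Cuckoo hashing: each item has two candidate slots; insertion
# evicts an occupant when both are full, and a bounded number of evictions
# is allowed before the item falls back to a stash.
import hashlib

def make_hash(seed, table_size):
    def h(item):
        digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % table_size
    return h

def cuckoo_insert(table, hashes, item, max_kicks=64):
    """Insert item; returns None on success, or the item left over for the
    stash after max_kicks evictions."""
    for _ in range(max_kicks):
        for h in hashes:
            slot = h(item)
            if table[slot] is None:
                table[slot] = item
                return None
        # both slots full: evict the occupant of the first slot and retry
        # with it (a simple eviction policy; real variants alternate slots)
        slot = hashes[0](item)
        table[slot], item = item, table[slot]
    return item

size = 16
hashes = [make_hash(0, size), make_hash(1, size)]
table = [None] * size
stash = []
for x in range(10):
    evicted = cuckoo_insert(table, hashes, x)
    if evicted is not None:
        stash.append(evicted)

# every inserted item is findable in one of its two slots or in the stash
for x in range(10):
    assert any(table[h(x)] == x for h in hashes) or x in stash
```

The two-slot lookup invariant at the end is what PSI protocols exploit: the receiver only needs to compare against the (at most) two table positions and the small stash per item.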

In our second result, we use a PCG for a new correlation, called the subfield ring-OLE correlation. We construct a new protocol with attractive features: communication competitive with the state of the art, and fully malicious security in the standard model (no random oracle or tailored assumptions on hash functions). To our knowledge, our protocol outperforms all previous protocols in the standard model by a large margin, and is competitive even with ROM-based protocols. Furthermore, our protocol leads to a batch non-interactive PSI, where (after a one-time short interaction) a client can broadcast a single compact encoding of its dataset, and compute its intersection with the datasets of multiple servers after receiving a single message from each server.
Dandan Yuan, Shujie Cui, Giovanni Russello
ePrint Report
Verifiable Dynamic Searchable Symmetric Encryption (VDSSE) enables users to securely outsource databases (document sets) to cloud servers and perform searches and updates. The verifiability property prevents users from accepting incorrect search results returned by a malicious server. However, we discover that the community currently focuses only on preventing malicious behavior by the server and ignores incorrect updates from the client, which are very likely to happen since there is no record on the client against which to check. Indeed, most existing VDSSE schemes cannot tolerate incorrect updates from the client. For instance, deleting a nonexistent keyword-identifier pair can break their correctness and soundness.

In this paper, we demonstrate vulnerabilities in a class of existing VDSSE schemes that prevent them from ensuring correctness and soundness in the presence of incorrect updates. We propose an efficient fault-tolerant solution that treats any DSSE scheme as a black box and turns it into a fault-tolerant VDSSE secure in the malicious model. Forward privacy is an important property of DSSE that prevents the server from linking an update operation to previous search queries. Our approach can also turn any forward-secure DSSE scheme into a fault-tolerant VDSSE without breaking the forward security guarantee.

In this work, we take FAST [1] (TDSC 2020), a forward-secure DSSE scheme, as an example, implement a prototype of our solution, and evaluate its performance. Even when compared with the previous fastest forward-private construction that does not support fault tolerance, the experiments show that our construction reduces client storage by a factor of 9 and has better search and update efficiency.
Vivian Fang, Lloyd Brown, William Lin, Wenting Zheng, Aurojit Panda, Raluca Ada Popa
ePrint Report
The last decade has seen an explosion in the number of new secure multi-party computation (MPC) protocols that enable collaborative computation on sensitive data. No single MPC protocol is optimal for all types of computation. As a result, researchers have created hybrid-protocol compilers that translate a program into a hybrid protocol that mixes different MPC protocols. Hybrid-protocol compilers crucially rely on accurate cost models, which are handwritten by the compilers' developers, to choose the correct schedule of protocols.

In this paper, we propose CostCO, the first automatic MPC cost modeling framework. CostCO develops a novel API to interface with a variety of MPC protocols, and leverages domain-specific properties of MPC in order to enable efficient and automatic cost-model generation for a wide range of MPC protocols. CostCO employs a two-phase experiment design to efficiently synthesize cost models of the MPC protocol’s runtime as well as its memory and network usage. We verify CostCO’s modeling accuracy for several full circuits, characterize the engineering effort required to port existing MPC protocols, and demonstrate how hybrid-protocol compilers can leverage CostCO’s cost models.
Akiko Inoue, Kazuhiko Minematsu
ePrint Report
Authenticated encryption (AE) is a symmetric-key encryption function that provides confidentiality and authenticity of a message. One of the evaluation criteria for AE is state size, i.e., the memory size needed for encryption. State size is especially important when a cryptosystem is implemented on constrained devices, while trivially reducing it by using a small primitive is generally not acceptable, as it leads to degraded security. Recently, the state size of AE has been very actively studied, and a number of small-state AE schemes have been proposed, but they are inherently serial. It is thus natural to ask whether we can come up with a parallelizable AE with a smaller state size than the state of the art.

In this paper, we study the seminal OCB mode for parallelizable AE and propose a method to reduce its state size without losing its bit security. More precisely, while (the smallest-state variant of) OCB has a $3n$-bit state, by carefully halving the checksum we can achieve a $2.5n$-bit state, while keeping the original $n/2$-bit security. We also propose an inverse-free variant of it based on OTR. While the original OTR has a $4n$-bit state, ours has a $3.5n$-bit state. To our knowledge, these numbers are the smallest ones achieved by blockcipher modes for parallel AE and inverse-free parallel AE.
Rachit Garg, Rishab Goyal, George Lu
ePrint Report
Functional Encryption (FE) is a powerful notion of encryption which enables computations and partial message recovery of encrypted data. In FE, each decryption key is associated with a function $f$ such that decryption recovers the function evaluation $f(m)$ from an encryption of $m$. Informally, security states that a user with access to function keys $sk_{f_1}, sk_{f_2}, \ldots$ (and so on) can only learn $f_1(m), f_2(m), \ldots$ (and so on) but nothing more about the message. The system is said to be $q$-bounded collusion resistant if the security holds as long as an adversary gets access to at most $q = q(\lambda)$ decryption keys. In the last decade, numerous works have proposed many FE constructions from a wide array of algebraic and general cryptographic assumptions, and proved their security in the bounded collusion model.

However, until very recently, all these works studied bounded collusion resistance in a "static model", where the collusion bound $q$ was a global system parameter. While the static collusion model led to great research progress in the community, it has many major drawbacks. Very recently, Agrawal et al. (Crypto 2021) and Garg et al. (Eurocrypt 2022) independently introduced the dynamic model for bounded collusion resistance, where the collusion bound $q$ was a fluid parameter that was not globally set but only chosen by each encryptor. The dynamic collusion model enabled harnessing the many virtues of the static collusion model, while avoiding its various drawbacks.

In this work, we give a simple and generic approach to upgrade any scheme from the static collusion model to the dynamic collusion model. Our result captures all existing results in the dynamic model in the form of a single unified framework, and also gives new results as simple corollaries with a lot more potential in the future. An interesting artifact of our result is that it gives a generic way to match existing lower bounds in functional encryption.
Daniel R. L. Brown
ePrint Report
Lattice cryptography uses fixed primes. Kolmogorov’s descriptional complexity of the primes might interest the numerically curious.
Lennert Wouters, Benedikt Gierlichs, Bart Preneel
ePrint Report
We investigate the susceptibility of the Texas Instruments SimpleLink platform microcontrollers to non-invasive physical attacks. We extracted the ROM bootloader of these microcontrollers and then analysed it using static analysis augmented with information obtained through emulation. We demonstrate a voltage fault injection attack targeting the ROM bootloader that allows an adversary to enable debug access on a previously locked microcontroller within seconds. Information provided by Texas Instruments reveals that one of our voltage fault injection attacks abuses functionality that is left over from the integrated circuit manufacturing process. The demonstrated physical attack allows an adversary to extract the firmware (i.e. intellectual property) and to bypass secure boot. Additionally, we mount side-channel attacks and differential fault analysis attacks on the hardware AES co-processor. To demonstrate the practical applicability of these attacks, we extract the firmware from a Tesla Model 3 key fob. This paper describes a case study covering Texas Instruments SimpleLink microcontrollers. Similar attack techniques can be, and have been, applied to microcontrollers from other manufacturers. The goal of our work is to document our analysis methodology and to ensure that system designers are aware of these vulnerabilities, so that they can take them into account during the product design phase. All identified vulnerabilities were responsibly disclosed.
Arthur Beckers, Lennert Wouters, Benedikt Gierlichs, Bart Preneel, Ingrid Verbauwhede
ePrint Report
We evaluate eight implementations of provably secure side-channel masking schemes that were published in top-tier academic venues such as Eurocrypt, Asiacrypt, CHES and SAC. Specifically, we evaluate the side-channel attack resistance of eight open-source, first-order side-channel protected AES-128 software implementations on the Cortex-M4 platform. Using a t-test based leakage assessment, we demonstrate that all implementations produce first-order leakage with as little as 10,000 traces. Additionally, we demonstrate that all except for two Inner Product Masking-based implementations are vulnerable to a straightforward correlation power analysis attack. We provide an assembly-level analysis showing potential sources of leakage for two implementations. Some of the studied implementations were provided for benchmarking purposes. We demonstrate several flaws in the benchmarking procedures and question the usefulness of the reported performance numbers in the face of the implementations' poor side-channel resistance. This work serves as a reminder that practical evaluations cannot be omitted in the context of side-channel analysis.
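The t-test leakage assessment mentioned above can be sketched as follows: compute Welch's t statistic per sample point between two groups of traces (e.g. fixed-versus-random input), and flag points where |t| exceeds the commonly used threshold of 4.5. The traces below are synthetic toy data, not Cortex-M4 measurements:

```python
# Welch's t-test per sample point across two groups of power traces.
# A |t| above 4.5 at any point is the usual first-order leakage criterion.

def welch_t(group_a, group_b):
    """group_*: lists of traces (equal-length lists of floats).
    Returns the t statistic per sample point."""
    def stats(group, i):
        vals = [trace[i] for trace in group]
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1)
        return mean, var, n
    points = len(group_a[0])
    t = []
    for i in range(points):
        ma, va, na = stats(group_a, i)
        mb, vb, nb = stats(group_b, i)
        denom = (va / na + vb / nb) ** 0.5
        t.append((ma - mb) / denom if denom > 0 else 0.0)
    return t

# synthetic traces: point 0 behaves identically in both groups,
# point 1 has a clear mean shift (leaks)
fixed = [[1.0, 5.0 + 0.1 * i] for i in range(20)]
rand_ = [[1.0, 2.0 + 0.1 * i] for i in range(20)]
t_vals = welch_t(fixed, rand_)
assert abs(t_vals[0]) < 4.5 < abs(t_vals[1])
```

In a real evaluation the two groups come from encryptions with a fixed versus a random plaintext under the same key, and the test is run over the full trace length.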
Pierre Civit, Maria Potop-Butucaru
ePrint Report
This work extends the composable secure-emulation framework of Canetti et al. to dynamic settings. Our work builds on top of dynamic probabilistic I/O automata, a recent framework introduced to model dynamic probabilistic systems. Our extension is an important tool towards the formal verification of protocols combining probabilistic distributed systems and cryptography in dynamic settings (e.g., blockchains, secure distributed computation, cybersecure distributed protocols, etc.).
Michail Moraitis, Elena Dubrova
ePrint Report
Hardware obfuscation by redundancy addition is a well-known countermeasure against reverse engineering. For FPGA designs, such a technique can be implemented with a small overhead, however, its effectiveness is heavily dependent on the stealthiness of the redundant elements. Since there are powerful tools for combinational redundancy removal, opting for sequential redundancy is believed to result in stronger obfuscation. However, in this paper, we demonstrate that it is possible to identify sequential redundancy in obfuscated SRAM FPGA designs by ensuring the full controllability of each instantiated look-up table input via iterative bitstream modification. The presented algorithm works directly on bitstream and does not require the possession of a flattened netlist. The feasibility of our approach is verified on the example of an obfuscated SNOW 3G design implemented in a Xilinx 7-series FPGA.

13 March 2022

Karlsruhe, Germany, 29 September - 30 September 2022
Event Calendar
Event date: 29 September to 30 September 2022
Submission deadline: 10 June 2022
Notification: 12 August 2022