International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

27 May 2024

Arnaud Sipasseuth
ePrint Report
In this paper, we introduce a new probability parameter into the instantiations of the Goldreich-Micali-Wigderson protocol with Fiat-Shamir and unbalanced challenges used in ALTEQ, a recent NIST PQC candidate in the call for additional signatures. Setting this probability to 100% does not change the scheme; below 100%, it modifies the public challenge generation process by injecting potential rejections of otherwise completely valid inputs. From a theoretical point of view, this does not improve the asymptotic hardness of the scheme, negatively affects the efficiency of the signer, and might itself seem trivial. However, from a practical point of view, implementation-wise and performance-wise, this triviality allows an extra degree of freedom in optimizing parameters, as the heuristic security level against forgers is also increased: previously valid combinations can now be deemed invalid. This allows us to make trade-offs that reduce the computational load on verifiers, accelerating verification and marginally reducing the signature size, at the cost of making signing slower and unlikely to be constant-time. In particular, this extra degree of freedom allows implementation choices that enable smoother and faster executions of the aforementioned protocols, especially in the context of parallelization using vectorized instructions. We also demonstrate the usefulness of our proposal to ALTEQ for other options, when slowing down the signing process is not an issue: significantly smaller signatures but longer verification, or lower public key sizes. The ideas presented are generic and can be used beyond ALTEQ.
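As a toy illustration of the rejection idea (not ALTEQ's actual challenge derivation; the hash construction, parameter names, and rejection rule below are hypothetical), the following Python sketch derives Fiat-Shamir-style challenges and discards a fraction of otherwise valid candidates with a tunable acceptance probability, so previously valid challenge combinations become unavailable to a forger at the cost of extra work for the signer:

# Illustrative sketch only: a Fiat-Shamir-style challenge derivation in which a
# candidate challenge is rejected with probability (1 - p), shrinking the set of
# challenges a forger can target.  This is NOT the ALTEQ specification; the hash
# construction, parameters, and rejection rule are hypothetical.
import hashlib

def derive_challenges(commitment, rounds, n_choices, p_accept_num, p_accept_den):
    challenges = []
    counter = 0
    for _ in range(rounds):
        while True:
            digest = hashlib.shake_256(commitment + counter.to_bytes(8, "big")).digest(9)
            counter += 1
            gate, value = digest[0], int.from_bytes(digest[1:], "big")
            # Rejection step: with probability roughly (1 - p_accept_num/p_accept_den)
            # a perfectly valid candidate is discarded; the signer pays extra hashing,
            # but previously valid challenge combinations disappear from the forger's reach.
            if gate % p_accept_den < p_accept_num:
                challenges.append(value % n_choices)
                break
    return challenges

# Example: accept candidates with probability 3/4 over 4 rounds.
print(derive_challenges(b"commitment-bytes", rounds=4, n_choices=256,
                        p_accept_num=3, p_accept_den=4))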
Damiano Abram, Lawrence Roy, Peter Scholl
ePrint Report
This work introduces homomorphic secret sharing (HSS) with succinct share size. In HSS, private inputs are shared between parties, who can then homomorphically evaluate a function on their shares, obtaining a share of the function output. In succinct HSS, a portion of the inputs can be distributed using shares whose size is sublinear in the number of such inputs. The parties can then locally evaluate a function $f$ on the shares, with the restriction that $f$ must be linear in the succinctly shared inputs.

We construct succinct, two-party HSS for branching programs, based on either the decisional composite residuosity assumption, a DDH-like assumption in class groups, or learning with errors with a superpolynomial modulus-to-noise ratio. We then give several applications of succinct HSS, which were only previously known using fully homomorphic encryption, or stronger tools:

- Succinct vector oblivious linear evaluation (VOLE): Two parties can obtain secret shares of a long, arbitrary vector $\vec x$, multiplied by a scalar $\Delta$, with communication sublinear in the size of the vector.
- Batch, multi-party distributed point functions: A protocol for distributing a batch of secret, random point functions among $N$ parties, for any polynomial $N$, with communication sublinear in the number of DPFs.
- Sublinear MPC for any number of parties: Two new constructions of MPC with sublinear communication complexity, with $N$ parties for any polynomial $N$: (1) for general layered Boolean circuits of size $s$, with communication $O(N s/\log\log s)$, and (2) for layered, sufficiently wide Boolean circuits, with communication $O(N s/\log s)$.
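As a toy reference point for the HSS syntax above (plain two-party additive sharing over a prime field, not the succinct construction from the paper; the modulus and all names are illustrative), the following sketch shows parties locally evaluating a function that is linear in the shared inputs and obtaining additive shares of the output:

# Toy illustration of the HSS evaluation interface using plain additive secret
# sharing over a prime field: each party locally evaluates a function that is
# linear in the shared inputs and ends up with an additive share of the output.
# This is NOT the succinct HSS construction from the paper; shares here are as
# long as the inputs, and only the interface is being illustrated.
import secrets

P = 2**61 - 1  # a prime modulus (hypothetical choice)

def share(x):
    r = secrets.randbelow(P)
    return r, (x - r) % P

def local_eval(shares, coeffs, public_offset, party):
    # f(x1,...,xk) = public_offset + sum_i coeffs[i] * x_i  (linear in the shared inputs).
    acc = sum(c * s for c, s in zip(coeffs, shares)) % P
    if party == 0:               # the public constant is added by one party only
        acc = (acc + public_offset) % P
    return acc

inputs = [7, 11, 13]
coeffs = [2, 3, 5]
shares0, shares1 = zip(*(share(x) for x in inputs))
y0 = local_eval(list(shares0), coeffs, public_offset=100, party=0)
y1 = local_eval(list(shares1), coeffs, public_offset=100, party=1)
expected = (100 + sum(c * x for c, x in zip(coeffs, inputs))) % P
assert (y0 + y1) % P == expected
print("output shares recombine correctly:", (y0 + y1) % P == expected)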
Mehmet Sabir Kiraz, Enrique Larraia, Owen Vaughan
ePrint Report
We explain how to extend the Bitcoin backbone model of Garay et al. (Eurocrypt 2015) to accommodate redactable blockchains. Our extension captures fluid blockchain-based databases (with mutability requirements) and compliance with existing legislation, such as the GDPR right to be forgotten, or the need to erase offending data from nodes’ databases that would otherwise provoke legal shutdowns. Our redactable backbone protocol retains the essential properties of blockchains. Leveraging zero-knowledge proofs, old data can be erased without requiring trusted third parties or heuristics about past chain validation. Our solution can be implemented on Bitcoin immediately, without hard forks, and it is scalable. It allows the redaction of data from UTXOs or unconfirmed transactions that have not yet flooded the network, while guaranteeing invariance of the Bitcoin state. Thus, offending data does not need to persist in the system, not even temporarily.
Sébastien Canard, Caroline Fontaine, Duong Hieu Phan, David Pointcheval, Marc Renard, Renaud Sirdey
ePrint Report
In a recent Eurocrypt'24 paper, Manulis and Nguyen proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered, as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve on their work by defining and investigating a variant of their security notion suitable for the more general case where approximate FHE schemes are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD rather than CPA security, we start from the former notion to define our new vCCAD security notion. Although we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the landscape of security notions is much simpler for exact FHE than when approximate schemes are included; in that case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Charlotte Hoffmann
ePrint Report
Traceable threshold secret sharing schemes, introduced by Goyal, Song and Srinivasan (CRYPTO'21), make it possible to provably trace leaked shares back to the parties that leaked them. The authors give the first definition and construction of traceable secret sharing schemes; however, the size of the shares in their construction is quadratic in the size of the secret. Boneh, Partap and Rotem (CRYPTO'24) recently proposed a new definition of traceable secret sharing and the first practical constructions. In their definition, one considers a reconstruction box $R$ that contains $f$ leaked shares and, on input $t-f$ additional shares, outputs the secret $s$. A scheme is traceable if one can identify the leaked shares inside the box $R$ given only black-box access to $R$. Boneh, Partap and Rotem give constructions from Shamir's secret sharing and Blakley's secret sharing. The constructions are efficient, as the size of the secret shares is only twice the size of the secret.

In this work we present the first traceable secret sharing scheme based on the Chinese remainder theorem. This was stated as an open problem by Boneh, Partap and Rotem, as it gives rise to traceable secret sharing with weighted threshold access structures. The scheme is based on Mignotte's secret sharing and increases the size of the shares of the standard Mignotte secret sharing scheme by a factor of $2$.
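For readers unfamiliar with CRT-based sharing, here is a minimal sketch of the standard $(t, n)$ Mignotte scheme that the construction builds on; it is the non-traceable baseline, not the new traceable scheme, and the moduli below are tiny illustrative values:

# Minimal sketch of standard (t, n) Mignotte secret sharing over the CRT,
# i.e., the non-traceable baseline scheme; moduli are tiny illustrative values.
from math import prod

def crt(residues, moduli):
    # Chinese remainder theorem for pairwise coprime moduli.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# A (t, n) = (3, 5) Mignotte sequence: pairwise coprime, and the product of the
# 3 smallest moduli exceeds the product of the 2 largest ones.
moduli = [11, 13, 17, 19, 23]
t = 3
beta = prod(moduli[-(t - 1):])   # product of the t-1 largest moduli
alpha = prod(moduli[:t])         # product of the t smallest moduli
assert beta < alpha

secret = 1234                    # must satisfy beta < secret < alpha
assert beta < secret < alpha
shares = [secret % m for m in moduli]        # share i is simply secret mod m_i

# Any t shares recover the secret via the CRT; fewer than t do not determine it.
recovered = crt([shares[0], shares[2], shares[4]], [moduli[0], moduli[2], moduli[4]])
assert recovered == secret
print("recovered:", recovered)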
Qian Guo, Erik Mårtensson, Adrian Åström
ePrint Report
In this paper, we study the robustness of Kyber, the Learning With Errors (LWE)-based Key Encapsulation Mechanism (KEM) chosen for standardization by NIST, against key mismatch attacks. We demonstrate that Kyber's security levels can be compromised with a few mismatch queries by striking a balance between the parallelization level and the cost of lattice reduction for post-processing. This highlights the imperative need to strictly prohibit key reuse in CPA-secure Kyber.

We further propose an adaptive method to enhance parallel mismatch attacks, initially proposed by Shao et al. at AsiaCCS 2024, thereby significantly reducing query complexity. This method combines the adaptive attack with post-processing via lattice reduction to retrieve the final secret key entries. Our method proves its efficacy by reducing query complexity by 14.6 % for Kyber512 and 7.5 % for Kyber768/Kyber1024.

Furthermore, this approach has the potential to substantially improve multi-value Plaintext-Checking (PC) oracle-based side-channel attacks against the CCA-secure version of Kyber KEM.
Jana Berušková, Martin Jureček, Olha Jurečková
ePrint Report
This paper deals with reducing the secret key computation time of small scale variants of the AES cipher using algebraic cryptanalysis, which is accelerated by data mining methods. This work is based on the known plaintext attack and aims to speed up the calculation of the secret key by processing the polynomial equations extracted from plaintext-ciphertext pairs. Specifically, we propose to transform the overdefined system of polynomial equations over GF(2) into a new system so that the computation of the Gröbner basis using the F4 algorithm takes less time than in the case of the original system. The main idea is to group similar polynomials into clusters, and for each cluster, sum the two most similar polynomials, resulting in simpler polynomials. We compare different data mining techniques for finding similar polynomials, such as clustering or locality-sensitive hashing (LSH). Experimental results show that using the LSH technique, we get a system of equations for which we can calculate the Gröbner basis the fastest compared to the other methods that we consider in this work. Experimental results also show that the time to calculate the Gröbner basis for the transformed system of equations is significantly reduced compared to the case when the Gröbner basis was calculated from the original non-transformed system. This paper demonstrates that reducing an overdefined system of equations reduces the computation time for finding a secret key.
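To make the simplification idea concrete, the toy sketch below represents GF(2) polynomials as sets of monomials, picks the most similar pair by Jaccard similarity, and sums them so that shared monomials cancel; this illustrates the principle only and is not the paper's clustering/LSH pipeline (all names are ad hoc):

# Toy illustration of the simplification idea: polynomials over GF(2) are sets
# of monomials (a monomial is a frozenset of variable indices); adding two
# similar polynomials cancels their common monomials and yields a shorter one.
# This shows the principle only, not the paper's clustering/LSH pipeline.
from itertools import combinations

def poly(*monomials):
    return frozenset(frozenset(m) for m in monomials)

def add_gf2(p, q):
    # Over GF(2), addition is XOR: shared monomials cancel.
    return p ^ q

def jaccard_similarity(p, q):
    return len(p & q) / len(p | q)

system = [
    poly((1,), (2,), (1, 2), (3,)),      # x1 + x2 + x1*x2 + x3
    poly((1,), (2,), (1, 2), (4,)),      # x1 + x2 + x1*x2 + x4
    poly((2, 3), (5,)),                  # x2*x3 + x5
]

# Pick the two most similar polynomials and replace them by their (shorter) sum.
(i, j) = max(combinations(range(len(system)), 2),
             key=lambda ij: jaccard_similarity(system[ij[0]], system[ij[1]]))
summed = add_gf2(system[i], system[j])
print("most similar pair:", i, j)
print("their sum has", len(summed), "monomials instead of",
      len(system[i]), "and", len(system[j]))   # x3 + x4: only 2 monomials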
Yacov Manevich, Hagar Meir, Kaoutar Elkhiyaoui, Yoav Tock, May Buzaglo
ePrint Report
Arma is a Byzantine Fault Tolerant (BFT) consensus system designed to achieve horizontal scalability across all hardware resources: network bandwidth, CPU, and disk I/O. In contrast to preceding BFT protocols, Arma separates the dissemination and validation of client transactions from the consensus process, restricting the latter to totally ordering only the metadata of batches of transactions. This separation enables each party to distribute compute and storage resources for transaction validation, dissemination, and disk I/O among multiple machines, resulting in horizontal scalability. Additionally, Arma ensures censorship resistance by imposing a maximum time limit on the inclusion of client transactions. We built and evaluated two Arma prototypes: the first is a standalone system handling over 200,000 transactions per second; the second is integrated into Hyperledger Fabric, speeding up its consensus by an order of magnitude.
Julian Loss, Kecheng Shi, Gilad Stern
ePrint Report
Understanding the fault tolerance of Byzantine Agreement protocols is an important question in distributed computing. While the setting of Byzantine faults has been thoroughly explored in the literature, the (arguably more realistic) omission fault setting is far less studied. In this paper, we revisit the recent work of Loss and Stern, who gave the first protocol in the mixed fault model tolerating $t$ Byzantine faults, $s$ send faults, and $r$ receive faults, when $2t+r+s<n$ [...], or a broadcast protocol for $s+r=n$ and $s>1$, even without overlapping faults.
Susumu Kiyoshima
ePrint Report
Resettable statistical zero-knowledge [Garg--Ostrovsky--Visconti--Wadia, TCC 2012] is a strong privacy notion that guarantees statistical zero-knowledge even when the prover uses the same randomness in multiple proofs.

In this paper, we show an equivalence between resettable statistical zero-knowledge arguments for $NP$ and witness encryption schemes for $NP$:
- Positive result: For any $NP$ language $L$, a resettable statistical zero-knowledge argument for $L$ can be constructed from a witness encryption scheme for $L$ under the assumption that one-way functions exist.
- Negative result: The existence of even resettable statistical witness-indistinguishable arguments for $NP$ implies the existence of witness encryption schemes for $NP$ under the assumption that one-way functions exist.
The positive result is obtained by naturally extending existing techniques (and is likely already well-known among experts). The negative result is our main technical contribution.

To explore workarounds for the negative result, we also consider resettable security in a model where the honest party's randomness is only reused with fixed inputs. We show that resettable statistically hiding commitment schemes are impossible even in this model.
Ali Raya, Vikas Kumar, Sugata Gangopadhyay
ePrint Report
NTRU-like cryptosystems are among the most studied lattice-based post-quantum candidates. While most NTRU proposals have been introduced over a commutative ring of quotient polynomials, other rings can be used. Noncommutative algebra was endorsed long ago as a direction for building new variants of NTRU. The first attempt to construct a noncommutative variant was due to Hoffstein and Silverman, motivated by stronger resistance to lattice attacks. Their scheme was built over the group ring of a dihedral group. However, its design differed from standard NTRU and was soon found vulnerable to algebraic attacks. In this work, we revive the group ring NTRU over the dihedral group as an instance of the GR-NTRU framework. Unlike many proposals of noncommutative variants in the literature, our work focuses on putting the scheme into practice. We address all the aspects needed to make our scheme implementable: we propose an efficient inversion algorithm over the new setting of the noncommutative ring, describe the decryption failure model, and analyze the lattice associated with our instantiation. Finally, we discuss the best-known attacks against our scheme and provide an implementation targeting the 128-bit, 192-bit, and 256-bit security levels as proof of its practicality.
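As a small illustration of the underlying algebra (not the paper's parameters, key generation, or inversion algorithm; the modulus and dimension below are toy values), the following sketch multiplies elements of the group ring $\mathbb{Z}_q[D_N]$ of the dihedral group $D_N = \langle r, s \mid r^N = s^2 = 1,\ s r s = r^{-1}\rangle$, represented as pairs of coefficient vectors:

# Illustrative sketch of arithmetic in the group ring Z_q[D_N] of the dihedral
# group D_N = <r, s | r^N = s^2 = 1, s*r*s = r^{-1}>.  An element is stored as
# two length-N coefficient vectors (a0, a1) meaning sum a0[i]*r^i + sum a1[i]*r^i*s.
# Parameters are toy values, not the scheme's actual parameter set.
import random

N, Q = 8, 257  # toy dihedral parameter and modulus (hypothetical)

def gr_mul(a, b):
    a0, a1 = a
    b0, b1 = b
    c0 = [0] * N
    c1 = [0] * N
    for i in range(N):
        for j in range(N):
            # r^i * r^j = r^(i+j)         and  (r^i s)(r^j s) = r^(i-j)
            c0[(i + j) % N] = (c0[(i + j) % N] + a0[i] * b0[j]) % Q
            c0[(i - j) % N] = (c0[(i - j) % N] + a1[i] * b1[j]) % Q
            # r^i * (r^j s) = r^(i+j) s   and  (r^i s) * r^j = r^(i-j) s
            c1[(i + j) % N] = (c1[(i + j) % N] + a0[i] * b1[j]) % Q
            c1[(i - j) % N] = (c1[(i - j) % N] + a1[i] * b0[j]) % Q
    return c0, c1

def random_element():
    return ([random.randrange(Q) for _ in range(N)],
            [random.randrange(Q) for _ in range(N)])

x, y, z = random_element(), random_element(), random_element()
# The multiplication is associative but, in general, NOT commutative.
assert gr_mul(gr_mul(x, y), z) == gr_mul(x, gr_mul(y, z))
print("x*y == y*x ?", gr_mul(x, y) == gr_mul(y, x))  # usually False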
Taechan Kim
ePrint Report
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy (Crypto 2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans (Eurocrypt 2015).

Recently, Ashur, Hazay, and Satish (eprint 2024/389) proposed a scheme that requires $4/3\kappa + O(1)$ bits for garbling AND gates. Precisely, they extended the idea of \emph{slicing} introduced by Rosulek and Roy to garble 3-input gates of the form $g(u,v,w) := u(v+w)$. By setting $w = 0$, this can be used to garble AND gates with improved communication cost.

However, in this paper, we observe that the scheme proposed by Ashur, Hazay, and Satish leaks information on the permute bits, thereby allowing the evaluator to learn information about the private inputs. To be precise, we show that in their garbling scheme, the evaluator can compute the bits $\alpha$ and $\beta + \gamma$, where $\alpha$, $\beta$, and $\gamma$ are the private permute bits of the input labels $A$, $B$, and $C$, respectively.

24 May 2024

IBM Research Europe, Zurich
Job Posting
The Cryptography group at IBM Research in Zurich is looking to hire a Ph.D. student to work on constructions and applications of lattice-based zero-knowledge proofs. The group is one of the world leaders in quantum-safe cryptography research and has significantly contributed to all three lattice-based NIST quantum-safe standards.

Our current research emphasis is on practical zero-knowledge proofs based on the same foundations and their application to privacy-based cryptography, where the results look very promising. A motivated researcher should find this to be a very exciting and fast-moving environment to work in.

The ideal candidate is someone who has strong industrial development experience (preferably, but not necessarily, working on some aspect of zero-knowledge proofs or lattice cryptography) and would like to get into the research field by gaining an in-depth understanding of lattice-based cryptography and applying it to designing and deploying practical privacy-based protocols. Strong mathematical foundations and some experience with cryptography are desirable.

Zurich is consistently ranked as one of the top cities for living standards and the immediate proximity of lakes and mountains to the lab allows for the pursuit of numerous hobbies.

Closing date for applications:

Contact: Please send a CV and a short description about your interests to Vadim Lyubashevsky (vad@zurich.ibm.com) and Gregor Seiler (grs@zurich.ibm.com)

University of New South Wales
Job Posting
Remote and embedded devices are the lynchpin of modern networks. Satellites, aircraft, remote sensors, and drones all require numerous embedded devices to function. A key part of ensuring these devices remain ready to carry out operations is to ensure their memory has not been corrupted by an adversary. In this project we will explore methods for securing remote devices using early-generation quantum computers. These have the ability to work with one or two qubits at a time and operate with very limited quantum memory, but they still provide access to valuable quantum effects which can be used for security. The successful student will have an interest in both cyber-security and quantum computing, with a willingness to explore the mathematics needed to exploit quantum algorithms. The student needs to be a domestic Australian, but could be based in either Sydney or Canberra.

Closing date for applications:

Contact: Jesse Laeuchli (j.laeuchli at unsw.edu.au)

More information: https://www.unsw.edu.au/research/hdr/our-projects/quantum-methods-for-securing-remote-devices

Chunghun Baek, Taechan Kim
ePrint Report
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy (Crypto 2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans (Eurocrypt 2015). Whether their construction is optimal in a more inclusive model than the linear garbling model still remains open.

This paper begins by providing a comprehensive model for a large class of practical garbling schemes and proves the lower bound for the size of the garbled AND gates in our model. We show that garbled AND gates require at least $1.5\kappa$ bits in our new model with the free-XOR setting.

It is remarkable to see that the construction by Rosulek and Roy is already optimal despite the fact that our model possibly captures any potential extension of their construction.
Nicolas T. Courtois, Frédéric Amiel, Alexandre Bonnard de Fonvillars
ePrint Report
In this paper we study the S-box known as $\chi$, initially proposed by Daemen in 1995 and very widely used ever since in Keccak, Ascon, and many others. This type of cipher is typically analyzed [in recent research] in terms of subspace trail attacks [TeDi19] and vector space invariants. An interesting question is then when different spaces are mapped to each other by translation by a constant. In this paper we relax this fundamental question and consider arbitrary sets of points and their translations. We generalize previous S-box partial linearization results on Keccak and Ascon from Eurocrypt 2017. We introduce new ways to linearize the Ascon S-box to the maximum possible extent. On this basis we show further remarkable properties and some surprising connections between [simultaneous] linear and [prominent] differential properties. We exhibit a new family of maximum size and optimal approximation properties with 11 points, beyond the maximum size of any set in the DDT table. We prove a theorem which guarantees that this type of property is stable under arbitrary input-side translations and holds for all quadratic permutations. All this will be placed in the context of previous work on classification of 5-bit quadratic permutations.
Björn Kriepke, Gohar Kyureghyan
ePrint Report
We consider the map $\chi:\mathbb{F}_2^n\to\mathbb{F}_2^n$ for $n$ odd given by $y=\chi(x)$ with $y_i=x_i+x_{i+2}(1+x_{i+1})$, where the indices are computed modulo $n$. We suggest a generalization of the map $\chi$ which we call generalized $\chi$-maps. We show that these maps form an Abelian group which is isomorphic to the group of units in $\mathbb{F}_2[X]/(X^{(n+1)/2})$. Using this isomorphism we easily obtain closed-form expressions for iterates of $\chi$ and explain their properties.
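A minimal sketch of the map as defined above, checking on a small odd $n$ that $\chi$ is indeed a permutation (variable names are ad hoc):

# The map chi on F_2^n for odd n, as defined above: y_i = x_i + x_{i+2}*(1 + x_{i+1}),
# indices mod n.  The small check below confirms that chi is a permutation for n = 5.
from itertools import product

def chi(x):
    n = len(x)
    return tuple(x[i] ^ (x[(i + 2) % n] & (1 ^ x[(i + 1) % n])) for i in range(n))

n = 5
images = {chi(x) for x in product((0, 1), repeat=n)}
print("chi is a permutation on F_2^%d:" % n, len(images) == 2**n)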
Yanyi Liu, Noam Mazor, Rafael Pass
ePrint Report
We present a simple alternative exposition of the recent result of Hirahara and Nanashima (STOC’24) showing that one-way functions exist if (1) every language in NP has a zero-knowledge proof/argument and (2) ZKA contains non-trivial languages. Our presentation does not rely on meta-complexity and we hope it may be useful for didactic purposes.
Joseph Jaeger, Akshaya Kumar, Igors Stepanovs
ePrint Report
We introduce a new cryptographic primitive called symmetric signcryption, which differs from traditional signcryption because the sender and recipient share a secret key. We prove that a natural composition of symmetric encryption and signatures achieves strong notions of security against attackers that can learn and control many keys. We then identify that the core encryption algorithm of the Keybase encrypted messaging protocol can be modeled as a symmetric signcryption scheme. We prove the security of this algorithm, though our proof requires assuming non-standard, brittle security properties of the underlying primitives.
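For context on the primitive's interface only, here is a naive sign-then-encrypt composition under a shared symmetric key; this is not the Keybase algorithm nor necessarily the composition analyzed in the paper, it relies on the third-party "cryptography" package, and all key handling is deliberately simplified:

# Purely illustrative sketch of a symmetric-signcryption interface: a naive
# sign-then-encrypt composition in which sender and recipient share a symmetric
# key, and the sender additionally authenticates with a signature key.  This is
# NOT the Keybase algorithm nor the exact composition analyzed in the paper.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def signcrypt(shared_key, signing_key, header, message):
    sig = signing_key.sign(header + message)       # bind header and message
    nonce = os.urandom(12)
    ct = AESGCM(shared_key).encrypt(nonce, sig + message, header)
    return nonce + ct

def unsigncrypt(shared_key, verify_key, header, blob):
    nonce, ct = blob[:12], blob[12:]
    pt = AESGCM(shared_key).decrypt(nonce, ct, header)
    sig, message = pt[:64], pt[64:]                # Ed25519 signatures are 64 bytes
    verify_key.verify(sig, header + message)       # raises on failure
    return message

shared_key = AESGCM.generate_key(bit_length=256)
sender_sk = Ed25519PrivateKey.generate()
blob = signcrypt(shared_key, sender_sk, b"chat-id-42", b"hello")
assert unsigncrypt(shared_key, sender_sk.public_key(), b"chat-id-42", blob) == b"hello"
print("round-trip ok")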
Rishab Goyal, Venkata Koppula, Mahesh Sreekumar Rajasree, Aman Verma
ePrint Report
Incompressible encryption (Dziembowski, Crypto'06; Guan, Wichs, Zhandry, Eurocrypt'22) protects against attackers that learn the entire decryption key but cannot store the full ciphertext. In incompressible encryption, the attacker must try to compress a ciphertext within a pre-specified memory bound $S$ before receiving the secret key.

In this work, we generalize the notion of incompressibility to functional encryption. In incompressible functional encryption, the adversary can corrupt non-distinguishing keys at any point, but receives the distinguishing keys only after compressing the ciphertext to within $S$ bits. An important efficiency measure for incompressible encryption is the ciphertext rate (i.e., $\mathsf{rate} = \frac{|m|}{|\mathsf{ct}|}$). We give many new results for incompressible functional encryption:

1. Incompressible attribute-based encryption for circuits from standard assumptions, with $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys.
2. Incompressible functional encryption for circuits from (non-incompressible) functional encryption, with (a) $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys, or (b) $\mathsf{ct}$-rate-$1$ and large secret keys.

Our results achieve optimal efficiency, as incompressible attribute-based/functional encryption with $\mathsf{ct}$-rate-$1$ as well as short secret keys has strong implausibility barriers. Moreover, our assumptions are minimal as incompressible attribute-based/functional encryption are strictly stronger than their non-incompressible counterparts.