International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

29 June 2023

Pierre Briaud, Pierre Loidreau
ePrint Report
In this work, we introduce a new attack on the Loidreau scheme [PQCrypto 2017] and its more recent variant LowMS. This attack is based on a constrained linear system for which we provide two solving approaches:

- The first is an enumeration algorithm inspired by combinatorial attacks on the Rank Decoding (RD) Problem. While the attack technique remains very simple, it yields the best known structural attack on the parameters of these two schemes.
- The second is to rewrite the system as a bilinear system over $\mathbb{F}_q$. Even if Gröbner basis techniques on this second system seem infeasible, we provide a detailed analysis of the first degree-fall polynomials that arise when applying such algorithms.
Estuardo Alpirez Bock, Chris Brzuska, Russell W. F. Lai
ePrint Report
Incompressibility is a popular security notion for white-box cryptography and captures that a large encryption program cannot be compressed without losing functionality. Fouque, Karpman, Kirchner and Minaud (FKKM) defined strong incompressibility, where a compressed program should not even help to distinguish encryptions of two messages of equal length. Equivalently, the notion can be phrased as indistinguishability under chosen-plaintext attacks and key-leakage (LK-IND-CPA), where the leakage rate is high.

In this paper, we show that LK-IND-CPA security with superlogarithmic-length leakage, and thus strong incompressibility, cannot be proven under standard (i.e. single-stage) assumptions, if the encryption scheme is key-fixing, i.e. a polynomial number of message-ciphertext pairs uniquely determine the key with high probability.

Our impossibility result refutes a claim by FKKM that their big-key generation mechanism achieves strong incompressibility when combined with any PRG or any conventional encryption scheme, since the claim is not true for encryption schemes which are key-fixing (or for PRGs which are injective). In particular, we prove that the cipher block chaining (CBC) block cipher mode is key-fixing when modelling the cipher as a truly random permutation for each key. Subsequent to and inspired by our work, FKKM prove that their original big-key generation mechanism can be combined with a random oracle into an LK-IND-CPA-secure encryption scheme, circumventing the impossibility result by the use of an idealised model.
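
As a toy illustration of the key-fixing property (all parameters here are hypothetical: one-byte blocks, a 4096-key space, and the block cipher idealised as a seeded random permutation per key, in the spirit of the ideal-cipher modelling above), the following sketch shows how a few plaintext/ciphertext CBC pairs pin down the key by exhaustive search:

```python
import random

def perm_for_key(key):
    # Model the block cipher as a deterministic "random" permutation per key.
    rng = random.Random(key)
    p = list(range(256))
    rng.shuffle(p)
    return p

def cbc_encrypt(key, iv, msg):
    # Classic CBC chaining over one-byte toy blocks: c_i = E_k(m_i XOR c_{i-1}).
    p = perm_for_key(key)
    prev, ct = iv, []
    for m in msg:
        c = p[m ^ prev]
        ct.append(c)
        prev = c
    return ct

iv, msg = 7, [1, 2, 3, 4, 5, 6, 7, 8]
real_key = 1234
ct = cbc_encrypt(real_key, iv, msg)

# "Key-fixing": a handful of message-ciphertext pairs uniquely determine the
# key, so exhaustive search over the (toy) key space leaves one candidate.
candidates = [k for k in range(4096) if cbc_encrypt(k, iv, msg) == ct]
```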

Along the way, our work also helps clarify the relations among incompressible white-box cryptography, big-key symmetric encryption, and general leakage-resilient cryptography, as well as their limitations.
Vipul Goyal, Akshayaram Srinivasan, Mingyuan Wang
ePrint Report
Consider the standard setting of two-party computation where the sender has a secret function $f$ and the receiver has a secret input $x$ and the output $f(x)$ is delivered to the receiver at the end of the protocol. Let us consider the unidirectional message model where only one party speaks in each round. In this setting, Katz and Ostrovsky (Crypto 2004) showed that at least four rounds of interaction between the parties are needed in the plain model (i.e., no trusted setup) if the simulator uses the adversary in a black-box way (a.k.a. black-box simulation). Suppose the sender and the receiver would like to run multiple sequential iterations of the secure computation protocol on possibly different inputs. For each of these iterations, do the parties need to start the protocol from scratch and exchange four messages?

In this work, we explore the possibility of \textit{amortizing} the round complexity or in other words, \textit{reusing} a certain number of rounds of the secure computation protocol in the plain model. We obtain the following results.

1. Under standard cryptographic assumptions, we construct a four-round two-party computation protocol where (i) the first three rounds of the protocol could be reused an unbounded number of times if the receiver input remains the same and only the sender input changes, and (ii) the first two rounds of the protocol could be reused an unbounded number of times if the receiver input needs to change as well. In other words, the sender sends a single additional message if only its input changes, and in the other case, we need one message each from the receiver and the sender. The number of additional messages needed in each of the above two modes is optimal and, additionally, our protocol allows arbitrary interleaving of these two modes.

2. We also extend these results to the multiparty setting (in the simultaneous message exchange model) and give round-optimal protocols such that (i) the first two rounds could be reused an unbounded number of times if the inputs of the parties need to change and (ii) the first three rounds could be reused an unbounded number of times if the inputs remain the same but the functionality to be computed changes. As in the two-party setting, we allow arbitrary interleaving of the above two modes of operation.
Yuting Zuo, Li Xu, Yuexin Zhang, Chenbin Zhao, Zhaozhe Kang
ePrint Report
Vehicular Social Networks (VSNs) rely on data shared by users to provide convenient services. Data is outsourced to the cloud server and the distributed roadside units in VSNs. However, roadside units have limited resources, so the data sharing process is inefficient and vulnerable to security threats such as illegal access, tampering attacks, and collusion attacks. In this article, to overcome these security shortcomings, we define a chain-tolerance semi-trusted model to describe the credibility of a distributed group based on the anti-tampering feature of blockchain. We further propose a Blockchain-based Lightweight Access Control scheme for VSNs that resists tampering and collusion attacks, called BLAC. To overcome the efficiency shortcomings, we design a ciphertext piece storage algorithm and a recovery algorithm to achieve lightweight storage cost. In the decryption operation, we separate out a pre-decryption algorithm based on outsourcing to achieve lightweight decryption computation cost on the user side. Finally, we present formal security analyses and simulation experiments for BLAC, and compare the experimental results with existing relevant schemes. The security analyses show that our scheme is secure, and the experimental results show that our scheme is lightweight and practical.
Willow Barkan-Vered, Franklin Harding, Jonathan Keller, Jiayu Xu
ePrint Report
ECVRF is a verifiable random function (VRF) scheme used in multiple cryptocurrency systems. It has recently been proven to satisfy the notion of non-malleability which is useful in applications to blockchains (Peikert and Xu, CT-RSA 2023); however, the existing proof uses the rewinding technique and has a quadratic security loss. In this work, we re-analyze the non-malleability of ECVRF in the algebraic group model (AGM) and give a tight proof. We also compare our proof with the unforgeability proof for the Schnorr signature scheme in the AGM (Fuchsbauer, Plouviez and Seurin, EUROCRYPT 2020).
Ran Cohen, Pouyan Forghani, Juan Garay, Rutvik Patel, Vassilis Zikas
ePrint Report
It is well known that without randomization, Byzantine agreement (BA) requires a linear number of rounds in the synchronous setting, while it is flat-out impossible in the asynchronous setting. The primitive that allows bypassing this limitation is known as an oblivious common coin (OCC). It allows parties to agree on a random coin with constant probability, where agreement is oblivious, i.e., players are not aware of whether or not agreement has been achieved.

The starting point of our work is the observation that no known protocol exists for information-theoretic multi-valued OCC---i.e., OCC where the coin might take a value from a domain of cardinality larger than 2---with optimal resiliency in the asynchronous (with eventual message delivery) setting. This apparent hole in the literature is particularly problematic, as multi-valued OCC is implicitly or explicitly used in several constructions. (In fact, it is often falsely attributed to the asynchronous BA result by Canetti and Rabin [STOC ’93], which, however, only achieves binary OCC and does not translate to a multi-valued OCC protocol.)

In this paper, we present the first information-theoretic multi-valued OCC protocol in the asynchronous setting with optimal resiliency, i.e., tolerating $t < n/3$ corrupted parties.

We then turn to the problem of round-preserving parallel composition of asynchronous BA. A protocol for this task was proposed by Ben-Or and El-Yaniv [Distributed Computing ’03]. Their construction, however, is flawed in several ways: for starters, it relies on multi-valued OCC instantiated by Canetti and Rabin's result (which, as mentioned above, only provides binary OCC). This shortcoming can be repaired by plugging in our above multi-valued OCC construction. However, as we show, even with this fix it remains unclear whether the protocol of Ben-Or and El-Yaniv achieves its goal of expected-constant-round parallel asynchronous BA, as the proof is incorrect. Thus, as a second contribution, we provide a simpler, more modular protocol for the above task.

Finally, and as a contribution of independent interest, we provide proofs in Canetti's Universal Composability framework; this makes our work the first one offering composability guarantees, which are important as BA is a core building block of secure multi-party computation protocols.
Shuaishuai Li, Cong Zhang, Dongdai Lin
ePrint Report
The relationship between oblivious transfer (OT) and public-key encryption (PKE) has been studied by Gertner et al. (FOCS 2000). They showed that OT can be constructed from special types of PKE, i.e., PKE with oblivious sampleability of public keys or ciphertexts. In this work, we give new black-box constructions of OT from PKE without any oblivious sampleability. Instead, we require that the PKE scheme is rerandomizable, meaning that one can use the public key to rerandomize a ciphertext into a fresh ciphertext. We give two different OT protocols with different efficiency features based on rerandomizable PKE. For $1$-out-of-$n$ OT, in our first OT protocol, the sender has sublinear (in $n$) cost, and in our second OT protocol, the cost of the receiver is independent of $n$. As a comparison, in the PKE-based OT protocols of Gertner et al., both the sender and receiver have linear cost.
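
The rerandomizability these constructions rely on can be illustrated with textbook ElGamal, where anyone holding only the public key can map a ciphertext to a fresh ciphertext of the same plaintext. The following is a toy sketch with illustrative parameters, not the paper's scheme or a secure instantiation:

```python
import random

p = 2**127 - 1   # Mersenne prime, toy modulus
g = 3

sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(c1, c2):
    # Uses only the public key: (g^r, m*pk^r) -> (g^(r+s), m*pk^(r+s)).
    s = random.randrange(2, p - 1)
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def dec(c1, c2):
    shared = pow(c1, sk, p)
    return c2 * pow(shared, p - 2, p) % p   # divide by c1^sk via Fermat inverse

m = 42
c1, c2 = enc(m)
d1, d2 = rerandomize(c1, c2)   # decrypts to the same plaintext
```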
Foteini Baldimtsi, Ioanna Karantaidou, Srinivasan Raghuraman
ePrint Report
A cryptographic accumulator is a succinct set commitment scheme with efficient (non-)membership proofs that typically supports updates (additions and deletions) on the accumulated set. When elements are added to or deleted from the set, an update message is issued. The collection of all the update messages essentially leaks the underlying accumulated set which in certain applications is not desirable.

In this work, we define oblivious accumulators, a set commitment with concise membership proofs that hides the elements and the set size from every entity: an outsider, a verifier, or other element holders. We formalize this notion of privacy via two properties: element hiding and add-delete indistinguishability. We also define almost-oblivious accumulators, which only achieve a weaker notion of privacy called add-delete unlinkability. Such accumulators hide the elements but not the set size. We consider the trapdoorless, decentralized setting where different users can add and delete elements from the accumulator and compute membership proofs.

We then give a generic construction of an oblivious accumulator based on key-value commitments (KVC). We also show a generic way to construct KVCs from an accumulator and a vector commitment scheme. Finally, we give lower bounds on the communication (size of update messages) required for oblivious accumulators and almost-oblivious accumulators.
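
To fix ideas about the base primitive, here is a minimal RSA accumulator sketch with toy parameters. Note the contrasts with the paper's setting: this sketch has a trapdoor (whoever picked the modulus knows its factorisation) and is not oblivious; the constructions above add the hiding properties on top of such set commitments:

```python
from math import prod

# Toy modulus; a real deployment uses a 2048-bit+ modulus of unknown factorisation.
N = 10007 * 10009
g = 2

# Set elements are encoded as distinct primes (a standard convention).
elements = [3, 5, 7, 11]

# The accumulator value: a single group element committing to the whole set.
A = pow(g, prod(elements), N)

def witness(x):
    # Membership witness for x: g raised to the product of all *other* elements.
    return pow(g, prod(e for e in elements if e != x), N)

def verify(x, w):
    # w^x recovers the accumulator exactly when x was in the accumulated set.
    return pow(w, x, N) == A
```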
Enrique Larraia, Owen Vaughan
ePrint Report
In this paper, we present a novel method for timestamping and data notarisation on a distributed ledger. The problem with on-chain hashes is that a cryptographic hash is a deterministic function, which allows the blockchain to be used as an oracle that confirms whether potentially leaked data is authentic (i.e., timestamped or notarised by the user). Instead, we suggest using on-chain Pedersen commitments and off-chain zero-knowledge proofs (ZKPs) for designated verifiers to prove the link between the data and the on-chain commitment.

Our technique maintains the privacy of the data and retains control of who can access it and when. This holds true even on a public blockchain, and even if the data is leaked by authorised parties. Indeed, an authorised data consumer (a designated verifier for the ZKP) who discloses the data publicly cannot convince anyone of the legitimacy of the data (in the sense that it is consistent with the information uploaded to the blockchain), because the ZKP is valid only for them.
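
The on-chain object is a Pedersen commitment, which can be sketched as follows. These are toy group parameters; in practice $h$ must be generated so that nobody knows $\log_g h$, and the designated-verifier ZKP layer is omitted here:

```python
import random

p, q = 2039, 1019   # toy safe-prime pair: p = 2q + 1, both prime
g, h = 4, 9         # squares mod p, hence generators of the order-q subgroup

def commit(m, r=None):
    # C = g^m * h^r: hiding (r masks m) and binding (opening to m' != m
    # would reveal log_g h).
    r = random.randrange(q) if r is None else r
    return pow(g, m, p) * pow(h, r, p) % p, r

def open_check(C, m, r):
    # Anyone can verify an opening (m, r) against the on-chain value C.
    return C == pow(g, m, p) * pow(h, r, p) % p

C, r = commit(123)   # 123 stands in for the committed data digest
```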

Our techniques can be used in scenarios where highly sensitive data (e.g. application logs) must be audited by specific third parties, or to provide on-demand data certification by notaries.

28 June 2023

Zurich, Switzerland, 27 May - 30 May 2024
Eurocrypt
Event date: 27 May to 30 May 2024

27 June 2023

Syed Zair Abbas, Mudassar Aslam
ePrint Report
With the advancement of technology, cloud computing continues to deliver solutions that automate and simplify complex computational tasks. Advantages such as no maintenance cost, accessibility, data backup, pay-per-use models, unlimited storage, and processing power encourage individuals and businesses to migrate their workloads to the cloud. Despite these advantages, the geolocation of data in the cloud environment is a major concern, as it affects performance and determines which government legislation applies to the data. Uncertainty about data geolocation can cause compliance concerns. In this work, we present a technique that allows users to restrict the geolocation of their data in the cloud environment. We use trusted computing mechanisms to remotely attest the host and its geolocation. In this model, the user uploads data whose decryption key is shared only with a third-party attestation server. The decryption key is sealed to the TPM of the host after successful attestation, guaranteeing the authorized geolocation and platform state.
Offir Friedman, Avichai Marmor, Dolev Mutzari, Yehonatan C. Scaly, Yuval Spiizer, Avishay Yanai
ePrint Report
In the threshold version of Paillier's encryption scheme, a set of parties hold shares of the secret decryption key. Whenever a ciphertext is to be decrypted, the parties send their decryption shares, which are then verified for correctness and combined into the plaintext. The scheme has been widely adopted in various applications, from secure voting to general-purpose MPC protocols. However, among the handful of proposals for a maliciously secure scheme, one must choose between an efficient implementation that relies on non-standard assumptions and an infeasible one that relies on widely accepted assumptions.

In this work, we present a new protocol that combines the benefits of both worlds. We start from the efficient scheme, which was proven secure under non-standard assumptions, and, for the first time, prove that it is secure under standard assumptions only. This is possible thanks to a novel reduction from the soundness of a zero-knowledge proof of equality of discrete logs to the factoring problem. Furthermore, our simple and efficient proof supports batching, and hence enables batched threshold Paillier decryption for the first time.
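
The zero-knowledge statement in question, equality of discrete logs, is classically proven with a Chaum-Pedersen-style protocol. The sketch below shows its Fiat-Shamir shape over a toy prime-order group; the paper's proof runs over a different (Paillier-related) group, and its soundness analysis is the novel part:

```python
import hashlib
import random

p, q = 2039, 1019   # toy safe-prime pair: p = 2q + 1
g, h = 4, 9         # two generators of the order-q subgroup

x = random.randrange(1, q)              # the common discrete log
u, v = pow(g, x, p), pow(h, x, p)       # claim: log_g(u) = log_h(v)

def chal(*vals):
    # Fiat-Shamir challenge derived from the transcript.
    data = ",".join(map(str, vals)).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

def prove(x):
    k = random.randrange(1, q)
    a1, a2 = pow(g, k, p), pow(h, k, p)   # commitments with the same nonce
    e = chal(u, v, a1, a2)
    return a1, a2, (k + e * x) % q        # single response for both relations

def verify(u, v, a1, a2, z):
    e = chal(u, v, a1, a2)
    return (pow(g, z, p) == a1 * pow(u, e, p) % p
            and pow(h, z, p) == a2 * pow(v, e, p) % p)

a1, a2, z = prove(x)
```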

Until now, verifying that a decryption share is correct was the bottleneck of threshold Paillier schemes, and prevented its implementation in practice (unless one is willing to rely on a trusted dealer). Our new proof and batching techniques shift that bottleneck back to the plaintext reconstruction, just like in the semi-honest setting, and render threshold Paillier practical for the first time, supporting large scale deployments.

We implemented our scheme and report our evaluation with up to 1000 parties, in the dishonest majority setting. For instance, over an EC2 C6i machine, we get a throughput of about 50 and 3.6 decryptions per second, when run over a network of 100 and 1000 parties, respectively.
Alain Couvreur, Ilaria Zappatore
ePrint Report
In this article, we discuss the decoding of Gabidulin and related codes from a cryptographic point of view, and we observe that these codes can be decoded solely from the knowledge of a generator matrix. We then extend and revisit Gibson and Overbeck attacks on the generalized GPT encryption scheme (instantiated with the Gabidulin code) for different ranks of the distortion matrix. We apply our attack to the case of an instantiation with twisted Gabidulin codes.
Carsten Baum, Lennart Braun, Cyprien Delpech de Saint Guilhem, Michael Klooß, Emmanuela Orsini, Lawrence Roy, Peter Scholl
ePrint Report
We present a new method for transforming zero-knowledge protocols in the designated verifier setting into public-coin protocols, which can be made non-interactive and publicly verifiable. Our transformation applies to a large class of ZK protocols based on oblivious transfer. In particular, we show that it can be applied to recent, fast protocols based on vector oblivious linear evaluation (VOLE), with a technique we call VOLE-in-the-head, upgrading these protocols to support public verifiability. Our resulting ZK protocols have linear proof size, and are simpler, smaller and faster than related approaches based on MPC-in-the-head.

To build VOLE-in-the-head while supporting both binary circuits and large finite fields, we develop several new technical tools. One of these is a new proof of security for the SoftSpokenOT protocol (Crypto 2022), which generalizes it to produce certain types of VOLE correlations over large fields. Secondly, we present a new ZK protocol that is tailored to take advantage of this form of VOLE, which leads to a publicly verifiable VOLE-in-the-head protocol with only 2x more communication than the best, designated-verifier VOLE-based protocols.
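
For reference, a VOLE correlation over a prime field has the following shape, generated here by a hypothetical trusted dealer (the actual protocols produce it interactively, e.g. from oblivious transfer):

```python
import random

P = 2**61 - 1   # toy prime field
n = 8           # correlation length

delta = random.randrange(P)                          # verifier's global key
u = [random.randrange(P) for _ in range(n)]          # prover's values
v = [random.randrange(P) for _ in range(n)]          # verifier's masks
w = [(ui * delta + vi) % P for ui, vi in zip(u, v)]  # prover's tags

# The defining relation: w = u*Delta + v componentwise. The prover holds
# (u, w), the verifier holds (Delta, v); neither side sees the other's half.
ok = all((wi - vi) % P == (ui * delta) % P for ui, vi, wi in zip(u, v, w))
```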

We analyze the soundness of our approach when made non-interactive using the Fiat-Shamir transform, using round-by-round soundness. As an application of the resulting NIZK, we present FAEST, a post-quantum signature scheme based on AES. FAEST is the first AES-based signature scheme to be smaller than SPHINCS+, with signature sizes between 5.6 kB and 6.6 kB at the 128-bit security level. Compared with the smallest version of SPHINCS+ (7.9 kB), FAEST verification is slower, but signing is between 8x and 40x faster.
Gustavo Banegas, Valerie Gilchrist, Anaëlle Le Dévéhat, Benjamin Smith
ePrint Report
Consider the problem of efficiently evaluating isogenies $\phi: \mathcal{E} \to \mathcal{E}/H$ of elliptic curves over a finite field $\mathbb{F}_q$, where the kernel $H = \langle G \rangle$ is a cyclic group of odd (prime) order: given $\mathcal{E}$, $G$, and a point (or several points) $P$ on $\mathcal{E}$, we want to compute $\phi(P)$. This problem is at the heart of efficient implementations of group-action- and isogeny-based post-quantum cryptosystems such as CSIDH. Algorithms based on Vélu's formulae give an efficient solution to this problem when the kernel generator $G$ is defined over $\mathbb{F}_q$. However, for general isogenies, $G$ is only defined over some extension $\mathbb{F}_{q^k}$, even though $\langle G \rangle$ as a whole (and thus $\phi$) is defined over the base field $\mathbb{F}_q$; and the performance of Vélu-style algorithms degrades rapidly as $k$ grows. In this article we revisit the isogeny-evaluation problem with a special focus on the case where $1 \le k \le 12$. We improve Vélu-style isogeny evaluation for many cases where $k = 1$ using special addition chains, and combine this with the action of Galois to give greater improvements when $k > 1$.
Asuka Wakasugi, Mitsuru Tada
ePrint Report
Code-based cryptosystems (CBCs) are candidates for post-quantum cryptosystems (PQCs). Their security is primarily based on the Syndrome Decoding Problem (SDP). In this paper, we focus on rank-metric CBCs, whose security relies on the rank SDP. The GRS (Gaborit-Ruatta-Schrek) algorithm is the current best decoding algorithm for the rank SDP. We propose a quantum version of the GRS algorithm. We then introduce an attack strategy using this quantum algorithm against the rank CBCs that remained in the 2nd round of NIST's PQC standardization project, and analyse the quantum security of those cryptosystems. Our attack method is particularly effective against RQC, so we give new RQC instances that are secure against it.
Zhengjun Cao, Lihua Liu
ePrint Report
We show that the key agreement scheme [Multim. Tools Appl. 80:799-829, 2021] is flawed. (1) The scheme is a hybrid that piles up various tools, such as public key encryption, signatures, symmetric key encryption, hash functions, cancelable templates from thumb fingerprints, and elliptic curve cryptography. These tools are used excessively, because key agreement is a simple cryptographic primitive in contrast to public key encryption. (2) The underlying trust assumptions are intricate. In particular, the required secure channel between the two parties is generally unavailable.

26 June 2023

Technical University of Denmark, greater Copenhagen area
Job Posting
We are looking for two motivated PhD students for projects on the provable post-quantum security of, e.g., the sponge construction and Fiat-Shamir signatures. The envisioned projects will use quantum techniques to tackle post-quantum security questions. The positions are at the Technical University of Denmark, where you will be part of a rapidly growing cryptography research environment.

Closing date for applications:

Contact: Christian Majenz (chmaj@dtu.dk)

More information: https://efzu.fa.em2.oraclecloud.com/hcmUI/CandidateExperience/da/sites/CX_1/requisitions/preview/1983/?lastSelectedFacet=LOCATIONS&selectedLocationsFacet=300000000228648

Paderborn University
Job Posting
The Institute of Computer Science – Department "Codes and Cryptography" – in the Faculty of Computer Science, Electrical Engineering and Mathematics invites applications for the following fixed-term position (100% of the regular working time), starting at the earliest opportunity:

Research Assistant (f/m/d) (salary is according to E13 TV-L)

The position is embedded in the project “Photonic Quantum Computing (PhoQC)” of the “Ministerium für Kultur und Wissenschaft” (Ministry of Culture and Science, MKW NRW) of the state of North Rhine-Westphalia.

It is a full-time qualification position to support a PhD project. The initial contract is for three years, limited to the duration of the PhD project. An extension to finish the PhD is possible in accordance with the rules of the WissZeitVG.

Your duties and responsibilities:

• Complexity theoretic analysis of Boson-Sampling-like experiments
• Development of quantum algorithms applying Boson Sampling to solve practical tasks
• Development and analysis of theoretical frameworks for universal quantum computing based on quantum photonics
• Teaching on the order of 4 teaching hours (SWS) per week

Hiring requirement:

• Master's degree in computer science, mathematics, or physics

It is expected for the successful candidate to have an established academic profile and previous experience in at least one of the following areas:

• Quantum algorithms
• Quantum complexity theory
• Theoretical quantum photonics

Closing date for applications:

Contact: Please send your application including a CV using the Ref. No. 5979 to: bloemer@upb.de. Information regarding the processing of your personal data can be found at: https://www.uni-paderborn.de/en/zv/personaldatenschutz.

More information: https://www.uni-paderborn.de/fileadmin/zv/4-4/stellenangebote/Kennziffer5979_-_Englisch.pdf

Shahla Atapoor, Karim Baghery, Daniele Cozzo, Robi Pedersen
ePrint Report
Non-Interactive Verifiable Secret Sharing (NI-VSS) is a technique for distributing a secret among a group of individuals in a verifiable manner, such that shareholders can verify the validity of their received shares and only a specific number of them can access the secret. VSS is a fundamental tool in cryptography and distributed computing.

In this paper, we present an efficient NI-VSS scheme using Zero-Knowledge (ZK) proofs on secret-shared data. While prior VSS schemes have implicitly used ZK proofs on secret-shared data, we specifically use the formal definition recently provided by Boneh et al. at CRYPTO 2019. Our proposed NI-VSS scheme uses a quantum random oracle and a quantum computationally hiding commitment scheme in a black-box manner, which ensures its ease of use, especially in post-quantum threshold protocols.

The practicality of the proposed NI-VSS is confirmed by our implementation results, establishing it as a viable choice for large-scale threshold protocols. Using the proposed NI-VSS scheme, a dealer can share a secret with 4096 parties in less than 2 seconds, and the shareholders can verify the validity of their shares in less than 2 milliseconds.

We demonstrate the potential of the new NI-VSS scheme by revisiting several threshold protocols and improving their efficiency. Specifically, we present two DKG protocols for CSIDH-based primitives that outperform the current state of the art. Furthermore, we show similar improvements in threshold signatures built from the Schnorr and CSI-FiSh signature schemes. We believe that, due to its remarkable efficiency and ease of use, the new NI-VSS scheme can be a valuable tool for a wide range of threshold protocols.
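
The core sharing step of any such scheme is Shamir's threshold sharing over a prime field. The following minimal sketch (toy parameters, without the verifiability layer that VSS adds on top) shows sharing and reconstruction:

```python
import random

P = 2**127 - 1   # prime field
t, n = 3, 5      # degree-t polynomial: any t+1 of the n shares reconstruct

def share(secret):
    # Random degree-t polynomial f with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(31337)
recovered = reconstruct(shares[:t + 1])   # any t+1 shares suffice
```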