International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

10 November 2022

Arantxa Zapico, Ariel Gabizon, Dmitry Khovratovich, Mary Maller, Carla Ràfols
ePrint Report
We present Baloo, the first protocol for lookup tables where the prover work is linear in the number of lookups and independent of the size of the table. Baloo is built over the lookup arguments of Caulk and Caulk+, and the framework for linear relations of Ràfols and Zapico. Our protocol supports commit-and-prove expansions: the prover selects the subtable containing the elements used in the lookup (which is unknown to the verifier), commits to it, and later proves relations with the committed elements. This feature makes Baloo especially suitable for proving input-output relations on hash functions, and in particular for instantiating the Ethereum Virtual Machine (EVM).
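The commit-and-prove flow above has a simple plaintext analogue. The sketch below (illustrative names only, none of Baloo's actual polynomial machinery) shows why prover work can be made linear in the number of lookups once the used subtable is isolated:

```python
# Plaintext analogue of the relation a lookup argument proves: every
# looked-up value appears in the committed subtable. Names are illustrative.

table = list(range(1_000_000))        # large public table
table_index = set(table)              # one-time preprocessing of the table

def select_subtable(lookups):
    """Prover-side: isolate the table entries the lookups actually use."""
    return sorted({v for v in lookups if v in table_index})

def check_lookups(subtable, lookups):
    """Per-lookup work is O(1): independent of the full table size."""
    members = set(subtable)
    return all(v in members for v in lookups)

lookups = [3, 17, 17, 42]
sub = select_subtable(lookups)        # sub == [3, 17, 42]
assert check_lookups(sub, lookups)
```

In the actual protocol the membership check is done in the exponent via polynomial commitments; the point of the sketch is only that, after preprocessing, the per-proof cost depends on the lookups and not on the table.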
Pranav Verma, Anish Mathuria, Sourish Dasgupta
ePrint Report
The existing works on privacy-preserving recommender systems based on homomorphic encryption do not filter the top-k most relevant items on the server side. As a result, the encrypted rating vector for all items must be sent to the user, who then retrieves the top-k items. This incurs significant computation and communication costs on the user side.

In this work, we employ private sorting at the server to reduce the user-side overheads. In private sorting, the values and corresponding positions of elements must remain private. We use an existing private sorting protocol by Foteini and Olga and tailor it to privacy-preserving top-k recommendation applications. We enhance it to use secure bit decomposition in the private comparison routine of the protocol. This leads to a notable reduction in the cost overheads of the users as well as the servers, especially at the keyserver, where the computation cost is reduced by half. The dataserver does not have to perform costly encryption and decryption operations; it performs computationally less expensive modular exponentiation operations instead. Since the private comparison operation contributes significantly to the overall cost overhead, making it efficient enhances the sorting protocol's performance. Our security analysis concludes that the proposed scheme is as secure as the original protocol.
Bhuvnesh Chaturvedi, Anirban Chakraborty, Ayantika Chatterjee, Debdeep Mukhopadhyay
ePrint Report
Fully Homomorphic Encryption (FHE) promises to secure our data on the untrusted cloud while allowing arbitrary computations. Present research shows that while there are possibilities of side-channel exploitation on the client side targeting the encryption or key-generation processes, the encrypted data on the cloud is secure against practical attacks. The current paper shows that it is possible for adversaries to inject perturbations into the ciphertexts stored in the cloud that result in decryption errors. Most importantly, we highlight that when the client reports such aberrations to the cloud service provider, the complete secret key can be extracted in a few attempts. Technically, this implies a break of the IND-CVA (Indistinguishability against Ciphertext Verification Attacks) security of the FHE schemes. The underlying core methodology of the attack is to exploit the dependence of the error in the ciphertexts on the timing of homomorphic computations. These correlations can lead to timing templates which, when used in conjunction with the induced decryption errors as reported by the client, can lead to an accurate estimation of the ciphertext errors. As the security of the underlying Learning with Errors (LWE) problem collapses with the leakage of the errors, the adversary is capable of ascertaining the secret keys. We demonstrate this attack on two well-known FHE libraries, namely FHEW and TFHE, where we need 7, 23 and 28 queries to the client for each error recovery respectively. We mounted full key recovery attacks on TFHE (without and with bootstrapping) and FHEW with key sizes 630 and 500 bits, using 1260, 703 and 1003 correct errors and 31948, 21273 and 9073 client queries respectively.
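To make the role of error leakage concrete, here is a toy numerical sketch (assumed toy parameters, unrelated to FHEW/TFHE code) of why knowing the ciphertext errors collapses LWE: each sample with a known error becomes an exact linear equation in the secret, solvable by ordinary linear algebra over Z_q.

```python
q = 97
s = [5, 11]                                   # secret key (attacker's target)
samples = [([3, 1], 2), ([2, 5], -1)]         # (a_i, leaked error e_i)

rows = []
for a, e in samples:
    b = (sum(x * y for x, y in zip(a, s)) + e) % q   # LWE sample b_i = <a_i, s> + e_i
    rows.append([x % q for x in a] + [(b - e) % q])  # known e_i gives an exact equation

def solve_mod_q(M, q):
    """Gaussian elimination over Z_q (q prime) on an augmented matrix."""
    n = len(M)
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % q)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)
        M[col] = [v * inv % q for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(v - f * w) % q for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

recovered = solve_mod_q([row[:] for row in rows], q)
assert recovered == s                         # full key recovery from leaked errors
```

The paper's contribution is precisely in estimating the e_i accurately (via timing templates and reported decryption failures); once the errors are known, the final step is as mechanical as above.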
Jack Cable, Andrés Fábrega, Sunoo Park, Michael A. Specter
ePrint Report
Voter registration is an essential part of almost any election process, and its security is a critical component of election security. Yet, despite notable compromises of voter registration systems, relatively little academic work has been devoted to securing voter registration systems, compared to research on other aspects of election security. In this paper, we present a systematic treatment of voter registration system security. We propose the first rigorous definitional framework for voter registration systems, describing the entities and core functionalities inherent in most voter registration systems, the jurisdictional policies that constrain specific implementations, and key security properties. Our definitions are configurable based on jurisdiction-specific parameters and policies. We provide a template for the structured presentation of detailed jurisdictional policy information, via a series of tables, and illustrate its application with detailed case studies of the voter registration systems of three U.S. states and Panama. Throughout our research, with the aim of realism and practical applicability, we consulted current and former U.S. election officials, civil society, and non-profits in the elections space. We conclude with a list of critical questions regarding voter registration security.
Pranav Jangir, Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, Bhavish Raj Gopal, Somya Sangal
ePrint Report
Consider the problem of securely identifying τ-heavy hitters, where given a set of client inputs, the goal is to identify those inputs which are held by at least τ clients in a privacy-preserving manner. Towards this, we design a novel system Vogue, whose key highlight in comparison to prior works is that it ensures complete privacy and does not leak any information other than the heavy hitters. In doing so, Vogue aims to achieve as efficient a solution as possible. To showcase these efficiency improvements, we benchmark our solution and observe that it requires around 14 minutes to compute the heavy hitters for τ = 900 on 256-bit inputs when considering 400K clients. This is in contrast to the state-of-the-art solution that requires over an hour for the same. In addition to the static input setting described above, Vogue also accounts for streaming inputs and provides a protocol that outperforms the state of the art therein. The efficiency improvements witnessed while computing heavy hitters in both the static and streaming input settings are attributed to our new secure stable compaction protocol, whose round complexity is independent of the size of the input array to be compacted.
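As a point of reference, the plaintext functionality that Vogue computes under MPC can be sketched in a few lines; this is only the ideal output (heavy hitters and nothing else), not the secure protocol:

```python
# Ideal functionality of tau-heavy-hitter identification: output exactly the
# inputs held by at least tau clients, and no other information.
from collections import Counter

def heavy_hitters(client_inputs, tau):
    counts = Counter(client_inputs)
    return sorted(x for x, c in counts.items() if c >= tau)

inputs = ["a", "b", "a", "c", "a", "b"]
assert heavy_hitters(inputs, 2) == ["a", "b"]   # "c" is held by only 1 client
```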
Shany Ben-David, Yael Tauman Kalai, Omer Paneth
ePrint Report
A computational PIR scheme allows a client to privately query a database hosted on a single server without downloading the entire database. We introduce the notion of verifiable PIR (vPIR) where the server can convince the client that the database satisfies certain properties without additional rounds and while keeping the communication sub-linear. For example, the server can prove that the number of rows in the database that satisfy a predicate $P$ is exactly $n$.

We define security by modeling vPIR as an ideal functionality and following the real-ideal paradigm. Starting from a standard PIR scheme, we construct a vPIR scheme for any database property that can be verified by a machine that reads the database once and maintains a bounded size state between rows. We also construct vPIR with public verification based on LWE or on DLIN. The main technical hurdle is to demonstrate a simulator that extracts a long input from an adversary that sends a single short message.

Our vPIR constructions are based on the notion of batch arguments for NP. As a contribution of independent interest, we show that batch arguments are equivalent to quasi-arguments---a relaxation of SNARKs which is known to imply succinct arguments for various sub-classes of NP.
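The class of database properties the construction handles, those verifiable by a machine that reads the database once and keeps bounded state between rows, can be sketched as follows (illustrative Python with a hypothetical predicate):

```python
# A read-once, bounded-state check of the example property from the abstract:
# "the number of rows satisfying predicate P is exactly n".

def verify_streaming(rows, P, n):
    state = 0                        # bounded state carried between rows
    for row in rows:                 # single pass over the database
        state += 1 if P(row) else 0
    return state == n

db = [{"age": 17}, {"age": 42}, {"age": 30}]
P = lambda row: row["age"] >= 18     # hypothetical predicate
assert verify_streaming(db, P, 2)    # exactly 2 rows satisfy P
assert not verify_streaming(db, P, 3)
```

Any property computable in this streaming style (counts, running sums, bounded automata) fits the framework; the cryptographic work is in proving such a statement without another round and with sub-linear communication.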
Tung Chou, Ruben Niederhagen, Edoardo Persichetti, Tovohery Hajatiana Randrianarisoa, Krijn Reijnders, Simona Samardjiska, Monika Trimoska
ePrint Report
In this paper, we show how to use the Matrix Code Equivalence (MCE) problem as a new basis to construct signature schemes. This extends previous work on using isomorphism problems for signature schemes, a trend that has recently emerged in post-quantum cryptography. Our new formulation leverages a more general problem and allows for smaller data sizes, achieving competitive performance and great flexibility. Using MCE, we construct a zero-knowledge protocol which we turn into a signature scheme named Matrix Equivalence Digital Signature (MEDS). We provide an initial choice of parameters for MEDS, tailored to NIST's Category 1 security level, yielding public keys as small as 2.7 kB and signatures ranging from 18.8 kB to just around 10 kB, along with a reference implementation in C.
Akinori Hosoyamada
ePrint Report
This paper shows how to achieve quantum speed-up for multidimensional (zero correlation) linear and integral distinguishers. To understand the post-quantum security of symmetric-key cryptosystems, it is important to study how much quantum speed-up we can obtain for classical cryptanalytic techniques such as differential, linear, and integral cryptanalysis. A previous work by Kaplan et al. already showed a quantum quadratic speed-up for one-dimensional linear distinguishers, but it is unclear how to extend their technique to multidimensional linear distinguishers. To remedy this, we investigate how to speed up multidimensional linear distinguishers in the quantum setting. Firstly, we observe that there is a close relationship between the subroutine of Simon's algorithm and linear correlations via the Fourier transform, and that a slightly modified version of Simon's subroutine can be used to speed up multidimensional linear distinguishers. The modified Simon's subroutine also leads to speed-ups for multidimensional zero correlation and some integral distinguishers. Surprisingly, our technique achieves more-than-quadratic speed-ups for some special types of integral distinguishers. This is because the modified Simon's subroutine can exploit the existence of multiple multidimensional zero correlation linear approximations. Our attacks are the first examples achieving such speed-ups on classical cryptanalytic techniques without relying on any algebraic structures such as hidden periods or shifts. The speed-ups for multidimensional (zero correlation) linear distinguishers are at most quadratic, and all of our attacks require quantum superposition queries.
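For concreteness, the linear correlations in question can be computed classically by direct summation over all inputs; the quantum subroutine speeds up the estimation of exactly this Fourier-type quantity. A toy sketch with an arbitrary 3-bit S-box (not one from the paper):

```python
# Linear correlation of masks (a, b) for an S-box S:
# cor(a, b) = 2^{-n} * sum_x (-1)^{a.x XOR b.S(x)}.

def dot(a, x):
    return bin(a & x).count("1") & 1           # GF(2) inner product a.x

def correlation(sbox, a, b, n):
    total = sum((-1) ** (dot(a, x) ^ dot(b, sbox[x])) for x in range(2 ** n))
    return total / 2 ** n

n = 3
identity = list(range(2 ** n))
assert correlation(identity, 5, 5, n) == 1.0   # a = b on the identity map
sbox = [3, 6, 0, 5, 7, 1, 4, 2]                # arbitrary 3-bit permutation
c = correlation(sbox, 1, 4, n)
assert -1.0 <= c <= 1.0
```

Classically this costs O(2^n) per mask pair; a multidimensional distinguisher aggregates many such correlations, which is where the modified Simon subroutine provides its advantage.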
Kunming Jiang, Devora Chait-Roth, Zachary DeStefano, Michael Walfish, Thomas Wies
ePrint Report
There has been intense interest over the last decade in implementations of _probabilistic proofs_ (IPs, SNARKs, PCPs, and so on): protocols in which an untrusted party proves to a verifier that a given computation was executed properly, possibly in zero knowledge. Nevertheless, implementations still do not scale beyond small computations. A central source of overhead is the _front-end_: translating from the abstract computation to a set of equivalent arithmetic constraints. This paper introduces a general-purpose framework, called Distiller, in which a user translates to constraints not the original computation but an abstracted _specification_ of it. Distiller is the first in this area to perform such transformations in a way that is provably safe. Furthermore, by taking the idea of "encode a check in the constraints" to its literal logical extreme, Distiller exposes many new opportunities for constraint reduction, resulting in cost reductions for benchmark computations of 1.3–50$\times$, and in some cases, better asymptotics.
Dimitrios Sikeridis, Sean Huntley, David Ott, Michael Devetsikiotis
ePrint Report
Quantum computing advances threaten the security of today's public key infrastructure, and have led to the pending standardization of alternative, quantum-resistant key encapsulation and digital signature cryptography schemes. Unfortunately, authentication algorithms based on the new post-quantum (PQ) cryptography create significant performance bottlenecks for TLS due to larger certificate chains which introduce additional packets and round-trips. The TLS handshake slowdown will be unacceptable to many applications, and detrimental to the broader adoption of quantum safe cryptography standards. In this paper, we propose a novel framework for Intermediate Certificate Authority (ICA) certificate suppression in TLS that reduces the authentication message size and prevents excessive round-trip delays. Our approach utilizes an approximate membership query (AMQ) data structure (probabilistic filter) to advertise known ICA certs to remote TLS endpoints so that unnecessary ICA certificates are omitted from the TLS handshake exchange. We showcase the extent of the PQ authentication overhead challenge in TLS, and evaluate the feasibility of AMQ filters for ICA suppression in terms of space and computational overhead. Finally, we experimentally evaluate the potential gains from our approach and showcase a $70\%$ reduction in exchanged ICA cert data that translates to 15-50 MB of savings in PQ TLS for certain Web-based application scenarios.
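The AMQ idea can be sketched with a minimal Bloom filter (assumed parameters; not the paper's exact construction or its encoding of ICA certificates):

```python
# Toy Bloom filter: the client advertises ICA certs it already caches; the
# server omits from the handshake any cert the filter claims the client knows.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(item + bytes([i])).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def contains(self, item):   # no false negatives; rare false positives
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

known_icas = [b"ICA: Example Root CA 1", b"ICA: Example Root CA 2"]
f = BloomFilter()
for cert in known_icas:
    f.add(cert)
# Server side: suppress certs the filter says the client already has.
chain = known_icas + [b"ICA: Unadvertised CA"]
to_send = [c for c in chain if not f.contains(c)]
assert known_icas[0] not in to_send            # advertised cert is suppressed
```

Because Bloom filters have no false negatives, a known cert is always suppressed; a false positive only means an occasionally missing ICA cert, which TLS clients can recover from by fetching it, a trade-off the paper quantifies.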
Sunpreet S. Arora, Saikrishna Badrinarayanan, Srinivasan Raghuraman, Maliheh Shirvanian, Kim Wagner, Gaven Watson
ePrint Report
Passwords are difficult to remember, easy to guess and prone to hacking. While there have been several attempts to solve the aforementioned problems commonly associated with passwords, one of the most successful ones to date has been by the Fast Identity Online (FIDO) alliance. FIDO introduced a series of protocols that combine local authentication on a user device with remote validation on relying party servers using public-key cryptography. One of the fundamental problems of FIDO protocols is complete reliance on a single user device for authentication. More specifically, the private key used for signing relying party challenges can only be stored on a single device. Each FIDO authenticator key is linked uniquely to an account with a relying party service. As a result, a lost or stolen user device necessitates the creation of a new user account, using a new device, with each (previously enrolled) relying party service. To overcome this limitation, we introduce a dynamic managerless group signature scheme that organizes authenticators into groups. Each authenticator in a group has a unique private key that links it to an account with a relying party and can sign relying party challenges. The relying party server has a group verification key that can validate challenges signed using the private key of any authenticator in the group. Our approach provides additional redundancy and usability to the FIDO protocol whilst still achieving the security properties expected in the FIDO setting, such as unforgeability and unlinkability.
Christos Stefo, Zhuolun Xiang, Lefteris Kokoris-Kogias
ePrint Report
Scaling blockchain protocols to perform on par with the expected needs of Web3.0 has proven to be a challenging task, with almost a decade of research behind it. At the forefront of the current solutions is the idea of separating the execution of the updates encoded in a block from the ordering of blocks. To achieve this, a new class of protocols called rollups has emerged. Rollups take as input a total ordering of valid and invalid transactions and produce as output a new valid state transition.

If we study rollups from a distributed computing perspective, we uncover that rollups take as input the output of a Byzantine Atomic Broadcast (BAB) protocol and convert it to a State Machine Replication (SMR) protocol. BAB and SMR, however, are considered equivalent as far as distributed computing is concerned and a solution to one can easily be retrofitted to solve the other simply by adding/removing an execution step before the validation of the input.

This ``easy'' step of retrofitting an atomic broadcast solution to implement an SMR has, however, been overlooked in practice. In this paper, we formalize the problem and show that after BAB is solved, traditional impossibility results for consensus no longer apply to an SMR. Leveraging this, we propose a distributed execution protocol that allows reduced execution and storage cost per executor ($O(\frac{\log^2 n}{n})$) without relaxing the network assumptions of the underlying BAB protocol, while providing censorship resistance. Finally, we propose efficient non-interactive light client constructions that leverage our efficient execution protocols and do not require any synchrony assumptions or expensive ZK-proofs.
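The execution step that turns a BAB output into an SMR can be sketched as a deterministic state transition over the totally ordered transactions (a toy balance-transfer state machine, not the paper's protocol):

```python
# BAB fixes the order of transactions (valid and invalid alike); execution
# then applies them deterministically, so every honest executor that replays
# the same order reaches the same state.

def execute(ordered_txs, balances):
    for tx in ordered_txs:                       # order fixed by BAB
        src, dst, amt = tx
        if balances.get(src, 0) >= amt > 0:      # validity checked at execution
            balances[src] -= amt
            balances[dst] = balances.get(dst, 0) + amt
        # invalid transactions are ordered but have no effect on the state
    return balances

state = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 4), ("bob", "alice", 100)]   # second tx is invalid
assert execute(txs, state) == {"alice": 6, "bob": 4}
```

Determinism is what makes the retrofit sound: once the order is agreed, execution involves no further consensus, which is why the consensus impossibility results no longer bind.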
Giuseppe Persiano, Kevin Yeo
ePrint Report
In recent years, there has been significant work in studying data structures that provide privacy for the operations that are executed. These primitives aim to guarantee that observable access patterns to physical memory do not reveal substantial information about the queries and updates executed on the data structure. Multiple recent works, including Larsen and Nielsen [Crypto'18], Persiano and Yeo [Eurocrypt'19], Hubáček et al. [TCC'19] and Komargodski and Lin [Crypto'21], have shown that logarithmic overhead is required to support even basic RAM (array) operations for various privacy notions including obliviousness and differential privacy as well as different choices of sizes for RAM blocks $b$ and memory cells $\omega$.

We continue along this line of work and present the first logarithmic lower bounds for differentially private RAMs (DPRAMs) that apply regardless of the sizes of blocks $b$ and cells $\omega$. These are the first logarithmic lower bounds for DPRAMs when blocks are significantly smaller than cells, that is, when $b \ll \omega$. Furthermore, we present new logarithmic lower bounds for differentially private variants of classical data structure problems including sets, predecessor (successor) and disjoint sets (union-find) for which sub-logarithmic plaintext constructions are known. All our lower bounds extend to the multiple non-colluding servers setting.

We also address an unfortunate issue with this rich line of work: the lower bound techniques are difficult to use and require customization for each new result. To make the techniques more accessible, we generalize our proofs into a framework that reduces proving logarithmic lower bounds to showing that a specific problem satisfies two simple, minimal conditions. We show that our framework is easy to use, as all the lower bounds in our paper utilize it, and we hope it will spur more usage of these lower bound techniques.
Siwei Sun, Tianyu Liu, Zhi Guan, Yifei He, Jiwu Jing, Lei Hu, Zhenfeng Zhang, Hailun Yan
ePrint Report
We instantiate the hash-based post-quantum stateful signature scheme XMSS and its multi-tree version, described in RFC 8391 and NIST SP 800-208, with SM3, and report the results of a preliminary performance test.
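Hash-based schemes of the XMSS family bottom out in one-time signatures built from a hash function alone, which is what makes swapping in a different hash such as SM3 natural. The following Lamport one-time-signature sketch illustrates the principle; SHA-256 stands in for SM3 (which is not in the Python standard library), and this is not the RFC 8391 construction itself:

```python
# Minimal Lamport one-time signature: the key pair is 2*256 random strings
# and their hashes; signing reveals one preimage per message bit.
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen(n=256):
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(n)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def bits(msg, n=256):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(n)]

def sign(sk, msg):                     # one-time: never reuse sk
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"other", sig)
```

XMSS layers WOTS+ (a more compact one-time scheme) under Merkle trees to sign many messages with one public key; the statefulness comes from tracking which one-time key has been used.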

08 November 2022

Election
This is a reminder that the 2022 IACR Election is open. Last day to vote is Nov. 15th.

More details can be found here: https://www.iacr.org/elections/2022/candidates.php
Emiliano De Cristofaro, Paolo Gasti, Gene Tsudik
ePrint Report
In many everyday scenarios, sensitive information must be shared between parties without complete mutual trust. Private set operations are particularly useful to enable sharing information with privacy, as they allow two or more parties to jointly compute operations on their sets (e.g., intersection, union, etc.), such that only the minimum required amount of information is disclosed. In the last few years, the research community has proposed a number of secure and efficient techniques for Private Set Intersection (PSI). However, somewhat less explored is the problem of computing the magnitude, rather than the contents, of the intersection; we denote this problem as Private Set Intersection Cardinality (PSI-CA). This paper explores a few PSI-CA variations and constructs several protocols that are more efficient than the state of the art.
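The two ideal functionalities can be contrasted in plaintext (a sketch of the intended outputs only, not of any protocol):

```python
# PSI reveals the intersection's contents; PSI-CA reveals only its size.

def psi(set_a, set_b):
    return sorted(set(set_a) & set(set_b))

def psi_ca(set_a, set_b):
    return len(set(set_a) & set(set_b))

a = {"alice@x.com", "bob@x.com", "carol@x.com"}
b = {"bob@x.com", "carol@x.com", "dave@x.com"}
assert psi(a, b) == ["bob@x.com", "carol@x.com"]
assert psi_ca(a, b) == 2      # cardinality only; the common elements stay hidden
```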
Michele Battagliola, Riccardo Longo, Alessio Meneghetti
ePrint Report
Starting from links between coding theory and secret sharing, we develop an extensible and decentralized version of Shamir Secret Sharing that allows the addition of new users after the initial share distribution.

On top of it, we design a totally decentralized $(t,n)$-threshold Schnorr signature scheme that needs only $t$ users online during the key generation phase, while the others join later. Under standard assumptions, we prove our scheme secure against adaptive malicious adversaries. Furthermore, we show how our security notion can be strengthened when considering a rushing adversary. Using a classical game-based argument, we prove that if there is an adversary capable of forging the scheme with non-negligible probability, then we can build a forger for the centralized Schnorr scheme with non-negligible probability.
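The extensibility rests on a standard fact about Shamir sharing: any $t$ shares of a degree-$(t-1)$ polynomial determine its value everywhere, so they can jointly derive a share for a new user at a fresh evaluation point. A plaintext sketch (with the interpolation done in one place for brevity; the paper's scheme is decentralized and dealerless):

```python
# Shamir sharing over a prime field: shares are points on a random polynomial
# with f(0) = secret; a new user's share is just f evaluated at a fresh point.
P = 2 ** 127 - 1                               # prime field modulus

def lagrange_at(shares, x, p=P):
    """Evaluate the unique degree-(len(shares)-1) polynomial at x."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (x - xk) % p
                den = den * (xj - xk) % p
        total = (total + yj * num * pow(den, -1, p)) % p
    return total

secret, t = 1234567, 3
coeffs = [secret, 11, 22]                      # f(0) = secret, degree t-1
f = lambda x: sum(c * x ** i for i, c in enumerate(coeffs)) % P
shares = [(x, f(x)) for x in (1, 2, 3)]        # initial distribution

new_share = (4, lagrange_at(shares, 4))        # share for a late-joining user
assert new_share[1] == f(4)                    # consistent with the polynomial
assert lagrange_at([shares[0], shares[2], new_share], 0) == secret
```

In the decentralized scheme no single party sees the whole polynomial; the interpolation above is performed on shares of shares, but the algebra is the same.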
Kaisa Nyberg
ePrint Report
Various systematic modifications of vectorial Boolean functions have been used for finding new previously unknown classes of S-boxes with good or even optimal differential uniformity and nonlinearity. In this paper, a new general modification method is given that preserves the bijectivity property of the function in case the inverse of the function admits a linear structure. A previously known construction of such a modification based on bijective Gold functions in odd dimension is a special case of the new method.
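The figure of merit mentioned above, differential uniformity, is directly computable for small S-boxes (the example S-box is arbitrary, not one of the paper's constructions):

```python
# Differential uniformity of an n-bit S-box:
# max over a != 0 and all b of #{x : S(x ^ a) ^ S(x) = b}.
# Lower is better; a linear map achieves the worst possible value 2^n.

def differential_uniformity(S):
    n = len(S)
    best = 0
    for a in range(1, n):
        counts = {}
        for x in range(n):
            b = S[x ^ a] ^ S[x]
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best

identity = list(range(8))
assert differential_uniformity(identity) == 8   # linear map: worst possible
sbox = [3, 6, 0, 5, 7, 1, 4, 2]                 # arbitrary 3-bit permutation
assert 2 <= differential_uniformity(sbox) <= 8
```

Modification methods like the one in the paper tweak a function while controlling how this quantity (and nonlinearity) changes, and, here, while preserving bijectivity.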
Yingying Li, Qichun Wang
ePrint Report
In this paper, we show that it is inaccurate to apply the hypothesis of independent round keys to search for differential characteristics of a block cipher with a simple key schedule, and that the derived differential characteristics may therefore be invalid. We develop a SAT-based algorithm to verify the validity of differential characteristics. Furthermore, we take the key schedule into account and put forward an algorithm to directly find valid differential characteristics. All experiments are performed on Midori64, and we find some interesting results.
Simon Langowski, Sacha Servan-Schreiber, Srinivas Devadas
ePrint Report
Trellis is a mix-net based anonymous broadcast system with cryptographic security guarantees. Trellis can be used to anonymously publish documents or communicate with other users, all while assuming full network surveillance. In Trellis, users send messages through a set of servers in successive rounds. The servers mix and post the messages to a public bulletin board, hiding which users sent which messages.

Trellis hides all network metadata, remains robust to changing network conditions, guarantees availability to honest users, and scales with the number of mix servers. Trellis provides three to five orders of magnitude faster performance and better network robustness compared to Atom, the state-of-the-art anonymous broadcast system with a comparable threat model.

In achieving these guarantees, Trellis contributes: (1) a simpler theoretical mixing analysis for a routing mix network constructed with a fraction of malicious servers, (2) anonymous routing tokens for verifiable random paths, and (3) lightweight blame protocols built on top of onion routing to identify and eliminate malicious parties.

We implement and evaluate Trellis in a networked deployment. With 128 servers, Trellis achieves a throughput of 320 bits per second. Trellis’s throughput is only 100 to 1000× slower compared to Tor (which has 6,000 servers and 2 million daily users) and is potentially deployable at a smaller “enterprise” scale. Our implementation is open-source.
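The core mix-net pattern underlying Trellis can be sketched in a few lines (a toy cascade with none of Trellis's verifiability, routing tokens, or blame machinery):

```python
# Cascade mixing: each server in turn permutes the batch, so the final order
# is unlinkable to senders as long as at least one server's permutation
# remains secret. The seed here only makes the toy deterministic.
import random

def mix_cascade(messages, num_servers, seed=0):
    rng = random.Random(seed)
    batch = list(messages)
    for _ in range(num_servers):     # each hop applies its own shuffle
        rng.shuffle(batch)
    return batch

msgs = ["m1", "m2", "m3", "m4"]
out = mix_cascade(msgs, num_servers=3)
assert sorted(out) == sorted(msgs)   # every message delivered; order hidden
```

Trellis's contributions sit on top of this basic pattern: proving the paths are random (anonymous routing tokens), tolerating malicious servers (blame protocols), and keeping the number of rounds small enough to scale.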