## IACR News

Here you can see all recent updates to the IACR webpage.

There is currently a problem with the jobs channel, and new job listings are not appearing here. Please see the jobs page.

#### 05 November 2019

###### Ameirah al Abdouli, Emanuele Bellini, Florian Caullery, Marc Manzano, Victor Mateu

ePrint Report
Since its invention by McEliece in 1978, cryptography based on Error-Correcting Codes (ECC) has suffered from the reputation of not being suitable for constrained devices. Indeed, McEliece's scheme and its variants have large public keys and relatively long ciphertexts.
Recent works on these downsides have explored the possible use of ECC based on the rank metric instead of the Hamming metric.
These codes were introduced in the late 1980s to correct errors with repeating patterns, regardless of their Hamming weight.
Numerous proposals for the NIST Post-Quantum Cryptography (PQC) competition rely on these codes.
It has been proven that lattice-based cryptography and even hash-based signatures can run on lightweight devices,
but the question remains for code-based cryptography.
In this work, we demonstrate that this is actually possible for rank metric:
we have implemented the encryption operation of 5 schemes based on ECC in rank metric and made them run on an Arm Cortex-M0 processor, the smallest Arm processor available.
We describe the technical difficulties of porting rank-based cryptography to a resource-constrained device while maintaining decent performance and a suitable level of security against side-channel attacks, especially timing attacks.
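
As a rough illustration of the rank metric the abstract refers to (a textbook sketch, not code from the paper), a vector over GF(2^m) can be expanded into an m x n binary matrix, and its rank weight is the GF(2)-rank of that matrix. An error whose nonzero coordinates all repeat the same symbol has high Hamming weight but rank weight 1:

```python
# Toy illustration: rank weight vs. Hamming weight over GF(2^m).
def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as a list of row bitmasks."""
    rank = 0
    for _ in range(len(rows)):
        pivot = next((r for r in rows if r), 0)
        if not pivot:
            break
        rank += 1
        low = pivot & -pivot          # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows if r != pivot]
    return rank

def rank_weight(vec, m):
    """vec: list of n symbols of GF(2^m), each an int below 2^m."""
    # Row i of the m x n expansion collects bit i of every symbol.
    rows = [sum(((sym >> i) & 1) << j for j, sym in enumerate(vec))
            for i in range(m)]
    return gf2_rank(rows)

def hamming_weight(vec):
    return sum(1 for sym in vec if sym != 0)

# A repeating-pattern error: every nonzero coordinate is the same symbol.
e = [0b101, 0b101, 0, 0b101, 0b101, 0b101]
print(hamming_weight(e))  # 5: heavy in the Hamming metric
print(rank_weight(e, 3))  # 1: a single dimension in the rank metric
```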

###### Jens-Peter Kaps, William Diehl, Michael Tempelmeier, Farnoud Farahmand, Ekawat Homsirikamol, Kris Gaj

ePrint Report
In this paper, we propose a comprehensive framework for fair and efficient benchmarking of hardware implementations of lightweight cryptography (LWC). Our framework is centered around the hardware API (Application Programming Interface) for the implementations of lightweight authenticated ciphers, hash functions, and cores combining both functionalities. The major parts of our API include the minimum compliance criteria, interface, and communication protocol supported by the LWC core. The proposed API is intended to meet the requirements of all candidates submitted to the NIST Lightweight Cryptography standardization process, as well as all CAESAR candidates and current authenticated cipher and hash function standards. In order to speed up the development of hardware implementations compliant with this API, we are making available the LWC Development Package and the corresponding Implementer’s Guide. Equipped with these resources, hardware designers can focus on implementing only a core functionality of a given algorithm. The development package facilitates the communication with external modules, full verification of the LWC core using simulation, and generation of optimized results. The proposed API for lightweight cryptography is a superset of the CAESAR Hardware API, endorsed by the organizers of the CAESAR competition, which was successfully used in the development of over 50 implementations of Round 2 and Round 3 CAESAR candidates. The primary extensions include support for optional hash functionality and the development of cores resistant against side-channel attacks. Similarly, the LWC Development Package is a superset of the part of the CAESAR Development Package responsible for support of Use Case 1 (lightweight) CAESAR candidates.
The primary extensions include support for hash functionality, increasing the flexibility of the code shared among all candidates, as well as extended support for the detection of errors preventing the correct operation of cores during experimental testing. Overall, our framework supports (a) fair ranking of candidates in the NIST LWC standardization process from the point of view of their efficiency in hardware before and after the implementation of countermeasures against side-channel attacks, (b) the ability to perform benchmarking within the limited time devoted to Round 2 and any subsequent rounds of the NIST LWC standardization process, (c) compatibility among implementations of the same algorithm by different designers, and (d) fast deployment of the best algorithms in real-life applications.

###### Upendra Kapshikar, Ayan Mahalanobis

ePrint Report
McEliece and Niederreiter cryptosystems are robust and versatile cryptosystems. These cryptosystems work with any linear error-correcting code. They are popular these days because they can be quantum-secure. In this paper, we study the Niederreiter cryptosystem using quasi-cyclic codes. We prove that, if these quasi-cyclic codes satisfy certain conditions, the corresponding Niederreiter cryptosystem is resistant to attacks based on the hidden subgroup problem using quantum Fourier sampling. Our proof requires the classification of finite simple groups.
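
For readers unfamiliar with the scheme, the core of Niederreiter encryption is mapping a low-weight error vector to its syndrome under a parity-check matrix. The toy sketch below (invented for illustration; it uses the unscrambled [7,4] Hamming code, not the paper's quasi-cyclic construction, and a real scheme hides the code's structure) shows the idea for weight-1 errors, which that code decodes uniquely:

```python
# Toy Niederreiter mechanics: ciphertext = syndrome H * e^T over GF(2).
H = [  # parity-check matrix of the [7,4] Hamming code; column j encodes j+1
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(e):
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

def encrypt(pos):                 # the message selects a weight-1 error pattern
    e = [0] * 7
    e[pos] = 1
    return syndrome(e)

# Decryption here is a syndrome lookup over all correctable error patterns;
# with a scrambled H, only the holder of the code's structure can decode.
TABLE = {encrypt(p): p for p in range(7)}
def decrypt(s):
    return TABLE[s]

assert all(decrypt(encrypt(p)) == p for p in range(7))
```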

###### Martin R. Albrecht, Alex Davidson, Amit Deo, Nigel P. Smart

ePrint Report
Verifiable Oblivious Pseudorandom Functions (VOPRFs) are protocols that allow a client to learn verifiable pseudorandom function (PRF) evaluations on inputs of their choice. The PRF evaluations are computed by a server using its own secret key. The security of the protocol prevents both the server from learning anything about the client's input, and likewise the client from learning anything about the server's key. VOPRFs have many applications including password-based authentication, secret sharing, anonymous authentication and efficient private set intersection. In this work, we construct the first round-optimal (online) VOPRF protocol whose security rests on well-known lattice hardness assumptions. Our protocol requires constructions of non-interactive zero-knowledge arguments of knowledge (NIZKAoK). Using lattice-setting analogues of Stern-type proofs, we show that our VOPRF may be securely instantiated in the quantum random oracle model. We construct such arguments as extensions of prior work in the area of lattice-based zero-knowledge proof systems.
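
To make the client-server interaction concrete, here is a minimal sketch of the classic Diffie-Hellman-style OPRF blinding pattern ("2HashDH"). This is only an illustration of the OPRF flow: the paper's construction is lattice-based and verifiable (which this toy is not; verifiability would require, e.g., a proof of correct exponentiation), and the parameters below are insecurely small:

```python
# Toy OPRF flow: client learns H(x)^k without revealing x; server never sees H(x).
import hashlib

p = 2039                      # toy safe prime: p = 2*q + 1
q = (p - 1) // 2              # order of the quadratic-residue subgroup

def hash_to_group(x: bytes) -> int:
    # Squaring maps into the order-q subgroup of quadratic residues.
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)

def blind(x: bytes, r: int) -> int:
    # Client blinds its input so the server never sees H(x).
    return pow(hash_to_group(x), r, p)

def evaluate(blinded: int, k: int) -> int:
    # Server evaluates on the blinded element with its secret key k.
    return pow(blinded, k, p)

def unblind(evaluated: int, r: int) -> int:
    # Client strips the blinding: (H(x)^r)^k raised to 1/r gives H(x)^k.
    return pow(evaluated, pow(r, -1, q), p)   # pow(r, -1, q) needs Python 3.8+

k, r = 123, 77                # server key, client blinding factor
y = unblind(evaluate(blind(b"input", r), k), r)
assert y == pow(hash_to_group(b"input"), k, p)   # PRF output H(x)^k
```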

###### Jiwon Lee, Jaekyoung Choi, Jihye Kim, Hyunok Oh

ePrint Report
In pairing-based zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), there often exists a requirement for the proof system to be combined with encryption. As a typical example, a blockchain-based voting system requires the vote to be confidential (using encryption), while verifying voting validity (using zk-SNARKs). In this kind of combined application, a general solution is to extend the zk-SNARK circuit to include the encryption code. However, complex cryptographic operations in the encryption algorithm increase the circuit size, which leads to impractically large proving time and CRS size.

In this paper, we propose SAVER: SNARK-friendly, Additively-homomorphic, and Verifiable Encryption and decryption with Rerandomization, a novel approach that detaches the encryption from the SNARK circuit. The encryption in SAVER has many useful properties. It is SNARK-friendly: the encryption is conjoined with an existing pairing-based SNARK, in a way that lets the encryptor prove pre-defined properties while encrypting the message apart from the SNARK. It is additively homomorphic: the ciphertext inherits a homomorphic property from the ElGamal-based encryption. It is verifiable encryption: one can verify arbitrary properties of encrypted messages by connecting with the SNARK system. It provides verifiable decryption: anyone without the secret key can still verify that the decrypted message is indeed from the given ciphertext. It provides rerandomization: the proof and the ciphertext can be rerandomized as independent objects, so that even the encryptor (or prover) herself cannot identify the origin.

For the representative application, we define and construct a voting system scenario and explain the necessity of each property in SAVER. We prove the IND-CPA security of the encryption, along with the soundness of the encryption and decryption proofs. The experimental results show that the voting system designed from our SAVER yields 0.7 s proving/encryption (voting) time, and a 16 MB CRS for the SNARK regardless of the message size.
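
The additive homomorphism described above can be illustrated with "exponential" ElGamal, where the message sits in the exponent so that multiplying ciphertexts componentwise adds the underlying plaintexts. This is a hedged, toy-parameter sketch of that general mechanism in a vote-tallying style, not SAVER itself:

```python
# Exponential ElGamal: Enc(m1) * Enc(m2) componentwise decrypts to m1 + m2.
import random

p = 2039                      # toy safe prime, subgroup order q = 1019
q = (p - 1) // 2
g = 4                         # a square, so g generates the order-q subgroup

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(g, m, p) * pow(pk, r, p) % p)

def add(c1, c2):              # homomorphic addition of plaintexts
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec(c, bound=200):
    gm = c[1] * pow(c[0], -sk, p) % p   # negative exponent needs Python 3.8+
    # Recover small m by brute-force discrete log (fine for small tallies).
    for m in range(bound):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("message out of range")

votes = [enc(1), enc(0), enc(1), enc(1)]
tally = votes[0]
for v in votes[1:]:
    tally = add(tally, v)
print(dec(tally))             # → 3
```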


###### Hao Lin, Mingqiang Wang

ePrint Report
Ring signatures allow a person to generate a signature on behalf of an ad hoc group, hiding the true identity of the signer within the group. Repudiable ring signatures are a more strongly defined variant, in which every non-signer can prove to others that the signature was not generated by them.

This paper has two main areas of focus. First, we propose a new requirement for repudiable ring signatures, which is that no one can forge a valid repudiation for others. Second, as a breakthrough, we present the first logarithmic-size repudiable ring signatures which do not rely on a trusted setup or the random oracle model. Specifically, our scheme can be instantiated from standard assumptions and the size of signatures and repudiations only grows logarithmically in the number of ring members.

In addition, our scheme provides a new construction of logarithmic-size standard ring signatures.


###### Saqib A. Kakvi

ePrint Report
The RSA Probabilistic Signature Scheme (RSA-PSS) due to Bellare and Rogaway (EUROCRYPT 1996) is a widely deployed signature scheme. In particular it is a suggested replacement for the deterministic RSA Full Domain Hash (RSA-FDH) by Bellare and Rogaway (ACM CCS 1993) and PKCS#1 v1.5 (RFC 2313), as it can provide stronger security guarantees. It has since been shown by Kakvi and Kiltz (EUROCRYPT 2012, Journal of Cryptology 2018) that RSA-FDH provides similar security to that of RSA-PSS, also in the case when RSA-PSS is not randomized. Recently, Jager, Kakvi and May (ACM CCS 2018) showed that PKCS#1 v1.5 provides comparable security to both RSA-FDH and RSA-PSS. However, all these proofs consider each signature scheme in isolation, whereas in practice this is not the case. The most interesting case is that in TLS 1.3, PKCS#1 v1.5 signatures are still included for reasons of backwards compatibility, meaning both RSA-PSS and PKCS#1 v1.5 signatures are implemented. To save space, the key material is shared between the two schemes, which means the aforementioned security proofs no longer apply. We investigate the security of this joint usage of key material in the context of Sibling Signatures, which were introduced by Camenisch, Drijvers, and Dubovitskaya (ACM CCS 2017). It must be noted that we consider the standardised version of RSA-PSS (IEEE Standard P1363-2000), which deviates from the original scheme considered in all previous papers. We are able to show that this joint usage is indeed secure, and achieves a security level that closely matches that of PKCS#1 v1.5 signatures, and that both schemes can be safely used, if the output lengths of the hash functions are chosen appropriately.
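
For orientation, the hash-then-invert structure that these proofs analyze can be sketched as textbook RSA-FDH. This is a toy illustration (insecurely small modulus, and truncating SHA-256 modulo N is a simplification of a true full-domain hash), not any of the standardized schemes discussed above:

```python
# Textbook RSA-FDH: sign by inverting the RSA function on a hash of the message.
import hashlib

# Toy RSA key built from two well-known (insecurely small) primes.
p_, q_ = 1000000007, 998244353
N = p_ * q_
e = 65537
d = pow(e, -1, (p_ - 1) * (q_ - 1))   # pow(e, -1, phi) needs Python 3.8+

def fdh(msg: bytes) -> int:
    # Simplified "full domain" hash into Z_N.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    return pow(fdh(msg), d, N)

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, N) == fdh(msg)

s = sign(b"hello")
assert verify(b"hello", s)
assert not verify(b"hellp", s)
```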


###### Jean Claude Bajard, Julien Eynard, Paulo Martins, Leonel Sousa, Vincent Zucca

ePrint Report
In a recent work, Al Badawi et al. noticed a difference in the practical noise growth between the two RNS variants of BFV due to Bajard et al. and Halevi et al. Their experiments, based on the PALISADE and SEAL libraries, showed that the multiplicative depth reached in practice by the first variant was considerably smaller than that of the second, although the two are theoretically equivalent in the worst case. Their interpretation of this phenomenon was that the approximations used by Bajard et al. made the expansion factor behave differently from what the Central Limit Theorem would predict.
We have realized that this difference actually comes from the implementation of the SmMRq procedure of Bajard et al. in SEAL and PALISADE, which is slightly different from what Bajard et al. had proposed. In this note we show that, once this small difference is fixed, the multiplicative depth of both variants is actually the same in practice.

###### Jiajun Xin, Pei Huang, Lei Chen, Xin Lai, Xiao Zhang, Wulu Li, Yongcan Wang

ePrint Report
Account-model-based blockchain systems have gained popularity due to their ease of use and native support for smart contracts. However, user privacy has become a major concern and a bottleneck to further adoption, because users are not willing to leave permanent public records online. Conventionally, the privacy of users includes transaction confidentiality and anonymity. While confidentiality can be easily protected using confidential transaction techniques, anonymity can be quite challenging in an account-model-based blockchain system, because every transaction in the system inevitably updates both the sender's and the receiver's account balance. Even when the privacy of a blockchain system is well protected, however, regulation becomes a new challenge for its further adoption.

In this paper, we introduce a novel transaction-mix protocol, which provides confidentiality and, moreover, anonymity to account-model-based blockchain systems. By leveraging a state-of-the-art verifiable shuffle scheme to construct a shuffle of confidential transactions, we build a practical anonymous confidential blockchain system, WaterCarver, upon the account model, with the help of confidential transactions, verifiable shuffles, and zero-knowledge proofs. We further provide an efficient and robust solution for dynamic regulation with multiple regulators. Our regulation method achieves flexibility and perfect forward secrecy. Experiments show that the overall Transactions Per Second (TPS) of our system can reach 600 on a simple desktop.


###### Juan Garay, Aggelos Kiayias, Rafail Ostrovsky, Giorgos Panagiotakos, Vassilis Zikas

ePrint Report
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that, in the absence of a private-coin correlated-randomness setup such as a PKI, protocols can tolerate up to $t<n/3$ of the parties being malicious. The introduction of "Nakamoto-style" consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain in honest hands. This assumption of an honest majority of some resource was also extended to other resources such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built that violated the $t<n/3$ bound in terms of the number of party corruptions. The above state of affairs raises the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the $n/3$ lower bound.

In this work we study this question and formally demonstrate how the above paradigm changes the rules of the game in cryptographic definitions. First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality *wrapper*, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm---concretely, using PoWs and honest majority of computing power---and that the traditional $t<n/3$ impossibility results fail when the parties have access to such a network.

We then present constructions for BA and MPC, which given access to such a network tolerate $t<n/2$ corruptions without assuming a private correlated randomness setup, but merely a *fresh* Common Reference String (CRS)---i.e., a CRS which becomes available to the parties at the same time as to the adversary. We also show how to remove this freshness assumption by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.


###### Anna Johnston

ePrint Report
Random data, and the entropy contained within it, is a critical component of information security.
Minimum entropy is the base measurement defined by NIST and the one on which many of their entropy tests are based.
However, minimum entropy does not satisfy the basic requirements for an entropy measurement in either Shannon's original work or Rényi's generalization of entropy.
This document suggests a different way forward with a reclassification of entropy into two classes.
With this differentiation, measurement tools are simplified and allow for more accurate assessments of entropy.
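
The gap between the two measures is easy to see numerically. The sketch below (a general illustration of why the measures diverge; the two-class reclassification itself is this report's own proposal and is not shown) compares Shannon entropy, which averages surprise over all outcomes, with min-entropy, which depends only on the most likely outcome:

```python
# Shannon entropy vs. min-entropy of a skewed distribution.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    # Determined entirely by the single most probable outcome.
    return -math.log2(max(probs))

# One symbol with probability 1/2, the rest uniform over 128 symbols.
probs = [0.5] + [0.5 / 128] * 128
print(round(shannon_entropy(probs), 2))  # 4.5 bits on average
print(round(min_entropy(probs), 2))      # 1.0 bit in the worst case
```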

###### Shweta Agrawal, Rachit Garg, Nishant Kumar, Manoj Prabhakaran

ePrint Report
We introduce the notion of a Functionally Encrypted Datastore which collects data from multiple data-owners, stores it encrypted on an untrusted server, and allows untrusted clients to make select-and-compute queries on the collected data. Little coordination and no communication are required among the data-owners or the clients. Our security and performance profile is similar to that of conventional searchable encryption systems, while the functionality we offer is significantly richer. The client specifies a query as a pair (Q,f), where Q is a filtering predicate that selects some subset of the dataset and f is a function on some computable values associated with the selected data. We provide efficient protocols for various functionalities of practical relevance. We demonstrate the utility, efficiency, and scalability of our protocols via extensive experimentation. In particular, we use our protocols to model computations relevant to Genome-Wide Association Studies, such as Minor Allele Frequency (MAF), Chi-square analysis and Hamming distance.
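
The (Q, f) query model can be sketched as follows. This is a plaintext analogue only (the system itself evaluates such queries over encrypted data, which this sketch does not attempt), with invented record fields, using the MAF computation from the GWAS use case mentioned above:

```python
# Plaintext analogue of a select-and-compute query (Q, f).
records = [
    {"cohort": "A", "genotype": 0},   # genotype = minor-allele count (0, 1, or 2)
    {"cohort": "A", "genotype": 1},
    {"cohort": "A", "genotype": 2},
    {"cohort": "B", "genotype": 0},
]

def query(records, Q, f):
    # Q filters the dataset; f computes over the selected records.
    return f([r for r in records if Q(r)])

def minor_allele_frequency(selected):
    # Each diploid record contributes two alleles.
    return sum(r["genotype"] for r in selected) / (2 * len(selected))

maf = query(records, lambda r: r["cohort"] == "A", minor_allele_frequency)
print(maf)  # → 0.5
```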

###### Justin Holmgren

ePrint Report
We show that the recently introduced notion of round-by-round soundness for interactive proofs (Canetti et al.; STOC 2019) is equivalent to the notion of soundness against state restoration attacks (Ben-Sasson, Chiesa, and Spooner; TCC 2016). We also observe that neither notion is implied by the random-oracle security of the Fiat-Shamir transform.

###### Anita Aghaie, Amir Moradi

ePrint Report
One of the main motivations behind introducing PUFs was their ability to resist physical attacks. Among these, cloning was the major concern of the related scientific literature. Several primitive PUF designs have been introduced to the community, and several machine learning attacks have been shown capable of modeling such constructions. Although a few works have shown how to use Side-Channel Analysis (SCA) leakage of PUF constructions to significantly improve modeling attacks, little attention has been paid to providing corresponding countermeasures.
In this paper, we present a generic technique to operate any PUF primitive in an SCA-secure fashion. We, for the first time, make it possible to apply a provably secure masking countermeasure, Threshold Implementation (TI), to a strong PUF design. As a case study, we concentrate on the Interpose PUF, and based on practical experiments on an FPGA prototype, we demonstrate the ability of our construction to prevent the recovery of intermediate values through SCA measurements.
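
As background on the countermeasure named above, the textbook first-order Threshold Implementation of a single AND gate uses three shares per input; each output share omits one input share index (non-completeness), and the output shares XOR back to the unshared AND (correctness). This is the standard classroom construction, not the paper's Interpose-PUF circuit:

```python
# Three-share Threshold Implementation (TI) of one AND gate.
import random

def share3(v):
    # Split bit v into three random shares with a ^ b ^ c == v.
    a, b = random.randint(0, 1), random.randint(0, 1)
    return a, b, v ^ a ^ b

def ti_and(x1, x2, x3, y1, y2, y3):
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # never touches share index 1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # never touches share index 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # never touches share index 3
    return z1, z2, z3

# Correctness check: recombined output equals x AND y for random sharings.
for _ in range(100):
    x, y = random.randint(0, 1), random.randint(0, 1)
    z1, z2, z3 = ti_and(*share3(x), *share3(y))
    assert z1 ^ z2 ^ z3 == x & y
```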

###### Kaushik Nath, Palash Sarkar

ePrint Report
Within the Transport Layer Security (TLS) Protocol Version 1.3, RFC 7748 specifies elliptic curves targeted at the 128-bit and the 224-bit security levels. For the 128-bit security level, the Montgomery curve Curve25519 and its birationally equivalent twisted Edwards curve Ed25519 are specified, while for the 224-bit security level, the Montgomery curve Curve448 and its birationally equivalent Edwards curve Edwards448 are specified. The contribution of this work is to propose new pairs of Montgomery-Edwards curves at both the 128-bit and the 224-bit security levels. The new curves are nice in the sense that they have very small curve coefficients and base points. Compared to the curves in RFC 7748, the new curves lose two bits of security. The main advantage of the new curves over those in RFC 7748 is that for 64-bit implementation, all the reduction steps on the outputs of additions and subtractions in the ladder algorithm can be omitted. For 64-bit implementations on the Skylake and the Kaby Lake processors, about 21% improvement in speed is achieved at the 128-bit security level and about 28% improvement in speed is obtained at the 224-bit security level.
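
For context, the ladder algorithm referred to above can be sketched as a plain-Python transcription of the RFC 7748 X25519 Montgomery ladder (a textbook sketch, not the authors' implementation). Note that Python's big integers reduce eagerly, so every addition and subtraction below carries a `% P`; in 64-bit limb implementations those reductions have real cost, and omitting them is precisely the saving the proposed curves enable:

```python
# X25519-style Montgomery ladder (RFC 7748 structure) in plain Python.
P = 2**255 - 19
A24 = 121665                      # (A - 2) / 4 for Curve25519's A = 486662

def x25519_ladder(k, u):
    x1 = u % P
    x2, z2 = 1, 0                 # "point at infinity" in projective x-only form
    x3, z3 = x1, 1
    swap = 0
    for t in reversed(range(255)):
        kt = (k >> t) & 1
        swap ^= kt
        if swap:                  # conditional swap driven by the scalar bits
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a = (x2 + z2) % P;  aa = a * a % P
        b = (x2 - z2) % P;  bb = b * b % P
        e = (aa - bb) % P
        c = (x3 + z3) % P;  d = (x3 - z3) % P
        da = d * a % P;     cb = c * b % P
        x3 = (da + cb) % P; x3 = x3 * x3 % P
        z3 = (da - cb) % P; z3 = z3 * z3 % P; z3 = z3 * x1 % P
        x2 = aa * bb % P                      # doubling
        z2 = e * ((aa + e * A24) % P) % P
    if swap:
        x2, z2 = x3, z3
    return x2 * pow(z2, P - 2, P) % P         # affine x via Fermat inversion

assert x25519_ladder(1, 9) == 9               # 1 * basepoint = basepoint
```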

###### Shogo Ochiai, Keiichi Iwamura, Ahmad Akmal Aminuddin Mohd Kamal

ePrint Report
In recent years, the concept of the Internet of Things (IoT), a network that enables everyday objects and electronic devices to communicate with each other, has been extensively discussed. There are three main types of communication that can be assumed in an IoT network: unicast, group, and broadcast communication. Separately, Hamasaki et al. considered geometric characteristics and proposed a method of geometric group key sharing. Thereafter, a key sharing method suitable for sharing a pairwise key by implementing the method proposed by Hamasaki et al. was proposed by Nishigami et al. However, testing this method, we found that when a node and its fellow nodes are attacked together, the keys of the remaining nodes are leaked. Therefore, in this paper, using the feature introduced in geometric group key sharing, we propose a method that enables a pairwise key to be shared securely. In addition, we extend our pairwise key sharing method to group key sharing, achieving a way to efficiently share the pairwise, group, and global keys used in broadcast communication. Finally, we evaluate the efficiency of our proposed method.

###### Dhaval Khandla, Het Shahy, Manish Kumar Bz, Alwyn Roshan Pais, Nishant Raj

ePrint Report
Ciphertext-policy attribute-based encryption (CP-ABE) is a desirable scheme to use in cloud-based applications, especially on IoT devices. As most of these devices are battery-limited and memory-limited, this constrains the design to a robust and straightforward mechanism involving less computation and less memory. However, none of the existing systems are both secure and based on conventional cryptosystems. Here we propose a constant-size secret key and constant-size ciphertext scheme based on the RSA cryptosystem, which performs encryption and decryption in O(1) time complexity. We also prove that the scheme is secure and compare it with existing schemes.

#### 04 November 2019

###### Calgary, Canada, 22 June - 26 June 2020

Event Calendar
Event date: 22 June to 26 June 2020

Submission deadline: 15 February 2020

Notification: 1 April 2020


#### 28 October 2019

###### Elette Boyle, Justin Holmgren, Mor Weiss

ePrint Report
A permuted puzzle problem is defined by a pair of distributions $D_0,D_1$ over $S^n$. The problem is to distinguish samples from $D_0,D_1$, where the symbols of each sample are permuted by a single secret permutation $p$ of $[n]$.

The conjectured hardness of specific instances of permuted puzzle problems was recently used to obtain the first candidate constructions of Doubly Efficient Private Information Retrieval (DE-PIR) (Boyle et al. & Canetti et al., TCC'17). Roughly, in these works the distributions $D_0,D_1$ over $F^n$ are evaluations of either a moderately low-degree polynomial or a random function. This new conjecture seems to be quite powerful, and is the foundation for the first DE-PIR candidates, almost two decades after the question was first posed by Beimel et al. (CRYPTO'00). While permuted puzzles are a natural and general class of problems, their hardness is still poorly understood.

We initiate a formal investigation of the cryptographic hardness of permuted puzzle problems. Our contributions lie in three main directions:

1. Rigorous formalization. We formalize a notion of permuted puzzle distinguishing problems, extending and generalizing the proposed permuted puzzle framework of Boyle et al. (TCC'17).

2. Identifying hard permuted puzzles. We identify natural examples in which a one-time permutation provably creates cryptographic hardness, based on "standard" assumptions. In these examples, the original distributions $D_0,D_1$ are easily distinguishable, but the permuted puzzle distinguishing problem is computationally hard. We provide such constructions in the random oracle model, and in the plain model under the Decisional Diffie-Hellman (DDH) assumption. We additionally observe that the Learning Parity with Noise (LPN) assumption itself can be cast as a permuted puzzle.

3. Partial lower bound for the DE-PIR problem. We make progress towards better understanding the permuted puzzles underlying the DE-PIR constructions, by showing that a toy version of the problem, introduced by Boyle et al. (TCC'17), withstands a rich class of attacks, namely those that distinguish solely via statistical queries.
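
The sampling mechanics of the definition can be sketched as follows. The distribution pair below is invented for illustration and is deliberately *not* a hard instance: permutation-invariant statistics such as symbol counts survive the shuffling (exactly the statistical-query attack surface mentioned in point 3), so a hard instance must match $D_0$ and $D_1$ on all of them, which this pair does not:

```python
# Toy permuted puzzle sampler: every sample from either distribution passes
# through the SAME secret coordinate permutation, fixed once.
import random

n = 8
secret_perm = random.sample(range(n), n)   # the single secret permutation p

def sample(b):
    # D0: first half forced to 0; D1: uniform over {0,1}^n.
    s = ([0] * (n // 2) if b == 0 else
         [random.randint(0, 1) for _ in range(n // 2)])
    s += [random.randint(0, 1) for _ in range(n // 2)]
    return [s[secret_perm[i]] for i in range(n)]   # apply the hidden permutation

# A position-based test ("is coordinate 0 always zero?") is defeated by the
# permutation, but counting zeros, a permutation-invariant statistic, is not:
assert all(sample(0).count(0) >= n // 2 for _ in range(200))
```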
