IACR News
Here you can see all recent updates to the IACR webpage.
20 January 2022
Easwar Vivek Mangipudi, Udit Desai, Mohsen Minaei, Mainack Mondal, Aniket Kate
ePrint Report
The ever-increasing cohort of cryptocurrency users saw a sharp increase in different types of crypto-wallets in the past decade. However, different wallets are non-uniformly adopted in the population today; specifically, emerging multi-device wallets, even with improved security and availability guarantees over their counterparts, are yet to receive proportionate attention and adoption. This work presents a data-driven investigation into the perceptions of cryptocurrency users towards multi-device wallets today, using a survey of 255 crypto-wallet users. Our results revealed two significant groups within our participants: Newbies and Non-newbies. These two groups differ in their usage of crypto-wallets in a statistically significant way. However, both groups were concerned about the possibility of their keys getting compromised and yet were unfamiliar with the guarantees offered by multi-device wallets. After educating the participants about the more secure multi-device wallets, around 70% of them preferred these wallets; however, almost one-third of participants were still not comfortable using them. Our qualitative analysis revealed a gap between the actual security guarantees and the mental models of these participants: they were afraid that using multi-device wallets would result in losing control over their keys (and, in effect, their funds) due to the distribution of key shares. We also investigated the preferred default settings for crypto-wallets across our participants, since multi-device wallets allow a wide range of key-share distribution settings. In the distributed server settings of multi-device wallets, the participants preferred a smaller number of reputed servers (as opposed to a large non-reputed pool). Moreover, considerations about the threat model further affected their preferences, signifying a need for contextualizing default settings.
We conclude the discussion by identifying concrete, actionable design avenues for future multi-device wallet developers to improve adoption.
Charlotte Bonte, Ilia Iliashenko, Jeongeun Park, Hilder V. L. Pereira, Nigel P. Smart
ePrint Report
The NTRU problem is a promising candidate to build efficient Fully Homomorphic Encryption (FHE). However, all the existing proposals (e.g. LTV, YASHE) need so-called `overstretched' parameters of NTRU to enable homomorphic operations. It was shown by Albrecht et al. (CRYPTO 2016) that these parameters are vulnerable against subfield lattice attacks.
Based on a recent, more detailed analysis of the overstretched NTRU assumption by Ducas and van Woerden (ASIACRYPT 2021), we construct two FHE schemes whose NTRU parameters lie outside the overstretched range. The first scheme is based solely on NTRU and demonstrates competitive performance against the state-of-the-art FHE schemes including TFHE. Our second scheme, which is based on both the NTRU and LWE assumptions, outperforms TFHE with a 28% faster bootstrapping and 45% smaller bootstrapping and key-switching keys.
Seiya Nuta, Jacob C. N. Schuldt, Takashi Nishide
ePrint Report
A forward-secure public-key encryption (PKE) scheme prevents eavesdroppers from decrypting past ciphertexts in order to mitigate the damage caused by a potential secret key compromise. In prior works, forward security in a non-interactive setting, such as forward-secure PKE, is achieved by constantly updating (secret) keys. In this paper, we formalize the notion of blockchain-based forward-secure PKE and show the feasibility of constructing a forward-secure PKE scheme without key update (i.e. both the public key and the secret key are immutable), assuming the existence of a proof-of-stake blockchain with the distinguishable forking property introduced by Goyal, et al. (TCC 2017). Our construction uses the proof-of-stake blockchain as an immutable decryption log and witness encryption by Garg, et al. (STOC 2013) to ensure that the same ciphertext cannot be decrypted twice, thereby rendering a compromised secret key useless with respect to decryption of past ciphertext the legitimate user has already decrypted.
Keita Emura
ePrint Report
Public-key encryption with keyword search (PEKS) does not provide trapdoor privacy, i.e., keyword information is leaked through trapdoors. To prevent this information leakage, public key authenticated encryption with keyword search (PAEKS) has been proposed, where a sender's secret key is required for encryption, and a trapdoor is associated with not only a keyword but also the sender. Liu et al. (ASIACCS 2022) proposed a generic construction of PAEKS based on word-independent smooth projective hash functions (SPHFs) and PEKS. In this paper, we propose a new generic construction of PAEKS. The basic construction methodology is the same as that of the Liu et al. construction, where each keyword is converted into an extended keyword using SPHFs, and PEKS is used for extended keywords. Nevertheless, our construction is more efficient than Liu et al.'s in the sense that we only use one SPHF, but Liu et al. used two SPHFs. In addition, for consistency we considered a security model that is stronger than Liu et al.'s. Briefly, Liu et al. considered only keywords even though a trapdoor is associated with not only a keyword but also a sender. Thus, a trapdoor associated with a sender should not work against ciphertexts generated by the secret key of another sender, even if the same keyword is associated. Our consistency definition considers a multi-sender setting and captures this case. In addition, for indistinguishability against chosen keyword attack (IND-CKA) and indistinguishability against inside keyword guessing attack (IND-IKGA), we use a stronger security model defined by Qin et al. (ProvSec 2021), where an adversary is allowed to query challenge keywords to the encryption and trapdoor oracles. We also highlight several issues associated with the Liu et al. construction in terms of hash functions, e.g., their construction does not satisfy the consistency that they claimed to hold.
Erik Aronesty, David Cash, Yevgeniy Dodis, Daniel H. Gallancy, Christopher Higley, Harish Karthikeyan, Oren Tysor
ePrint Report
We build the first sub-linear (in fact, potentially constant-time) public-key searchable encryption system:
− server can publish a public key $PK$.
− anybody can build an encrypted index for document $D$ under $PK$.
− client holding the index can obtain a token $z_w$ from the server to check if a keyword $w$ belongs to $D$.
− search using $z_w$ is almost as fast (e.g., sub-linear) as the non-private search.
− server granting the token does not learn anything about the document $D$, beyond the keyword $w$.
− yet, the token $z_w$ is specific to the pair $(D, w)$: the client does not learn if other keywords $w'\neq w$ belong to $D$, or if $w$ belongs to other, freshly indexed documents $D'$.
− server cannot fool the client by giving a wrong token $z_w$.
We call such a primitive Encapsulated Search Index (ESI). Our ESI scheme can be made $(t, n)$-distributed among $n$ servers in the best possible way: non-interactive, verifiable, and resilient to any coalition of up to $(t − 1)$ malicious servers. We also introduce the notion of delegatable ESI and show how to extend our construction to this setting.
Our solution — including public indexing, sub-linear search, delegation, and distributed token generation — is deployed as a commercial application by Atakama.
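The $(t, n)$-distributed token generation relies on standard threshold techniques. As a generic illustration of $(t, n)$ secret sharing of a key (a Shamir sketch under illustrative parameters, not Atakama's deployed code):

```python
import random

P = 2**127 - 1  # prime field modulus (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i, for i = 1..n.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```

Fewer than $t$ shares reveal nothing about the secret, which is what makes a coalition of up to $(t − 1)$ malicious servers harmless.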
19 January 2022
University of Cape Town, Cape Town, South Africa
Job Posting
The South African Reserve Bank Research Chair in the Faculty of Commerce at the University of Cape Town is inviting applications for PhD students and Postdoctoral Research Fellows. The overarching theme of the fellowships is “the future of finance” and students and recent graduates with a background in economics, computer science, mathematics, finance, and related disciplines are invited to apply.
Any of the following research areas is of particular interest:
Central bank digital currencies
Financial interconnectedness
Cybersecurity
Successful applicants will work within a research group that strives for academic excellence and is interested in all aspects of central bank digital currencies (including financial stability, privacy, interoperability, and inclusivity), as well as all aspects of cybersecurity (in particular economic and network models of cybersecurity).
Applications are open now and are considered on a rolling basis until 30 April or until the positions are filled. Appointments begin as soon as is feasible. The tenure of the Postdoctoral fellowship is up to three years, while the tenure of PhD positions is usually three years but can be extended to up to five years. Due to the ongoing pandemic, successful applicants can choose to work remotely in 2022.
Postdoctoral fellows receive a fellowship of R350,000 per annum; no benefits are included in the value of the fellowship. An additional travel allowance of R30,000 p.a. is available for successful applicants. The successful applicant will be required to comply with the University’s approved policies, procedures and practices for the postdoctoral sector.
PhD students receive a fellowship of R210,000 p.a. and an additional travel allowance of R20,000 p.a. in addition to access to university-wide funding for conference travel. We typically arrange for PhD students to spend at least one semester at a leading international university and help organize an internship at a leading international policy institution. PhD students are expected to have extremely strong quantitative skills.
Closing date for applications:
Contact: anda.ngcaba@uct.ac.za
More information: https://www.finhub.org.za/vacancies#research_team
Quantstamp Inc.
Job Posting
Leaders in Blockchain Security
Quantstamp's mission is to secure the decentralized internet; it has protected over $100B in digital asset risk from hackers. More than 200 startups, foundations and enterprises work with Quantstamp to keep their innovative products safe.
Auditing Engineer (Full-time/Part-time, Remote)
- Background in Computer Science or any related field such as Mathematics & Physics.
- Loves to find bugs in software systems and has a great eye for detail.
- Fluent English communication, both written and spoken.
- Fluency in reading simple to medium complexity Solidity smart contracts.
- Understands the most common security pitfalls of Solidity smart contracts.
- Understands the basics of blockchain.
Responsibilities
- Perform code reviews/audits of blockchain projects in small teams of engineers.
- Interact with other team members to discuss the likelihood and impact of findings.
- Write and review audit reports before they are shared with the customer.
Optional opportunities
- Interact with customers to clarify technical requirements and to answer technical questions.
- Perform research on a new topic in the crypto space and provide internal “Lunch and Learn” (LnL) sessions. There is an option to also record and publish LnLs on YouTube or other social media platforms.
Perks
- Competitive compensation package (commensurate with experience) + performance and referral bonuses.
- Free gym membership or any virtual alternative of your choice.
- Rent your own desk in a co-working space or work from anywhere at any time.
- 100% remote and flexible working hours.
- Learn about the hottest and newest products and trends in the crypto space before they appear in any news outlets.
- Generous paid time off, including maternity/paternity leave.
- Retirement/pension plan.
- Join quarterly all-expenses-paid retreats in exotic/exclusive locations with the whole team.
Closing date for applications:
Contact: Peter Slankis - Head of Talent Acquisition
More information: https://quantstamp.com/
Telecom Paris, Institut Polytechnique de Paris
Job Posting
The Cybersecurity-Cryptography team at Telecom Paris invites applications for a full professor position in Cybersecurity.
Application deadline: 18 April 2022.
Link to the call: https://institutminestelecom.recruitee.com/l/en/o/professeur-en-cybersecurite-securite-des-donnees-et-des-communications-dans-les-infrastructures-numeriques-a-telecom-paris-fh-cdi
Closing date for applications:
Contact: Please feel free to reach out to Hieu Phan (hieu.phan@telecom-paris.fr) or any member of the Cybersecurity-Cryptography team (https://www.telecom-paris.fr/C2) for more information.
18 January 2022
Marshall Ball, Dana Dachman-Soled, Julian Loss
ePrint Report
We present the first truly explicit constructions of non-malleable codes against tampering by bounded polynomial size circuits. These objects imply unproven circuit lower bounds and our construction is secure provided E requires exponential size nondeterministic circuits, an assumption from the derandomization literature.
Prior works on NMC for polysize circuits either required an untamperable CRS [Cheraghchi, Guruswami ITCS'14; Faust, Mukherjee, Venturi, Wichs EUROCRYPT'14] or very strong cryptographic assumptions [Ball, Dachman-Soled, Kulkarni, Lin, Malkin EUROCRYPT'18; Dachman-Soled, Komargodski, Pass CRYPTO'21]. Both works in the latter category only achieve non-malleability with respect to efficient distinguishers and, more importantly, utilize cryptographic objects for which no provably secure instantiations are known outside the random oracle model. In this sense, none of the prior works yields fully explicit codes from non-heuristic assumptions. Our assumption is not known to imply the existence of one-way functions, which suggests that cryptography is unnecessary for non-malleability against this class.
Technically, security is shown by non-deterministically reducing polynomial size tampering to split-state tampering. The technique is general enough that it allows us to construct the first seedless non-malleable extractors [Cheraghchi, Guruswami TCC'14] for sources sampled by polynomial size circuits [Trevisan, Vadhan FOCS'00] (resp. recognized by polynomial size circuits [Shaltiel CC'11]) and tampered by polynomial size circuits. Our construction is secure assuming E requires exponential size $\Sigma_4$-circuits (resp. $\Sigma_3$-circuits); this assumption is the state-of-the-art for extracting randomness from such sources (without non-malleability).
We additionally observe that non-malleable codes and non-malleable secret sharing [Goyal, Kumar STOC'18] are essentially equivalent with respect to polynomial size tampering. In more detail, assuming E is hard for exponential size nondeterministic circuits, any efficient secret sharing scheme can be made non-malleable against polynomial size circuit tampering.
Unfortunately, all of our constructions only achieve inverse polynomial (statistical) security. Extending a result from [Applebaum, Artemenko, Shaltiel, Yang CC'16] we show it is impossible to do better using black-box reductions. However, we extend the notion of relative error from [Applebaum, Artemenko, Shaltiel, Yang CC'16] to non-malleable extractors and show that they can be constructed from similar assumptions. We additionally observe that relative-error non-malleable extractors can be utilized to render a broad class of cryptographic primitives tamper and leakage resilient, while preserving negligible security guarantees.
Yevgeniy Dodis, Harish Karthikeyan, Daniel Wichs
ePrint Report
One of the ultimate goals of symmetric-key cryptography is to find a rigorous theoretical framework for building block ciphers from small components, such as cryptographic $S$-boxes, and then argue why iterating such small components for sufficiently many rounds would yield a secure construction. Unfortunately, a fundamental obstacle towards reaching this goal comes from the fact that traditional security proofs cannot get security beyond $2^{-n}$, where $n$ is the size of the corresponding component.
As a result, prior provably secure approaches --- which we call "big-box cryptography" --- always made $n$ larger than the security parameter, which led to several problems: (a) the design was too coarse to really explain practical constructions, as (arguably) the most interesting design choices, which happen when instantiating such "big-boxes", were completely abstracted out; (b) the theoretically predicted number of rounds for the security of this approach was always dramatically smaller than in reality, where the "big-box" building block could not be made as ideal as required by the proof. For example, Even-Mansour (and, more generally, key-alternating) ciphers completely ignored the substitution-permutation network (SPN) paradigm which is at the heart of most real-world implementations of such ciphers.
In this work, we introduce a novel paradigm for justifying the security of existing block ciphers, which we call small-box cryptography. Unlike the "big-box" paradigm, it allows one to go much deeper inside the existing block cipher constructions, by only idealizing a small (and, hence, realistic!) building block of very small size $n$, such as an 8-to-32-bit $S$-box. It then introduces a clean and rigorous mixture of proofs and hardness conjectures which allow one to lift traditional, and seemingly meaningless, "at most $2^{-n}$" security proofs for reduced-round idealized variants of the existing block ciphers into meaningful, full-round security justifications of the actual ciphers used in the real world.
We then apply our framework to the analysis of SPN ciphers (e.g., generalizations of AES), getting quite reasonable and plausible concrete hardness estimates for the resulting ciphers. We also apply our framework to the design of stream ciphers. Here, however, we focus on the simplicity of the resulting construction, for which we managed to find a direct "big-box"-style security justification, under the well studied and widely believed eXact Linear Parity with Noise (XLPN) assumption.
Overall, we hope that our work will initiate many follow-up results in the area of small-box cryptography.
Yevgeniy Dodis, Harish Karthikeyan, Daniel Wichs
ePrint Report
Forward security (FS) ensures that corrupting the current secret key in the system preserves the privacy or integrity of the prior usages of the system. Achieving forward security is especially hard in the setting of public-key encryption (PKE), where time is divided into periods, and in each period the receiver derives the next-period secret key from their current secret key, while the public key stays constant. Indeed, all current constructions of FS-PKE are built from hierarchical identity-based encryption (HIBE) and are rather complicated.
Motivated by applications to secure messaging, recent works of Jost et al. (Eurocrypt’19) and Alwen et al. (CRYPTO’20) consider a natural relaxation of FS-PKE, which they term updatable PKE (UPKE). In this setting, the transition to the next period can be initiated by any sender, who can compute a special update ciphertext. This ciphertext directly produces the next-period public key and can be processed by the receiver to compute the next-period secret key. If done honestly, future (regular) ciphertexts produced with the new public key can be decrypted with the new secret key, but past such ciphertexts cannot be decrypted with the new secret key. Moreover, this is true even if all other previous-period updates were initiated by untrusted senders.
Both papers also constructed a very simple UPKE scheme based on the CDH assumption in the random oracle model. However, they left open the question of building such schemes in the standard model, or based on other (e.g., post-quantum) assumptions, without using the heavy HIBE techniques. In this work, we construct two efficient UPKE schemes in the standard model, based on the DDH and LWE assumptions, respectively. Somewhat interestingly, our constructions gain their efficiency (compared to prior FS-PKE schemes) by using tools from the area of circular-secure and leakage resilient public-key encryption schemes (rather than HIBE).
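The simple random-oracle UPKE scheme referred to above is roughly of the following shape: an update ciphertext encrypts a random shift $\delta$ under the current public key, and the next-period key pair is $(sk + \delta,\; pk \cdot g^{\delta})$. A toy Python sketch of these mechanics (deliberately tiny, insecure parameters; the hashed-ElGamal encoding is an illustrative assumption, not the papers' exact scheme):

```python
import hashlib
import random

# Toy parameters: the order-11 subgroup generated by 2 in Z_23^* (insecure!).
p, q, g = 23, 11, 2

def H(x):
    """Hash a group element to a mask in Z_q (illustrative random-oracle stand-in)."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), 'big') % q

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def enc(pk, m):
    """Hashed-ElGamal-style encryption of a message m in Z_q."""
    r = random.randrange(1, q)
    return pow(g, r, p), (m + H(pow(pk, r, p))) % q

def dec(sk, ct):
    c1, c2 = ct
    return (c2 - H(pow(c1, sk, p))) % q

def update(pk):
    """Any sender refreshes the key: pick delta, encrypt it under the current
    pk, and output the next-period public key pk * g^delta."""
    delta = random.randrange(1, q)
    return enc(pk, delta), (pk * pow(g, delta, p)) % p

def process_update(sk, up_ct):
    """Receiver derives the next-period secret key sk + delta."""
    return (sk + dec(sk, up_ct)) % q

sk0, pk0 = keygen()
up_ct, pk1 = update(pk0)
sk1 = process_update(sk0, up_ct)
assert pow(g, sk1, p) == pk1   # the updated key pair is consistent
```

Forward security comes from the fact that $\delta$ is encrypted under the old public key: the new secret key $sk + \delta$ alone does not reveal the old $sk$, so previous-period ciphertexts stay private after a compromise.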
Jakub Klemsa, Melek Önen
ePrint Report
Recent advances in Fully Homomorphic Encryption (FHE) allow for a practical evaluation of non-trivial functions over encrypted data. In particular, novel approaches for combining ciphertexts broadened the scope of prospective applications. However, for arithmetic circuits, the overall complexity grows with the desired precision and there is only a limited space for parallelization.
In this paper, we put forward several methods for fully parallel addition of multi-digit integers encrypted with the TFHE scheme. Since these methods handle integers in a special representation, we also revisit the signum function, first addressed by Bourse et al., and we propose a method for computing the maximum of two numbers, both with particular respect to parallelization. On top of that, we outline an approach for multiplication by a known integer.
According to our experiments, the fastest approach for parallel addition of 31-bit encrypted integers in an idealized setting with 32 threads is estimated to be more than 6x faster than the fastest sequential approach. Finally, we demonstrate our algorithms on an evaluation of a practical neural network.
Anghel Florin, Asandoaiei David, Tabacaru Robert
ePrint Report
The study of randomness has always been a topic of significant relevance, and its importance in cryptography is undeniable. In this paper, we provide a short introduction to pseudo-random number generators, their applications in cryptography, and an analysis of the Discrete Fourier Transform statistical test.
Our contributions are a compilation of results from multiple runs of several popular pseudo-random number generators and a Python implementation for computing the probability of a type II error. We intend to underline the weak points of the Discrete Fourier Transform test by showcasing results on large amounts of data, and to show how testing bigger sequences of bits can help reduce the probability of type II errors.
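For reference, the Discrete Fourier Transform (spectral) test under analysis can be sketched in a few lines, following the NIST SP 800-22 description (constants as in the standard test; this is not the authors' code):

```python
import math
import numpy as np

def dft_test(bits):
    """NIST SP 800-22 DFT (spectral) test; returns the p-value."""
    n = len(bits)
    x = 2 * np.asarray(bits) - 1              # map {0,1} -> {-1,+1}
    mags = np.abs(np.fft.fft(x))[: n // 2]    # first half of the spectrum
    T = math.sqrt(math.log(1 / 0.05) * n)     # 95% peak-height threshold
    N0 = 0.95 * n / 2                         # expected # of peaks below T
    N1 = np.count_nonzero(mags < T)           # observed # of peaks below T
    d = (N1 - N0) / math.sqrt(n * 0.95 * 0.05 / 4)
    return math.erfc(abs(d) / math.sqrt(2))

rng = np.random.default_rng(1)
p = dft_test(rng.integers(0, 2, 2**14))  # a good generator typically passes
```

A sequence fails at significance level 0.01 when the p-value drops below 0.01; strongly periodic inputs (e.g. a constant bit stream) fail decisively, while the type II error the paper studies is the probability that a biased generator still passes.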
Nimrod Aviram, Benjamin Dowling, Ilan Komargodski, Kenneth G. Paterson, Eyal Ronen, Eylon Yogev
ePrint Report
Combining cryptographic keys, some of which may be maliciously formed, into one key that is (pseudo)random is a central task in cryptographic systems. For example, it is a crucial component in the widely used TLS and Signal protocols. From an analytical standpoint, current security proofs model such key combiners as dual-PRFs -- a function which is a PRF when keyed by either of its two inputs -- guaranteeing pseudo-randomness if one of the keys is compromised or even maliciously chosen by an adversary.
However, in practice, implementations of key combiners significantly depart from the ``dual-PRF'' standard set by provable schemes. Specifically, existing implementations typically use heuristic approaches and rely on strong assumptions that are only known to be satisfied in ideal models (such as modelling underlying hash functions as random oracles), or which are not justified at all by known security results. We describe several cases of deployed protocols where this is the case, focussing on the use of HKDF as a dual-PRF. Unfortunately, such strong assumptions on cryptographic hash functions tend not to withstand the test of time, often leading to deployed systems that eventually become completely insecure; experience also shows that upgrading already-deployed cryptography from deprecated hash functions to newer ones is a slow process that can take many years. Finally, we consider it sub-optimal that the new hybrid key exchange protocols currently under development, which combine classical and post-quantum key exchange, risk embedding the improper use of key combiners even more deeply.
In this work, we narrow the gap between theory and practice for key combiners. In particular, we give a construction of a dual-PRF that can be used as a drop-in replacement for current heuristic key combiners in a range of protocols. Our construction follows a theoretical construction by Bellare and Lysyanskaya, and is based on concrete hardness assumptions, phrased in the spirit of one-wayness. Therefore, our construction provides security unless extremely strong attacks against the underlying cryptographic hash function are discovered. Moreover, since these assumptions are considered post-quantum secure, our constructions can safely be used in new hybrid protocols. From a practical perspective, our dual-PRF construction is highly efficient, adding only a negligible overhead of a few microseconds compared to currently used (heuristic) approaches. We believe that our approach exemplifies a perfect middle-ground for practically efficient constructions that are supported by realistic hardness assumptions.
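For context, the heuristic combiner pattern critiqued above can be sketched as follows (a minimal illustration of HKDF-Extract used as a key combiner; the key values are placeholders):

```python
import hashlib
import hmac

def hkdf_extract(salt, ikm):
    """HKDF-Extract (RFC 5869): PRK = HMAC-Hash(salt, IKM)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def combine(k_classical, k_pq):
    """Heuristic combiner seen in deployed hybrid designs: one secret is the
    HKDF salt, the other is the input keying material.  Treating the output
    as pseudorandom whenever *either* input is good implicitly assumes HMAC
    is a dual-PRF -- exactly the assumption the paper scrutinizes."""
    return hkdf_extract(k_classical, k_pq)

k1 = b"\x01" * 32   # placeholder for, e.g., an ECDH shared secret
k2 = b"\x02" * 32   # placeholder for, e.g., a post-quantum KEM shared secret
session_key = combine(k1, k2)
assert len(session_key) == 32
```

The construction in the present paper is intended as a drop-in replacement for this pattern, backed by a concrete one-wayness-style assumption instead of an idealized-hash heuristic.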
Françoise Levy-dit-Vehel, Maxime Roméas
ePrint Report
Proofs of Retrievability (PoR) protocols ensure that a client can fully retrieve a large outsourced file from an untrusted server. Good PoRs should have low communication complexity, small storage overhead and clear security guarantees with tight security bounds. The focus of this work is to design good PoR schemes with simple security proofs. To this end, we use the Constructive Cryptography (CC) setting by Maurer [13]. We propose a framework for the design of secure and efficient PoR schemes based on Locally Correctable Codes (LCC). We give a first instantiation of our framework using the high rate lifted codes introduced by Guo et al. [5]. This yields an infinite family of good PoRs. We assert their security by solving a finite geometry problem, giving an explicit formula for the probability of an adversary to fool the client. Using the local correctability properties of Tanner codes, we get another instantiation of our framework and derive an analogous formula for the success probability of the audit.
Kang Yang, Xiao Wang
ePrint Report
In this paper, we study zero-knowledge (ZK) proofs for circuit satisfiability that can efficiently prove a statement to $n$ verifiers at a time. The proofs are secure against the collusion of a prover and a subset of $t$ verifiers. We refer to such ZK proofs as multi-verifier zero-knowledge (MVZK) proofs, and focus on the case where a majority of verifiers are honest (i.e., $t < n/2$). We construct efficient MVZK protocols where the prover sends one message to each verifier, while the verifiers only exchange one round of messages (or, to achieve information-theoretic security, a number of rounds between verifiers that is logarithmic in the circuit size).
In addition to the round complexity, our MVZK protocols in the computational setting achieve particularly high practicality. These protocols are streamable and only require a memory proportional to what is needed to evaluate the circuit in the clear. In terms of communication cost, when $t < n/2$, the prover sends $1/2 + o(1)$ field elements per multiplication gate to every verifier. When the threshold of corrupted verifiers $t < n(1/2 - \epsilon)$ for any $0 < \epsilon < 1/2$, we can further reduce the communication to $O(1/n)$ field elements per multiplication gate per verifier.
Daniel Escudero
ePrint Report
This text serves as a general guide to secure multiparty computation based on secret-sharing, focusing more on the practical aspects of the techniques and constructions than on their theoretical grounds.
This work in progress currently includes an introduction to several core concepts in secure multiparty computation, an overview of simulation-based security, and detailed constructions for honest and two-thirds honest majority MPC, and also dishonest majority in the preprocessing model.
Nicu Neculache, Vlad-Andrei Petcu, Emil Simion
ePrint Report
Statistical testing is a mechanism used across many domains, providing a method for making quantitative decisions about a particular sample. It plays a major role in selecting and testing random and pseudorandom generators whose output may be used in cryptography, specifically for encryption, decryption, and the generation of keys or sub-keys. In this paper we study one of the NIST SP 800-22 random number generation tests. We give an overview of statistical testing and its importance for cryptography, then focus on one of the tests, the Binary Matrix Rank Test. We provide a logical schema and a new code implementation in Python 3. We then evaluate the test by running it on a collection of well-chosen samples and, based on the gathered results, determine whether each binary input sequence can be classified as random, depending on the density of its bits.
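The core of the Binary Matrix Rank Test can be sketched in a few lines of Python. This is a minimal illustrative sketch following the published NIST SP 800-22 test description (32×32 matrices, the standard asymptotic rank probabilities, and a chi-squared statistic with p-value $e^{-\chi^2/2}$); it is not the paper's implementation, and the function names are assumed.

```python
import math

def gf2_rank(rows):
    """Rank over GF(2) of a binary matrix whose rows are int bitmasks."""
    pivots = []
    for r in rows:
        for p in pivots:
            r = min(r, r ^ p)  # reduce r against the basis built so far
        if r:
            pivots.append(r)
    return len(pivots)

def binary_matrix_rank_test(bits: str, m: int = 32) -> float:
    """NIST SP 800-22 Binary Matrix Rank Test (sketch). Returns a p-value."""
    n_matrices = len(bits) // (m * m)
    counts = {"full": 0, "m1": 0, "lower": 0}
    for k in range(n_matrices):
        block = bits[k * m * m:(k + 1) * m * m]
        rows = [int(block[i * m:(i + 1) * m], 2) for i in range(m)]
        r = gf2_rank(rows)
        if r == m:
            counts["full"] += 1
        elif r == m - 1:
            counts["m1"] += 1
        else:
            counts["lower"] += 1
    # Asymptotic probabilities for rank m, m-1 and below (m = 32), per SP 800-22.
    p_full, p_m1 = 0.2888, 0.5776
    p_low = 1.0 - p_full - p_m1
    chi2 = sum((obs - p * n_matrices) ** 2 / (p * n_matrices)
               for obs, p in [(counts["full"], p_full),
                              (counts["m1"], p_m1),
                              (counts["lower"], p_low)])
    return math.exp(-chi2 / 2)  # p-value for 2 degrees of freedom
```

A sequence passes the test when the returned p-value exceeds the chosen significance level (NIST uses 0.01); a degenerate input such as an all-zero stream produces only rank-deficient matrices and is rejected.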
Paul Frixons, María Naya-Plasencia, André Schrottenloher
ePrint Report
In this paper, we study quantum key-recovery attacks on block ciphers. While it is well known that a quantum adversary can generically speed up an exhaustive search for the key, much less is known about how to exploit specific vulnerabilities of a cipher to accelerate this procedure. In this context, we show how to convert classical boomerang and mixing boomerang attacks into efficient quantum key-recovery attacks. In some cases, we can even obtain a quadratic speedup, the same as for simple differential attacks. We apply this technique to a 5-round attack on SAFER++.
Kaiyi Zhang, Hongrui Cui, Yu Yu
ePrint Report
Hash-based signatures offer a conservative post-quantum alternative, with arguably better-understood security than other post-quantum candidates. Nevertheless, a major drawback that makes them less favorable to deploy in practice is their (relatively) large signature size and long signing and verification times.
In this paper, we introduce SPHINCS-$\alpha$, a stateless hash-based signature scheme, which benefits from a twofold improvement. First, we provide an improved Winternitz one-time signature with an efficient size-optimal encoding, which might be of independent interest. Second, we give a variant of the few-time signature scheme, FORC, by applying the Winternitz method. Plugging the two improved components into the framework of the state-of-the-art (stateless) hash-based SPHINCS$^+$, with carefully chosen parameters, yields a notable performance improvement. In particular, under the ``small'' series parameter set aiming for compact signatures, our scheme reduces signature size and signing time by 8-11% and 3-15% respectively, compared to SPHINCS$^+$ at all security levels. For the ``fast'' series that prioritizes computation time, our scheme exhibits better performance in general. For example, when instantiating the simple tweakable hash function with SHA-256, our scheme reduces the signing and verification time by 7-10% and up to 10% respectively, while keeping roughly the same signature size. The security proofs/estimates follow the framework of SPHINCS$^+$. To facilitate a fair comparison, we give an implementation of SPHINCS-$\alpha$ by adapting that of SPHINCS$^+$, and we provide a theoretical estimate of the number of hash function calls.
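The hash-chain idea underlying the Winternitz one-time signature that SPHINCS-$\alpha$ improves on can be sketched briefly. This is a toy illustration of the *classic* WOTS construction (hash chains indexed by base-$w$ message digits plus a checksum), with assumed function names and SHA-256 as the chaining hash; it deliberately omits the paper's size-optimal encoding as well as the keyed/tweakable hashing that real schemes require.

```python
import hashlib
import os

W = 16  # Winternitz parameter: each chain has W positions (base-16 digits)

def chain(x: bytes, steps: int) -> bytes:
    """Apply the hash `steps` times (toy stand-in for a tweakable hash)."""
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def to_digits(msg: bytes):
    """Split a short message into base-W digits, appending a checksum."""
    assert len(msg) <= 8  # keeps the checksum within two base-16 digits
    digits = []
    for b in msg:
        digits += [b >> 4, b & 0xF]
    checksum = sum(W - 1 - d for d in digits)
    digits += [(checksum >> 4) & 0xF, checksum & 0xF]
    return digits

def keygen(n_chunks: int):
    sk = [os.urandom(32) for _ in range(n_chunks)]
    pk = [chain(s, W - 1) for s in sk]  # chain ends are the public key
    return sk, pk

def sign(sk, msg: bytes):
    # Walk each chain forward by the corresponding message digit.
    return [chain(sk[i], d) for i, d in enumerate(to_digits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    # Completing each chain to its end must reproduce the public key.
    return all(chain(sig[i], W - 1 - d) == pk[i]
               for i, d in enumerate(to_digits(msg)))
```

The checksum digits prevent forgery by chain advancement: decreasing any message digit increases the checksum, so a forger would have to walk some chain *backwards*, which requires inverting the hash. The encoding of digits into chains is exactly the component where the abstract's size-optimal encoding improves on this textbook layout.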