International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

02 November 2024

Computer Science Department, University of Oxford
Job Posting
Oxford University’s Computer Science Department is hiring four new faculty members. Most of the positions are open to all areas of computer science. The closing date for the Associate Professor and Professor of Computer Science positions is 12 noon on 18 December 2024. There is also a Chair in Computer Science and a joint faculty position with philosophy in AI ethics. For more information, see here: https://www.cs.ox.ac.uk/aboutus/vacancies/vacancy-faculty-hiring.html

Closing date for applications:

Contact: Jo Francis, Computer Science Department, University of Oxford.

More information: https://www.cs.ox.ac.uk/aboutus/vacancies/vacancy-faculty-hiring.html


01 November 2024

Valerio Cini, Hoeteck Wee
ePrint Report
We present a key-policy attribute-based encryption (ABE) scheme for circuits based on the Learning With Errors (LWE) assumption whose key size is independent of the circuit depth. Our result constitutes the first improvement in almost a decade over the ABE-for-circuits schemes from LWE of Gorbunov, Vaikuntanathan, and Wee (STOC 2013) and Boneh et al. (EUROCRYPT 2014) -- we reduce the key size in the latter from $\mathsf{poly}(\mbox{depth},\lambda)$ to $\mathsf{poly}(\lambda)$. The starting point of our construction is a recent ABE scheme of Li, Lin, and Luo (TCC 2022), which achieves $\mathsf{poly}(\lambda)$ key size but requires pairings and generic bilinear groups in addition to LWE; we introduce new lattice techniques to eliminate the additional requirements.
Ahmad Khoureich Ka
ePrint Report
The enormous potential of Attribute-Based Encryption (ABE) in the context of IoT has driven researchers to propose pairing-free ABE schemes suitable for resource-constrained devices. Unfortunately, many of these schemes have turned out to be insecure. This fact seems to reinforce the view of some authors that instantiating Identity-Based Encryption (IBE) in plain Decision Diffie-Hellman (DDH) groups is impossible. In this paper, we provide a generic AND-gate access-structured Ciphertext-Policy ABE (CP-ABE) scheme with a secret access policy from Inner-Product Functional Encryption (IPFE). We also propose an instantiation of that generic CP-ABE scheme from the DDH assumption. From our generic CP-ABE scheme we derive an IBE scheme by introducing the concept of Clustered Identity-Based Encryption (CIBE). Our schemes show that it is indeed possible to construct practical and secure IBE and ABE schemes based on the classical DDH assumption.
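As background for readers new to IPFE-based constructions, here is one standard, purely illustrative way an AND-gate policy over attributes can be decided by a single inner product, i.e., the kind of test an IPFE scheme can evaluate under encryption. This is a hypothetical plaintext encoding for intuition only, not necessarily the encoding used in the paper.

# Illustrative plaintext sketch: deciding an AND-gate attribute policy with an
# inner product. Hypothetical encoding for intuition only; not the paper's scheme.

def and_policy_vector(required, universe):
    # 1 for attributes the policy requires, 0 otherwise
    return [1 if a in required else 0 for a in universe]

def user_vector(owned, universe):
    # 1 if the user lacks the attribute, 0 if the user holds it
    return [0 if a in owned else 1 for a in universe]

def satisfies(required, owned, universe):
    # The inner product counts required-but-missing attributes,
    # so it is 0 exactly when the AND-gate policy is satisfied.
    p = and_policy_vector(required, universe)
    u = user_vector(owned, universe)
    return sum(pi * ui for pi, ui in zip(p, u)) == 0

universe = ["student", "cs-dept", "year-3"]
print(satisfies({"student", "cs-dept"}, {"student", "cs-dept", "year-3"}, universe))  # True
print(satisfies({"student", "cs-dept"}, {"student"}, universe))                       # False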
Yuyin Yu, Jingchen Li, Nadiia Ichanska, Nikolay Kaleyski
ePrint Report
Yu et al. described an algorithm for computational searches for quadratic APN functions over the finite field $\mathbb{F}_{2^n}$, and used this algorithm to classify all quadratic APN functions with coefficients in $\mathbb{F}_{2}$ for dimensions $n$ up to 9. In this paper, we speed up the running time of that algorithm by a factor of approximately $\frac{n \times 2^n}{n^3}$. Based on this result, we give a complete classification of all quadratic APN functions over $\mathbb{F}_{2^{10}}$ with coefficients in $\mathbb{F}_{2}$. We also perform some partial computations for quadratic APN functions over $\mathbb{F}_{2^{11}}$ with coefficients in $\mathbb{F}_{2}$, and conjecture that they form 6 CCZ-inequivalent classes, which also correspond to known APN functions.
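For readers unfamiliar with the property being classified here: a function $F$ over $\mathbb{F}_{2^n}$ is APN if every derivative $x \mapsto F(x+a)+F(x)$ (for $a \neq 0$) is 2-to-1. A minimal brute-force check, using the known quadratic APN (Gold) function $x^3$ over $\mathbb{F}_{2^3}$ as a toy example; this is not the authors' search algorithm.

# Brute-force APN check for F(x) = x^3 over GF(2^3); the modulus
# x^3 + x + 1 is one standard choice of irreducible polynomial.
N = 3
MOD = 0b1011  # x^3 + x + 1

def gf_mul(a, b):
    # carry-less multiplication with reduction modulo MOD
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def F(x):
    return gf_mul(gf_mul(x, x), x)  # x^3, a Gold function

def is_apn(f, n):
    size = 1 << n
    for a in range(1, size):
        counts = {}
        for x in range(size):
            b = f(x ^ a) ^ f(x)
            counts[b] = counts.get(b, 0) + 1
        # APN means no value is hit more than twice by any derivative
        if max(counts.values()) > 2:
            return False
    return True

print(is_apn(F, N))  # True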
Quinten Norga, Suparna Kundu, Uttam Kumar Ojha, Anindya Ganguly, Angshuman Karmakar, Ingrid Verbauwhede
ePrint Report
Digital signature schemes based on multivariate- and code-based hard problems are promising alternatives to lattice-based signature schemes, due to their smaller signature size. Hence, several candidates in the ongoing additional standardization process for quantum-secure digital signature (DS) schemes by the National Institute of Standards and Technology (NIST) rely on such alternate hard problems. Gaussian Elimination (GE) is a critical component in the signing procedure of these schemes. In this paper, we provide a masking scheme for GE with back substitution to defend against first- and higher-order attacks. To the best of our knowledge, this work is the first to analyze and propose masking techniques for multivariate- or code-based DS algorithms. We propose a masked algorithm for transforming a system of linear equations into row-echelon form. This is realized by introducing techniques for efficiently making the leading (pivot) elements one while avoiding costly conversions between Boolean and multiplicative masking at all orders. We also propose a technique for efficient masked back substitution, which eventually enables a secure unmasking of the public output. We evaluate the overhead of our countermeasure for several post-quantum candidates and their different security levels at first, second, and third order, including UOV, MAYO, SNOVA, QR-UOV, and MQ-Sign. Notably, the operational cost of first-, second-, and third-order masked GE is 2.3$\times$ higher, and the randomness cost is 1.2$\times$ higher, in MAYO compared to UOV for security levels III and V. In contrast, these costs are similar in UOV and MAYO for one version of level I. We also show detailed performance results for masked GE implementations for all three security versions of UOV on the Arm Cortex-M4 and compare them with unmasked results. Our first-order implementations targeting UOV parameters have overheads of factors 6.5$\times$, 5.9$\times$, and 5.7$\times$ compared to the unprotected implementation for NIST security levels I, III, and V.
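For context, the unmasked computation being protected looks as follows; a minimal sketch over a small prime field (the schemes above work over their own fields, and the masked algorithm in the paper additionally shares every intermediate value):

# Plain (unmasked) Gaussian elimination to row-echelon form with back
# substitution over GF(Q) for a hypothetical small prime Q. Illustrative
# baseline only; it assumes the system has a unique solution.
Q = 31

def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # find a usable pivot row and swap it into place
        piv = next(r for r in range(col, n) if M[r][col] % Q != 0)
        M[col], M[piv] = M[piv], M[col]
        # make the leading (pivot) element one via Fermat inversion
        inv = pow(M[col][col] % Q, Q - 2, Q)
        M[col] = [(v * inv) % Q for v in M[col]]
        # eliminate the column below the pivot
        for r in range(col + 1, n):
            f = M[r][col]
            M[r] = [(vr - f * vc) % Q for vr, vc in zip(M[r], M[col])]
    # back substitution on the row-echelon form
    x = [0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) % Q
    return x

print(solve([[2, 1, 1], [1, 3, 2], [1, 0, 0]], [4, 5, 6]))  # solves A x = b mod Q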
Ryo Ohashi, Hiroshi Onuki
ePrint Report
In 2020, Castryck, Decru, and Smith constructed a hash function using the (2,2)-isogeny graph of superspecial principally polarized abelian surfaces. In their construction, the initial surface was chosen from vertices very "close" to the square of a supersingular elliptic curve with a known endomorphism ring. In this paper, we introduce an algorithm for detecting a collision on their hash function. Under some heuristic assumptions, the time and space complexities of our algorithm are both estimated to be $\widetilde{O}(p^{3/10})$, which is smaller than the complexity $\widetilde{O}(p^{3/2})$ that the authors claimed to be necessary to detect a collision, where $p$ is the characteristic of the base field. In the particular case where $p$ has a special form, both the time and space complexities of our algorithm are polynomial in $\log{p}$. We implemented our algorithm in Magma, and succeeded in detecting a collision in 17 hours (using 64 parallel computations) under a parameter setting which the authors had claimed to be 384-bit secure.
Seungwoo Kim, Semin Han, Seongho Park, Kyeongtae Lee, Jihye Kim, Hyunok Oh
ePrint Report
In this paper, we introduce zkMarket, a privacy-preserving fair trade system on the blockchain. zkMarket addresses the challenges of transaction privacy and computational efficiency. To ensure transaction privacy, zkMarket is built upon an anonymous transfer protocol. By combining encryption with zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), the seller and the buyer can trade fairly. Furthermore, by encrypting the decryption key, we make the data registration process more concise and improve the seller's proving time by leveraging commit-and-prove SNARKs (CP-SNARKs) and our novel pseudorandom generator, the matrix-formed PRG (MatPRG).

Our evaluation demonstrates that zkMarket significantly reduces the computational overhead associated with traditional blockchain solutions while maintaining robust security and privacy. The seller can register 1MB of data in 3.2 seconds, while the buyer can generate the trade transaction in 0.2 seconds, and the seller can finalize the trade in 0.4 seconds.
Jingyu Li, Zhicong Huang, Min Zhang, Jian Liu, Cheng Hong, Tao Wei, Wenguang Chen
ePrint Report
Approximate nearest neighbor search (ANNS), also known as vector search, is an important building block for various applications, such as databases, biometrics, and machine learning. In this work, we are interested in the private ANNS problem, where the client wants to learn (and can only learn) the ANNS results without revealing the query to the server. Previous private ANNS works either suffer from high communication cost (Chen et al., USENIX Security 2020) or rely on the weaker security assumption of two non-colluding servers (Servan-Schreiber et al., SP 2022). We present Panther, an efficient private ANNS framework in the single-server setting. Panther achieves its high performance via several novel co-designs of private information retrieval (PIR), secret sharing, garbled circuits, and homomorphic encryption. We conducted extensive experiments with Panther on four public datasets; the results show that Panther can answer an ANNS query over 10 million points in 23 seconds with 318 MB of communication. This is more than 6× faster and 18× more compact than Chen et al.
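The plaintext functionality that such a protocol protects is simply (approximate) nearest-neighbor search; the exact toy baseline below is only a reference point, not Panther's protocol, which computes the result without revealing the query to the server.

# Exact k-nearest-neighbor search in the clear, as a toy reference for the
# functionality; real ANNS engines use approximate index structures.
import numpy as np

def knn_ids(db: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    dists = np.linalg.norm(db - query, axis=1)  # Euclidean distances
    return np.argsort(dists)[:k]                # indices of the k closest vectors

rng = np.random.default_rng(0)
db = rng.random((1000, 64), dtype=np.float32)
q = rng.random(64, dtype=np.float32)
print(knn_ids(db, q, k=5))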
Michele Ciampi, Xiangyu Liu, Ioannis Tzannetos, Vassilis Zikas
ePrint Report
Adaptor signatures (AS) extend the functionality of traditional digital signatures by enabling the generation of a pre-signature tied to an instance of a hard NP relation, which can later be turned (adapted) into a full signature upon revealing a corresponding witness. The recent work by Liu et al. [ASIACRYPT 2024] devised a generic AS scheme for any NP relation---which we here refer to as a universal adaptor signature (UAS) scheme---from any one-way function. However, this generic construction depends on the Karp reduction to the Hamiltonian cycle problem, which adds significant overhead and hinders practical applicability.

In this work, we present an alternative approach to constructing universal adaptor signature schemes, relying on the multi-party computation in the head (MPCitH) paradigm. This overcomes the reliance on the costly Karp reduction, while inheriting the core property of MPCitH---which makes it an invaluable tool in efficient cryptographic protocols---namely, that the construction is black-box with respect to the underlying cryptographic primitive (while remaining non-black-box in the relation being proven). Our framework simplifies the design of UAS and enhances its applicability across a wide range of decentralized applications, such as blockchain and privacy-preserving systems. Our results demonstrate that MPCitH-based UAS schemes offer strong security guarantees, making them a promising tool in the design of real-world cryptographic protocols.
Sunyeop Kim, Insung Kim, Dongjae Lee, Deukjo Hong, Jaechul Sung, Seokhie Hong
ePrint Report
ARADI is a low-latency block cipher proposed by the NSA (National Security Agency) in 2024 for memory encryption. Bellini et al. experimentally demonstrated that in specific cubes of 5-round ARADI, the cube sums are byte-wise equal, for example, to 0x9d9dc5c5. This paper modifies the MILP-based division property algorithm to prove this and observes that the rotation amount of 8 in ARADI causes cancellations of monomials, allowing us to extend the byte-wise equal property up to 8 rounds. As a result, we obtained distinguishers for rounds 6 and 7 with lower data complexities of $2^{77}$ and $2^{112}$, respectively, compared to previous methods.
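For readers unfamiliar with the cube sums mentioned above, the experiment is simply an XOR of the cipher's output over all assignments of a chosen set of input bits; in the toy sketch below, a hypothetical 16-bit mixing function stands in for ARADI.

# Toy cube-sum computation. The cube sum of f over bit positions `cube_bits`
# is the XOR of f(x) over all assignments of those bits, with the remaining
# bits fixed. The function toy_f below is hypothetical and is NOT ARADI.
from itertools import product

def cube_sum(f, cube_bits, fixed_x=0):
    mask = sum(1 << b for b in cube_bits)
    base = fixed_x & ~mask  # clear the cube positions in the fixed part
    s = 0
    for assignment in product([0, 1], repeat=len(cube_bits)):
        x = base
        for bit, val in zip(cube_bits, assignment):
            if val:
                x |= 1 << bit
        s ^= f(x)
    return s

def toy_f(x):
    # an arbitrary 16-bit mixing function, for demonstration only
    x &= 0xFFFF
    x ^= ((x << 3) | (x >> 13)) & 0xFFFF
    x = (x + 0x9D9D) & 0xFFFF
    return x ^ ((x >> 5) & 0x7FF)

print(hex(cube_sum(toy_f, cube_bits=[0, 1, 2, 3, 4])))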

30 October 2024

Laasya Bangalore, Albert Cheu, Muthuramakrishnan Venkitasubramaniam
ePrint Report
Distributed mean estimation (DME) is a fundamental and important task as it serves as a subroutine in convex optimization, aggregate statistics, and, more generally, federated learning. The inputs for DME are provided by clients (such as mobile devices), and these inputs often contain sensitive information. Thus, protecting privacy and mitigating the influence of malicious adversaries are critical concerns in DME. A surge of recent works has focused on building multiparty computation (MPC)-based protocols tailored for the task of secure aggregation. However, MPC fails to directly address two issues: (i) the potential manipulation of inputs by adversaries, and (ii) the leakage of information from the underlying function. This paper presents a novel approach that addresses both of these issues. We propose a secure aggregation protocol with a robustness guarantee, effectively protecting the system from "faulty" inputs introduced by malicious clients. Our protocol further ensures differential privacy, so that the underlying function will not leak significant information about individuals. Notably, this work represents the first comprehensive effort to combine robustness and differential privacy guarantees in the context of DME. In particular, we capture the security of the protocol via a notion of "usefulness" combined with differential privacy inspired by the work of Mironov et al. (CRYPTO 2009), and formally analyze this security guarantee for our protocol.
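As a point of reference for the privacy goal, a minimal central-DP mean estimation baseline (clip, average, add calibrated Gaussian noise) is sketched below; the paper's protocol additionally runs the aggregation inside MPC and adds a robustness guarantee, neither of which this sketch captures.

# Central-DP mean estimation baseline (Gaussian mechanism); illustrative only.
import numpy as np

def dp_mean(values, clip=1.0, epsilon=1.0, delta=1e-5, rng=None):
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(values, dtype=float), -clip, clip)
    n = len(x)
    # replacing one clipped value changes the mean by at most 2*clip/n
    sensitivity = 2.0 * clip / n
    # standard calibration for (epsilon, delta)-DP, valid for epsilon < 1
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return x.mean() + rng.normal(0.0, sigma)

clients = np.random.default_rng(1).uniform(-1, 1, size=10_000)
print(dp_mean(clients, clip=1.0, epsilon=0.5))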
Daniel Cabarcas, Peigen Li, Javier Verbel, Ricardo Villanueva-Polanco
ePrint Report
SNOVA is a post-quantum digital signature scheme based on multivariate polynomials. It is a first-round candidate in an ongoing NIST standardization process for post-quantum signatures, where it stands out for its efficiency and compactness. Since its initial submission, there have been several improvements to its security analysis, covering both key-recovery and forgery attacks. All of these works reduce to solving a structured system of quadratic polynomials, which we refer to as the SNOVA system.

In this work, we propose a polynomial solving algorithm tailored for SNOVA systems, which exploits the stability of the system under the action of a commutative group of matrices. This new algorithm reduces the complexity of solving SNOVA systems compared to generic ones. We show how to adapt the reconciliation and direct attacks in order to benefit from the new algorithm. Consequently, we improve the reconciliation attack for all SNOVA parameter sets, with speedup factors ranging between $2^3$ and $2^{22}$. Our algorithm also reduces the complexity of the direct attack for several parameter sets. It is particularly effective for the parameters that give the best performance to SNOVA ($l=4$), which were not taken below NIST's security threshold by previous attacks. Our attack brings these parameter sets ($l=4$) below that threshold, with speedup factors between $2^{33}$ and $2^{52}$ over the state of the art.
Phillip Gajland, Jonas Janneck, Eike Kiltz
ePrint Report
Falcon is a winner of NIST's six-year post-quantum cryptography standardisation competition. Based on the celebrated full-domain-hash framework of Gentry, Peikert and Vaikuntanathan (GPV) (STOC'08), Falcon leverages NTRU lattices to achieve the most compact signatures among lattice-based schemes.

Its security hinges on a Rényi divergence-based argument for Gaussian samplers, a core element of the scheme. However, the GPV proof, which uses statistical distance to argue closeness of distributions, fails when applied naively to Falcon due to parameter choices resulting in statistical distances as large as $2^{-34}$. Additional implementation-driven deviations from the GPV framework further invalidate the original proof, leaving Falcon without a security proof despite its selection for standardisation.

This work takes a closer look at Falcon and demonstrates that introducing a few minor, conservative modifications allows for the first formal proof of the scheme in the random oracle model. At the heart of our analysis lies an adaptation of the GPV framework to work with the Rényi divergence, along with an optimised method for parameter selection under this measure. Furthermore, we obtain a provable version of the GPV framework over NTRU rings. Both these tools may be of independent interest.
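For reference, with the multiplicative normalization common in lattice-based cryptography, the Rényi divergence of order $\alpha \in (1,\infty)$ between discrete distributions $P$ and $Q$ with $\mathrm{Supp}(P) \subseteq \mathrm{Supp}(Q)$ is

$R_\alpha(P \,\|\, Q) = \Big(\sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^{\alpha}}{Q(x)^{\alpha-1}}\Big)^{\frac{1}{\alpha-1}},$

and security arguments of this kind typically rely on probability preservation, $Q(E) \ge P(E)^{\alpha/(\alpha-1)} / R_\alpha(P \,\|\, Q)$ for any event $E \subseteq \mathrm{Supp}(Q)$.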

Unfortunately, our analysis shows that, despite our modifications to Falcon-512 and Falcon-1024, we do not achieve strong unforgeability for either scheme. For plain unforgeability, we are able to show that our modifications to Falcon-512 barely satisfy the claimed 120-bit security target, and for Falcon-1024 we confirm the claimed security level. As such, we recommend revisiting Falcon and its parameters.
Hanzhi Liu, Jingyu Ke, Hongbo Wen, Robin Linus, Lukas George, Manish Bista, Hakan Karakuş, Domo, Junrui Liu, Yanju Chen, Yu Feng
ePrint Report
Bitcoin, while being the most prominent blockchain with the largest market capitalization, suffers from scalability and throughput limitations that impede the development of ecosystem projects like Bitcoin Decentralized Finance (BTCFi). Recent advancements in BitVM propose a promising Layer 2 (L2) solution to enhance Bitcoin's scalability by enabling complex computations off-chain with on-chain verification. However, Bitcoin's constrained programming environment—characterized by its non-Turing-complete Script language lacking loops and recursion, and strict block size limits—makes developing complex applications labor-intensive, error-prone, and necessitates manual partitioning of scripts. Under this complex programming model, subtle mistakes could lead to irreversible damage in a trustless environment like Bitcoin. Ensuring the correctness and security of such programs becomes paramount.

To address these challenges, we introduce the first formal verification tool for BitVM implementations. Our approach involves designing a register-based, higher-level domain-specific language (DSL) that abstracts away complex stack operations, allowing developers to reason about program correctness more effectively while preserving the semantics of the original Bitcoin Script. We present a formal computational model capturing the semantics of BitVM execution and Bitcoin Script, providing a foundation for rigorous verification. To efficiently handle large programs and complex constraints arising from unrolled computations that simulate loops, we summarize repetitive "loop-style" computations using loop invariant predicates in our DSL. We leverage a counterexample-guided inductive synthesis (CEGIS) procedure to lift low-level Bitcoin Script into our DSL, facilitating efficient verification without sacrificing accuracy. Evaluated on 98 benchmarks from BitVM's SNARK verifier, our tool successfully verifies 94% of cases within seconds, demonstrating its effectiveness in enhancing the security and reliability of BitVM.
Dedy Septono Catur Putranto, Rini Wisnu Wardhani, Jaehan Cho, Howon Kim
ePrint Report
Elliptic Curve Point Multiplication (ECPM) is a key component of the Elliptic Curve Cryptography (ECC) protocol hierarchy. However, the specific estimation of the resources required for this process remains underexplored despite its significance in the cryptanalysis of ECC algorithms, particularly binary ECC over GF($2^n$). Given the extensive use of ECC algorithms in various security protocols and devices, it is essential to conduct this examination to gain valuable insights into their cryptanalysis, specifically in terms of providing precise resource estimations, which serve as a solid basis for further investigation in solving the Elliptic Curve Discrete Logarithm Problem. Expanding on several significant prior works, in this study, which we refer to as ECPM cryptanalysis, we estimate quantum resources, including qubits, gates, and circuit depth, by integrating point addition (PA) and point doubling (PD) into the ECPM scheme, culminating in a Shor's-algorithm-based binary ECC cryptanalysis circuit. Focusing on optimizing depth, we elaborate on and implement the most efficient PD circuit and incorporate optimized Karatsuba multiplication and FLT-based inversion algorithms for the PA and PD operations. Compared to the latest PA-only circuits, our preliminary results showcase significant resource optimization for various ECPM implementations, including single-step ECPM, ECPM with combined or selective PA/PD utilization, and total-step ECPM ($2^n$ PD + 2 PA).
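For orientation only, classical scalar multiplication already decomposes into the PD and PA building blocks discussed above; the toy double-and-add sketch below, over a small prime-field curve rather than the binary curves and quantum circuits of the paper, counts both operation types.

# Toy double-and-add ECPM over y^2 = x^3 + a*x + b mod p, counting point
# doublings (PD) and point additions (PA). Illustrative only: the paper
# targets binary curves over GF(2^n) and quantum circuit implementations.
p, a, b = 23, 1, 1          # small textbook curve parameters
G = (0, 1)                  # a point on the curve: 1^2 = 0^3 + 0 + 1

def inv(x):
    return pow(x, p - 2, p)

def pd(P):                  # point doubling
    x, y = P
    lam = (3 * x * x + a) * inv(2 * y) % p
    xr = (lam * lam - 2 * x) % p
    return (xr, (lam * (x - xr) - y) % p)

def pa(P, Q):               # point addition (assumes P != +/-Q)
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * inv(x2 - x1) % p
    xr = (lam * lam - x1 - x2) % p
    return (xr, (lam * (x1 - xr) - y1) % p)

def ecpm(k, P):
    R, doubles, adds = None, 0, 0
    for bit in bin(k)[2:]:  # most-significant bit first
        if R is not None:
            R, doubles = pd(R), doubles + 1
        if bit == '1':
            R, adds = (P if R is None else pa(R, P)), adds + (R is not None)
    return R, doubles, adds

print(ecpm(23, G))          # result point, #PD, #PA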
Masayuki Abe, David Balbás, Dung Bui, Miyako Ohkubo, Zehua Shang, Mehdi Tibouchi
ePrint Report
In many multi-round public-coin interactive proof systems, challenges in different rounds serve different roles, but a formulation that actively utilizes this aspect has not been studied extensively. In this paper, we propose new notions called critical-round special honest verifier zero-knowledge and critical-round special soundness. Our notions are simple, intuitive, easy to apply, and capture several practical multi-round proof protocols including, but not limited to, those from the MPC-in-the-Head paradigm.

We demonstrate the usefulness of these notions with two fundamental applications where three-round protocols are known to be useful, but multi-round ones generally fail. First, we show that critical-round proofs yield trapdoor commitment schemes. This result also enables the instantiation of post-quantum secure adaptor signatures and threshold ring signatures from MPCitH, resolving open questions in (Haque and Scafuro, PKC 2020) and in (Liu et al., ASIACRYPT 2024). Second, we show that critical-round proofs can be securely composed using the Cramer-Schoenmakers-Damgård method. This solves an open question posed by Abe et al. in CRYPTO 2024.

Overall, these results shed new light on the potential of multi-round proofs in both theoretical and practical cryptographic protocol design.
Toi Tomita, Junji Shikata
ePrint Report
In this paper, we propose an efficient identity-based encryption (IBE) scheme based on the ring learning with errors (RLWE) assumption in the (quantum) random oracle model. Our IBE scheme is (asymptotically) as efficient as the most practical lattice-based IBE scheme proposed by Ducas et al. (ASIACRYPT 2014). Furthermore, our scheme is adaptively and anonymously secure, and its security reduction is tight. We design our IBE scheme by instantiating the framework of Gentry et al. (STOC 2008) using the compact preimage sampling proposed by Yu et al. (CRYPTO 2023). The tightness of our IBE scheme is obtained by combining the proof technique of Katsumata et al. (ASIACRYPT 2018) with the results for ideal lattices developed by Mera et al. (PKC 2022).
Mi-Ying (Miryam) Huang, Baiyu Li, Xinyu Mao, Jiapeng Zhang
ePrint Report
We present an efficient Publicly Verifiable Fully Homomorphic Encryption scheme that, along with being able to evaluate arbitrary boolean circuits over ciphertexts, also generates a succinct proof of correct homomorphic computation. Our scheme is based on FHEW, proposed by Ducas and Micciancio (Eurocrypt'15), and we incorporate the GINX homomorphic accumulator (Eurocrypt'16) for improved bootstrapping efficiency. In order to generate the proof efficiently, we generalize the widely used Rank-1 Constraint System (R1CS) to the ring setting and obtain Ring R1CS, which natively expresses homomorphic computation in FHEW. In particular, we develop techniques to efficiently express in our Ring R1CS the "non-arithmetic" operations, such as gadget decomposition and modulus switching, used in the FHEW construction. We further construct a SNARG for Ring R1CS instances by translating the Ring R1CS instance into a sum-check protocol over polynomials, and then compiling it into a succinct non-interactive proof by incorporating the lattice-based polynomial commitment scheme of Cini, Malavolta, Nguyen, and Wee (Crypto'24). Putting these together, our Publicly Verifiable FHE scheme relies on standard hardness assumptions about lattice problems and generates a succinct proof of the homomorphic computation of a circuit $C$ in time $O(|C|^2\cdot \mathsf{poly}(\lambda))$ and of size $O(\log^2{|C|}\cdot \mathsf{poly}(\lambda))$. In addition, our scheme achieves the recently proposed IND-SA (indistinguishability under semi-active attack) security of Walter (ePrint 2024/1207), which exactly captures client data privacy when a homomorphic computation can be verified.
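For readers unfamiliar with R1CS (which the paper lifts to the ring setting), an assignment $z$ satisfies an R1CS instance $(A, B, C)$ when $(Az) \circ (Bz) = Cz$ entrywise; a minimal field-version sketch, not the Ring R1CS of the paper:

# Minimal R1CS satisfiability check over a prime field; the paper's Ring R1CS
# works over rings native to FHEW, which this sketch does not attempt.
import numpy as np

Q = 2_147_483_647  # an arbitrary prime modulus for the sketch

def r1cs_satisfied(A, B, C, z):
    A, B, C, z = (np.asarray(m, dtype=np.int64) % Q for m in (A, B, C, z))
    left = (A @ z % Q) * (B @ z % Q) % Q  # entrywise (Hadamard) product
    return np.array_equal(left, C @ z % Q)

# One constraint encoding x * x = y over the witness z = (1, x, y):
A = [[0, 1, 0]]  # selects x
B = [[0, 1, 0]]  # selects x
C = [[0, 0, 1]]  # selects y
print(r1cs_satisfied(A, B, C, [1, 5, 25]))  # True
print(r1cs_satisfied(A, B, C, [1, 5, 26]))  # False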
Gorjan Alagic, Dana Dachman-Soled, Manasi Shingane, Patrick Struck
ePrint Report
In their seminal work, Gentry and Wichs (STOC'11) established an impossibility result for the task of constructing an adaptively-sound SNARG via a black-box reduction from a falsifiable assumption. An exciting set of recent SNARG constructions demonstrated that, if one adopts a weaker but still quite meaningful notion of adaptive soundness, then the impossibility no longer holds (Waters-Wu, Waters-Zhandry, Mathialagan-Peters-Vaikuntanathan ePrint'24). These fascinating new results raise an intriguing possibility: is there a way to remove this slight weakening of adaptive soundness, thereby completely circumventing the Gentry-Wichs impossibility? A natural route to closing this gap would be to use a quantum black-box reduction, i.e., a reduction that can query the SNARG adversary on superpositions of inputs. This would take advantage of the fact that Gentry-Wichs only consider classical reductions. In this work, we show that this approach cannot succeed. Specifically, we extend the Gentry-Wichs impossibility result to quantum black-box reductions, and thereby establish an important limit on the power of such reductions.
Jingwei Chen, Linhan Yang, Wenyuan Wu, Yang Liu, Yong Feng
ePrint Report
Homomorphically encrypted matrix operations are extensively used in various privacy-preserving applications. Consequently, reducing the cost of encrypted matrix operations is a crucial topic on which numerous studies have been conducted. In this paper, we introduce a novel matrix encoding method, named bicyclic encoding, under which we propose two new algorithms, BMM-I and BMM-II, for encrypted matrix multiplication. BMM-II outperforms the state-of-the-art algorithms in theory, while BMM-I, combined with the segmented strategy, performs well in practice, particularly for matrices with high dimensions. Another noteworthy advantage of bicyclic encoding is that it allows for transposing an encrypted matrix entirely for free. A comprehensive experimental study based on our proof-of-concept implementation shows that each algorithm introduced in this paper has specific scenarios outperforming existing algorithms, achieving speedups ranging from 2x to 38x.