06 March 2025
University of Edinburgh, UK
Applications are invited for a Lecturer (Assistant Professor) position in Cyber Security & Privacy to join the School of Informatics at the University of Edinburgh. The post is based within the School’s Cyber Security, Privacy and Trust group, which is a broad group of researchers whose expertise ranges from cryptography and formal verification to human factors and social aspects.
This position is part of a broader collaborative research initiative on privacy-enhanced secure computations at scale, in partnership with Input-Output Global (IOG). Applications are invited from candidates with research expertise in all areas of Cyber Security & Privacy, with special consideration given to those specializing in blockchain and distributed ledgers, multi-party computation, post-quantum cryptography, and data privacy.
We are seeking current and future leaders in the field. The successful candidates will have (or be near to completing) a PhD, an outstanding track record of research, evidence of a growing reputation, a committed vision and research agenda, and the enthusiasm and ability to undertake original research, manage a research group, and engage with teaching and academic supervision.
The School of Informatics at the University of Edinburgh is one of the largest in Europe, with more than 120 academic staff and a total of over 500 post-doctoral researchers, research students and support staff. Informatics at Edinburgh was rated highest on Research Power in the most recent Research Excellence Framework. The School has strong links with industry, with dedicated business incubator space and well-established enterprise and business development programmes. The University of Edinburgh has recently established the Bayes Centre for Data Science and Artificial Intelligence, which provides a locus for fruitful multi-disciplinary work and houses a range of co-located companies. The School holds a Silver Athena SWAN award in recognition of our commitment to advancing the representation of women in science, mathematics, engineering and technology. We are also Stonewall Scotland Diversity Champions, actively promoting LGBT equality.
Closing date for applications:
Contact: Prof. Aggelos Kiayias
More information: https://elxw.fa.em3.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1001/job/12182/?utm_medium=jobshare&utm_source=External+Job+Share
05 March 2025
Nitin Kumar Sharma, Sabyasachi Dey, Santanu Sarkar, Subhamoy Maitra
In this paper, we analyze the subtle issues of complexity estimates related to state-of-the-art cryptanalytic efforts on ChaCha. In this regard, we demonstrate that the currently best-known cryptanalytic result on $7$-round ChaCha with time $2^{189.7}$ and data $2^{102.63}$ [Xu et al., ToSC 2024] can be estimated as $2^{178.12}$ for time and $2^{101.09}$ for data complexity. We improve the best-known result for the $7.25$-round version by obtaining an improved set of Probabilistic Neutral Bits (PNBs) and applying our revised estimation. Our result, with time complexity $2^{212.43}$ and data complexity $2^{100.56}$, improves the result of Xu et al., who achieved time and data complexities of $2^{223.9}$ and $2^{100.80}$, respectively. For both the $7$- and $7.25$-round versions, we show an improvement of the order of $2^{11}$ in the time complexity. For the $7.5$-round version, we improve the result of Dey [IEEE-IT 2024], which reports a time and data complexity of $2^{255.24}$ and $2^{32.64}$, respectively. By applying the formula of the same paper and incorporating additional PNBs, we obtain an improved time and data complexity of $2^{253.23}$ and $2^{34.47}$, respectively. Thus, this paper describes the currently best-known cryptanalytic results against reduced-round ChaCha. Our results do not affect the security claims for the complete algorithm with 20 rounds. We also provide a rebuttal of the work by Wang et al. \cite{wangeprint} and analyze their claim about the error in the ``divide-and-conquer'' approach.
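For context, the sketch below shows the standard ChaCha quarter-round, the ARX building block that such round-reduced attacks analyze; this is textbook ChaCha, not the attack itself, and the round counts above refer to repetitions of column/diagonal rounds built from it.

```python
MASK = 0xFFFFFFFF  # ChaCha operates on 32-bit words

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    # the standard ChaCha quarter-round: add, rotate, xor (ARX)
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d
```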
Marc Fischlin, Aikaterini Mitrokotsa, Jenit Tomy
We explore advanced security notions for threshold signature schemes, focusing on Beyond UnForgeability Features (BUFF), introduced by Cremers et al. (S&P’21) in the non-threshold setting. The BUFF properties protect against attacks based on maliciously chosen keys, e.g., expropriating a message-signature pair under a new public key (called exclusive ownership). We first formalize these notions in the threshold setting and examine their relationships. Notably, unlike regular signature schemes, the hierarchy of variants of exclusive ownership notions only holds for threshold schemes if they are also robust.
We then present a generic compiler that transforms any threshold signature scheme to satisfy exclusive ownership, and message-bound signature properties with minimal overhead. Furthermore, we modify the threshold BLS signature scheme to achieve these additional properties without increasing the signature size. Lastly, we identify specific structures in threshold signature schemes where BUFF properties can be naturally extended from the underlying standard signature scheme, and we analyze and prove the security properties in some of the existing threshold schemes.
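As background, here is a minimal Python sketch of the original (non-threshold) BUFF transform of Cremers et al., which the paper's threshold compiler generalizes: binding the public key into the signed digest prevents a signature from being "expropriated" under a different key. The paper's threshold construction differs in its details; this is only the classic single-signer idea.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def buff_sign(sk, msg: bytes):
    pk_bytes = sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    h = hashlib.sha256(pk_bytes + msg).digest()  # bind pk and message together
    return sk.sign(h), h

def buff_verify(pk, msg: bytes, sig_h) -> bool:
    sig, h = sig_h
    pk_bytes = pk.public_bytes(Encoding.Raw, PublicFormat.Raw)
    if h != hashlib.sha256(pk_bytes + msg).digest():
        return False  # signature bound to a different key or message
    pk.verify(sig, h)  # raises InvalidSignature on failure
    return True

sk = Ed25519PrivateKey.generate()
print(buff_verify(sk.public_key(), b"hello", buff_sign(sk, b"hello")))  # True
```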
Keitaro Hashimoto, Shuichi Katsumata, Guillermo Pascual-Perez
The Messaging Layer Security (MLS) protocol has recently been standardized by the IETF. MLS is a scalable secure group messaging protocol expected to run more efficiently than the Signal protocol at scale, while offering a similar level of strong security. Even though MLS has undergone extensive examination by researchers, the majority of the works have focused on confidentiality.
In this work, we focus on the authenticity of the application messages exchanged in MLS. Currently, MLS authenticates every application message with an EdDSA signature, and while this is manageable, the overhead is greatly amplified in the post-quantum setting, as the NIST-recommended Dilithium signature results in a 40x increase in size. We view this as an invitation to explore new authentication modes that can be used instead. We start by taking a systematic view on how application messages are authenticated in MLS and categorize authenticity into four different security notions. We then propose several authentication modes, offering a range of different efficiency and security profiles. For instance, in one of our modes, COSMOS++, we replace signatures with one-time tokens and a MAC tag, offering roughly a 75x savings in the post-quantum communication overhead. While this comes at the cost of weakening security compared to the authentication mode used by MLS, the lower communication overhead seems to make it a worthwhile trade-off with security.
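A back-of-the-envelope check of the quoted factors, using published signature sizes (Ed25519: 64 bytes; ML-DSA-44/Dilithium2: 2420 bytes); the MAC-tag and token sizes are our illustrative assumptions, not the paper's exact accounting.

```python
ED25519_SIG = 64          # bytes, published Ed25519 signature size
DILITHIUM2_SIG = 2420     # bytes, published ML-DSA-44 (Dilithium2) size
MAC_TAG, ONE_TIME_TOKEN = 16, 16  # assumed sizes for the MAC-based mode

print(DILITHIUM2_SIG / ED25519_SIG)                  # ~37.8x, the "40x" above
print(DILITHIUM2_SIG / (MAC_TAG + ONE_TIME_TOKEN))   # ~75.6x, the "75x" above
```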
Lucjan Hanzlik
This note demonstrates that the blind signature scheme based on cryptographic group actions, as proposed in ePrint paper 2025/397, fails to ensure blindness. Specifically, we construct an adversary that achieves a $1/8$ advantage in the blindness experiment. The attack leverages selective abort techniques (also known as selective failure attacks), a well-known strategy in the MPC literature.
Neha Jawalkar, Nishanth Chandran, Divya Gupta, Rahul Sharma, Arkaprava Basu
Secure Two-Party Computation (2PC) enables secure inference with cryptographic guarantees that protect the privacy of the model owner and client. However, it adds significant performance overhead. In this work, we make 2PC-based secure inference efficient while considering important deployment scenarios.
We observe that the hitherto unconsidered latency of fetching keys from storage significantly impacts performance, as does network speed. We design a Linear Secret Sharing (LSS)-based system $LSS^M$ and a Function Secret Sharing (FSS)-based system $FSS^M$ for secure inference, optimized for small key size and communication, respectively. Notably, our highly-optimized and hardware-aware CPU-based $LSS^M$ outperforms prior GPU-based LSS systems by up to $50\times$. We then show that the best choice between $LSS^M$ and $FSS^M$ depends on the deployment scenario.
In fact, under certain deployments, a combination of $LSS^M$ and $FSS^M$ can leverage heterogeneous processing across CPU and GPU. Such protocol-system co-design lets us outperform state-of-the-art secure inference systems by up to $21\times$ (geomean $3.25\times$).
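As a minimal illustration of the LSS side of this design (a generic two-party additive-sharing sketch of ours, not the $LSS^M$ system itself), a public linear layer can be evaluated locally on additive shares with no interaction:

```python
import numpy as np

MOD = 2**32
rng = np.random.default_rng(0)

def share(x):
    # additive secret sharing over Z_{2^32}: x = share0 + share1 (mod 2^32)
    r = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    return r, (x - r) % MOD

W = rng.integers(0, 16, size=(2, 3), dtype=np.uint64)  # public weight matrix
x = np.array([5, 7, 9], dtype=np.uint64)               # client input
x0, x1 = share(x)
y0, y1 = (W @ x0) % MOD, (W @ x1) % MOD  # each server computes locally
assert np.array_equal((y0 + y1) % MOD, (W @ x) % MOD)  # shares reconstruct W.x
```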
Subhranil Dutta, Aikaterini Mitrokotsa, Tapas Pal, Jenit Tomy
This paper presents the concept of a multi-client functional encryption (MC-FE) scheme for attribute-based inner product functions (AB-IP), initially proposed by Abdalla et al. [ASIACRYPT’20], in an unbounded setting. In such a setting, the setup is independent of vector length constraints, allowing secret keys to support functions of arbitrary lengths, and clients can dynamically choose vector lengths during encryption. The functionality outputs the sum of inner products if vector lengths and indices meet a specific relation, and all clients’ attributes satisfy the key’s policy. We propose the following constructions based on the matrix decisional Diffie-Hellman assumption in a natural permissive setting of unboundedness:
– the first multi-client attribute-based unbounded IPFE (MC-AB-UIPFE) scheme secure in the standard model, overcoming previous limitations where clients could only encrypt fixed-length data;
– the first multi-input AB-UIPFE (MI-AB-UIPFE) in the public-key setting, improving upon prior bounded constructions under the same assumption;
– the first dynamic decentralized UIPFE (DD-UIPFE), enhancing the dynamism property of prior works.
Technically, we follow the blueprint of Agrawal et al. [CRYPTO’23] but begin with a new unbounded FE called extended slotted unbounded IPFE. We first construct a single-input AB-UIPFE in the standard model and then extend it to multi-input settings. In a nutshell, our work demonstrates the applicability of function-hiding security of IPFE in realizing variants of multi-input FE capable of encoding unbounded length vectors both at the time of key generation and encryption.
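A plaintext reference of the functionality described above (hedged: this is the ideal computation the scheme emulates, not the construction; attribute values and the policy are our illustrative choices):

```python
def mc_ab_ipfe_functionality(clients, key_vectors, policy):
    # clients: list of (attribute, x_vector); key_vectors: one y per client.
    # The decryptor learns only the sum of inner products, and only when
    # every client's attribute satisfies the key's policy.
    if not all(policy(attr) for attr, _ in clients):
        return None  # policy unsatisfied: nothing is revealed
    return sum(
        sum(xi * yi for xi, yi in zip(x, y))
        for (_, x), y in zip(clients, key_vectors)
        if len(x) == len(y)  # the length/index relation must hold
    )

out = mc_ab_ipfe_functionality(
    [("staff", [1, 2]), ("staff", [3, 4, 5])],  # unbounded: lengths differ
    [[1, 1], [0, 1, 2]],
    policy=lambda a: a == "staff",
)
print(out)  # 3 + 14 = 17
```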
Kyoohyung Han, Seongkwang Kim, Yongha Son
Private computation on common records refers to analyzing data from two databases that contain shared records without revealing personal information. As a basic requirement for private computation, the databases involved must be aligned by a common identification system. However, such common identifiers are rare in real-world scenarios, so multiple quasi-identifiers can be used to identify common records instead. As some quasi-identifiers might be missing or contain typos, it is important to support a fuzzy-record setting. Identifying common records using quasi-identifiers requires manipulating highly sensitive information, which raises privacy concerns.
This work studies the problem of enabling such data analysis on fuzzy records of quasi-identifiers. To this end, we propose ordered threshold-one (OTO) matching, which can be efficiently realized by circuit-based private set intersection (CPSI) protocols and multiparty computation (MPC) techniques. Furthermore, we introduce generic encoding techniques from traditional matching rules to OTO matching. Finally, we achieve a secure and efficient private computation protocol that supports various matching rules that are already widely used.
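Since the OTO rule itself is defined in the paper, the sketch below instead shows, in plaintext Python, the flavor of traditional rule being encoded: link two records if at least one quasi-identifier agrees, tolerating missing fields (field names and the specific rule are our illustrative assumptions, not the paper's definition).

```python
def threshold_one_match(rec_a, rec_b):
    # link records if at least one quasi-identifier pair agrees;
    # missing fields (None) never count as a match
    fields = ("name", "birthdate", "zip")
    return any(
        rec_a.get(f) is not None and rec_a.get(f) == rec_b.get(f)
        for f in fields
    )

a = {"name": "kim", "birthdate": None, "zip": "04524"}
b = {"name": "klm", "birthdate": "1990-01-01", "zip": "04524"}
print(threshold_one_match(a, b))  # True: the zip codes agree despite the typo
```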
We also demonstrate the superiority of our proposal with experimental validation. First, we empirically check that our encoding to OTO matching has little effect on accuracy for the benchmark datasets found in the fuzzy record matching literature. Second, we implement our protocol and achieve significantly faster performance, at the cost of communication overhead, compared to previous privacy-preserving record linkage (PPRL) protocols. In the case of 100K records per dataset, our work shows a 147.58MB communication cost, 10.71s setup time, and 1.97s online time, which is 7.78 times faster than the previous work (50.12 times faster when considering online time only).
Tzu-Hsiang Huang, Wei-Hsiang Hung, Shota Yamada
The evasive learning with errors (evasive LWE) assumption is a new assumption introduced independently by Wee (Eurocrypt 2022) and Tsabary (Crypto 2022) as a significant strengthening of the standard LWE assumption.
While the assumption is known to imply various strong primitives including witness encryption [Wee22,Tsabary22], the assumption in its most general form (i.e., the private-coin variant) is considered quite implausible due to the obfuscation-based attack mentioned in [Wee22]. This obfuscation-based attack was later formalized by Vaikuntanathan, Wee, and Wichs [VWW22].
In this note, we revisit their attack and show that the attack actually does not work by showing a concrete counterexample. We then show that their attack can be made valid with some modifications. Along the way, we also improve the counterexample by making it provable. Specifically, our counterexample is valid assuming the (plain) LWE assumption and the existence of instance-hiding witness encryption, whereas their original counterexample was dependent on the heuristic assumption of the existence of an ideal obfuscation.
Ojaswi Acharya, Suvasree Biswas, Weiqi Feng, Adam O'Neill, Arkady Yerukhimovich
Consider a weak analyst that wishes to outsource data collection and the computation of aggregate statistics over a potentially large population of (also weak) clients to a powerful server. For flexibility and efficiency, we consider public-key and non-interactive protocols, meaning the clients know the analyst's public key but do not share secrets, and each client sends at most one message. Furthermore, the final step should be silent, whereby the analyst simply downloads the (encrypted) result from the server when needed. To capture this setting, we define a new primitive we call Non-Interactive Verifiable Aggregation (NIVA).
We require both privacy and robustness for a NIVA protocol to be deemed secure. Namely, our security notion for NIVA ensures that the clients' data remains private to both the server and the analyst, while also ensuring that malicious clients cannot skew the results by providing faulty data.
We propose a secure NIVA protocol, which we call PEAR (for Private, Efficient, Accurate, Robust), which can validate inputs according to any NP validity rule. PEAR is based on a novel combination of functional encryption for inner-products (Abdalla et al., PKC 2015) and fully-linear probabilistically-checkable proofs (Boneh et al., Crypto 2019). We emphasize that PEAR is non-interactive, public-key, and makes black-box use of the underlying cryptographic primitives. Additionally, we devise substantial optimizations of PEAR for practically-relevant validity rules. Finally, we implement PEAR to show feasibility for such validity rules, conducting a thorough performance evaluation. In particular, we compare PEAR to two more straightforward or "off-the-shelf" NIVA protocols and show performance gains, demonstrating the merit of our new approach. The bottleneck in our protocol comes from the fact that we require the underlying IPFE scheme to be "unrestricted" over a large field. As more efficient such schemes are developed, they can immediately be plugged into PEAR for further gains.
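A plaintext reference of the NIVA functionality (hedged: PEAR realizes this cryptographically via IPFE plus fully-linear PCPs; here the validity rule and aggregation run in the clear, and the one-hot rule is our illustrative example):

```python
def niva_functionality(client_inputs, is_valid):
    accepted = [x for x in client_inputs if is_valid(x)]  # robustness check
    return [sum(col) for col in zip(*accepted)]           # aggregate statistic

# Example validity rule: each input must be a one-hot vector (a histogram vote).
one_hot = lambda v: all(b in (0, 1) for b in v) and sum(v) == 1
print(niva_functionality([[0, 1, 0], [1, 0, 0], [2, 0, 0]], one_hot))
# -> [1, 1, 0]; the malformed vote [2, 0, 0] cannot skew the result
```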
Chaya Ganesh, Sikhar Patranabis, Nitin Singh
We study linear-time prover SNARKs and make the following contributions:
We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic, can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. SamaritanPCS is a drop-in replacement for the popular PST scheme, and improves upon PST in all relevant parameters.
We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has linear-time prover, logarithmic verification and logarithmic proof size. Concretely, its proof size is one of the smallest among other known linear-time prover SNARKs without relying on concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over BLS12-381 curve) has a proof size of 6.7KB.
We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the $\log^2 n$ proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves 3$\times$ smaller argument size at 1 million constraints. We match the argument size of HyperPlonk, which is the smallest linear-time SNARK for the Plonkish constraint system, while achieving slightly better verification complexity.
We believe that our transformation and our techniques for applying lookups based on logarithmic derivatives to the multilinear setting are of wider interest.
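As background for the univariate-to-multilinear transformation above, the sketch below (ours, with toy parameters) shows the object a multilinear PCS must open: the multilinear extension (MLE) of a table $f$ over the Boolean cube, evaluated at a point $r$ by folding one variable at a time.

```python
P = 2**61 - 1  # a Mersenne prime field, chosen only for illustration

def mle_eval(f, r):
    # f has 2^n entries indexed by n-bit strings (most significant bit first);
    # r is a point in F^n; returns the MLE of f evaluated at r
    n = len(r)
    assert len(f) == 1 << n
    vals = [x % P for x in f]
    for ri in r:                       # fold out one variable per round
        half = len(vals) // 2
        vals = [(vals[i] + ri * (vals[half + i] - vals[i])) % P
                for i in range(half)]
    return vals[0]

# at a Boolean point the MLE agrees with the table: prints 1 (= f[0b01])
print(mle_eval([3, 1, 4, 1], [0, 1]))
```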
Ross Evans, Matthew McKague, Douglas Stebila
Cryptographic proofs allow researchers to provide theoretical guarantees on the security that their constructions provide. A proof of security can completely eliminate a class of attacks by potential adversaries. Human fallibility, however, means that even a proof reviewed by experts may still hide flaws or outright errors. Proof assistants are software tools built for the purpose of formally verifying each step in a proof, and as such have the potential to prevent erroneous proofs from being published and insecure constructions from being implemented.
Unfortunately, existing tooling for verifying cryptographic proofs has found limited adoption in the cryptographic community, in part due to concerns with ease of use. We present ProofFrog: a new tool for verifying cryptographic game-hopping proofs. ProofFrog is designed with the average cryptographer in mind, using an imperative syntax similar to C for specifying games and a syntax for proofs that closely models pen-and-paper arguments. As opposed to other proof assistant tools which largely operate by manipulating logical formulae, ProofFrog manipulates abstract syntax trees (ASTs) into a canonical form to establish indistinguishable or equivalent behaviour for pairs of games in a user-provided sequence. We also detail the domain-specific language developed for use with the ProofFrog proof engine, the exact transformations it applies to canonicalize ASTs, and case studies of verified proofs. A tool like ProofFrog that prioritizes ease of use can lower the barrier of entry to using computer-verified proofs and aid in catching insecure constructions before they are made public.
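The following is not ProofFrog's own DSL (whose syntax we do not reproduce here), but a Python illustration of the game-hopping pattern such tools verify: two adjacent games that differ only in swapping a PRF for a lazily sampled random function, so distinguishing them is exactly the PRF advantage.

```python
import hmac, hashlib, os, secrets

KEY = os.urandom(16)
_lazy_table = {}

def game_real(x: bytes) -> bytes:
    # Game 0: the oracle is a PRF (HMAC-SHA256 under a secret key)
    return hmac.new(KEY, x, hashlib.sha256).digest()

def game_ideal(x: bytes) -> bytes:
    # Game 1: the oracle is a truly random function, sampled lazily
    if x not in _lazy_table:
        _lazy_table[x] = secrets.token_bytes(32)
    return _lazy_table[x]

# A game-hopping proof step justifies: |Pr[A wins Game 0] - Pr[A wins Game 1]|
# is at most the PRF advantage against HMAC-SHA256.
```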
William J Buchanan, Hisham Ali
The requirement for privacy-aware machine learning increases as we continue to use PII (Personally Identifiable Information) in model training. To overcome these privacy issues, we can apply Fully Homomorphic Encryption (FHE) to encrypt data before it is fed into a machine learning model. This involves creating a homomorphic encryption key pair, where the associated public key is used to encrypt the input data and the private key decrypts the output. However, homomorphic encryption often incurs a performance hit, and so this paper evaluates the performance overhead of using the SVM machine learning technique with the OpenFHE homomorphic encryption library. The implementation uses Python and the scikit-learn library. The experiments cover a range of parameters, including multiplication depth, scale size, first modulus size, security level, batch size, and ring dimension, along with two different SVM models, SVM-Poly and SVM-Linear. Overall, the results show that the two parameters with the greatest effect on performance are the ring dimension and the modulus size, and that SVM-Poly and SVM-Linear show similar performance levels.
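A hedged sketch of the plaintext half of such a pipeline: train SVM-Linear with scikit-learn and expose its decision function, which is the inner product $w \cdot x + b$ that the homomorphic side must evaluate over encrypted inputs. The OpenFHE calls themselves are omitted, as the paper's exact API usage is not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
score = X[0] @ w + b  # this dot product is what FHE evaluates on ciphertexts
assert abs(score - clf.decision_function(X[:1])[0]) < 1e-6
print("class:", int(score > 0))
```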
Damiano Abram, Giulio Malavolta, Lawrence Roy
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function $f$, a hash of $x$ together with a (small) input encoding allows one to recover $f(x)$. TDHs are a versatile tool and a useful building block for more complex cryptographic protocols.
In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem with noise rate $\epsilon = O(\log^{1+\beta} n / n)$ for $\beta>0$, i.e., in the so-called low-noise regime. The construction achieves a $2^{\Theta(\log^{1-\beta} \lambda)}$ compression factor. As an application, we obtain a private information retrieval (PIR) scheme with communication complexity $L / 2^{\Theta(\log^{1-\beta} L)}$ for a database of size $L$. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than $L$) from any code-based assumption.
Miguel Cueto Noval, Simon-Philipp Merz, Patrick Stählin, Akin Ünal
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than 2 times the square of the dimension of the code.
Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
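A small toy of the Arora-Ge approach that this result strengthens (our illustration with tiny parameters; the paper's contribution is removing the modulus dependence): when every error lies in a fixed set $E$, each LWE sample $(a, b)$ yields the polynomial constraint $\prod_{e \in E}(b - \langle a, s\rangle - e) = 0$, and enough samples let one solve for $s$, here via a Groebner basis over GF($q$).

```python
from sympy import symbols, groebner, prod

q = 97
E = (0, 1)                            # errors restricted to a constant-size set
# samples (a, b) with b = <a, (3, 5)> + e mod q for the toy secret (3, 5)
samples = [((1, 2), (1*3 + 2*5 + 0) % q),
           ((4, 1), (4*3 + 1*5 + 1) % q),
           ((2, 7), (2*3 + 7*5 + 0) % q),
           ((5, 3), (5*3 + 3*5 + 1) % q)]

s = symbols("s0 s1")
polys = [prod(b - sum(ai * si for ai, si in zip(a, s)) - e for e in E)
         for a, b in samples]
G = groebner(polys, *s, modulus=q)
print(G)  # with enough samples, the basis pins down s0 = 3, s1 = 5
```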
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Mohsin Belam
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy and strong diffusion while maintaining efficiency. It employs advanced cryptographic transformations to ensure robust security against modern cryptanalysis techniques. Entropy tests demonstrate its ability to generate highly randomized ciphertext, surpassing industry standards. Avalanche effect analysis confirms optimal diffusion, achieving an average bit change of 50.18% in large datasets. Key sensitivity tests reveal a 50.54% ciphertext difference for minimal key variations, ensuring strong resistance to differential cryptanalysis. With fast encryption and decryption speeds, Deimos Cipher offers a balanced approach between security and performance, making it suitable for secure communication and data protection. This paper presents the algorithm's design, security analysis, and benchmarking against established cryptographic standards.
04 March 2025
Mohammad Hashemi, Shahin Tajik, Fatemeh Ganji
Shared computation architectures assembled from heterogeneous chiplets introduce new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., the time complexity of garbled AES is 0.0226ms) in order to guarantee the security and privacy of the computation on chiplets.
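To illustrate the garbling/evaluating split, here is a toy single-gate sketch of Yao-style garbled circuits (our simplification: hash-based encryption, no point-and-permute or other standard optimizations, and of course no chiplet hardware):

```python
import os, hashlib

def H(k0, k1):
    return hashlib.sha256(k0 + k1).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

wa = {0: os.urandom(16), 1: os.urandom(16)}  # labels for input wire a
wb = {0: os.urandom(16), 1: os.urandom(16)}  # labels for input wire b
wc = {0: os.urandom(16), 1: os.urandom(16)}  # labels for output wire c

# garbler's side: one encrypted row per input combination of an AND gate
table = [xor(H(wa[x], wb[y]), wc[x & y]) for x in (0, 1) for y in (0, 1)]

def evaluate(la, lb):
    # evaluator's side: holds one label per input wire, tries each row;
    # only the matching row decrypts to a valid output label (toy check)
    for row in table:
        cand = xor(H(la, lb), row)
        if cand in (wc[0], wc[1]):
            return cand

print(evaluate(wa[1], wb[1]) == wc[1])  # True: AND(1, 1) = 1
```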
Rishab Goyal, Saikumar Yadugiri
Decentralization is a great enabler for the adoption of modern cryptography in real-world systems. The widespread adoption of blockchains and of secure multi-party computation protocols is clear evidence of the dramatic rise in deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties, while shielding against stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are:
1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or $\log n$-party key exchange, or Random Oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities.
2. Under the hardness of learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities. We allow an adversary to corrupt any $k$ out of $n$ local authorities as long as ${{n}\choose{k}}$ = poly$(\lambda)$. Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions.
3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is $\omega(\log \lambda)$. This proves optimality of our second result for a wide range of parameters.
4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get best of both worlds (fully collusion resistant MA-ABE, and bounded collusion resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE for $\mathsf{NC}^1 \circ \mathsf{P}/\mathsf{Poly}$ from standard pairing-based assumptions, and for $\mathsf{DNF} \circ \mathsf{P}/\mathsf{Poly}$ from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption, and combining with known results, we also get MA-ABFE for $\mathsf{P}/\mathsf{Poly} \circ \mathsf{P}/\mathsf{Poly}$ from evasive LWE.
Nathalie Lang, Stefan Lucks, Bart Mennink, Suprita Talnikar
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. Despite comprehensive research on Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption or decryption queries are still classical. In this so-called Q1 model, Ascon achieves security up to approximately $\min\{2^{c/3},2^{k/2}\}$ evaluations, where $c$ is the capacity, $k$ the key size, and the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode.
Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
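As a quick numeric illustration, plugging in the standard Ascon-128 parameters (capacity $c = 256$ bits, key size $k = 128$ bits, our assumed example) shows which term dominates the bound:

```python
c, k = 256, 128                  # Ascon-128: capacity and key size in bits
bound_bits = min(c / 3, k / 2)   # exponent in min{2^{c/3}, 2^{k/2}}
print(bound_bits)                # 64.0: security up to ~2^64, set by 2^{k/2}
```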
Théophile Wallez, Jonathan Protzenko, Karthikeyan Bhargavan
The Messaging Layer Security (MLS) protocol standard proposes a novel tree-based protocol that enables efficient end-to-end encrypted messaging over large groups with thousands of members. Its functionality can be divided into three components: TreeSync for authenticating and synchronizing group state, TreeKEM for the core group key agreement, and TreeDEM for group message encryption. While previous works have analyzed the security of abstract models of TreeKEM, they do not account for the precise low-level details of the protocol standard. This work presents the first machine-checked security proof for TreeKEM. Our proof is in the symbolic Dolev-Yao model and applies to a bit-level precise, executable, interoperable specification of the protocol. Furthermore, our security theorem for TreeKEM composes naturally with a previous result for TreeSync to provide a strong modular security guarantee for the published MLS standard.
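A toy sketch of the tree-based idea at TreeKEM's core (our simplification, not the standard's key schedule): secrets sit at the leaves, each node's secret is derived from the secret below it, and the root yields the group secret, so an update only re-derives the secrets along one leaf-to-root path.

```python
import hashlib

def derive(secret: bytes) -> bytes:
    # toy KDF step for the next path secret up the tree
    return hashlib.sha256(b"node" + secret).digest()

def path_to_root(leaf_secret: bytes, depth: int) -> bytes:
    s = leaf_secret
    for _ in range(depth):
        s = derive(s)   # path secret for the next node up
    return s            # the root/group secret

group_key = path_to_root(b"alice-leaf-secret", depth=3)
# in TreeKEM, an updating member also encrypts each fresh path secret to
# the subtree that needs it, so all members recompute the same root
```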