International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

  • via email
  • via RSS feed

12 December 2025

Anisha Dutta, Sayantan Chakraborty, Chandan Goswami, Avishek Adhikari
ePrint Report
The rapid growth of quantum technologies highlights the need for scalable models of computation that go beyond qubits and exploit the richer structure of Qudits. This paper introduces a novel and efficient approach for defining universal gate sets specifically designed for higher-dimensional Qudit systems (N ≥ 2), addressing the limitations of traditional qubit-based approaches. We present a systematic methodology for constructing fundamental Qudit gates through the inherent structure of Qudit operators, providing a robust theoretical foundation for universal quantum computation with Qudits. Our rigorously proven universal and minimal gate set enables more efficient quantum circuit design. We demonstrate the construction of multidimensional extensions of controlled operations from these fundamental elements. To facilitate practical application, we provide a Python-based algorithm for decomposing arbitrary multi-Qudit operations, accompanied by detailed time- and space-complexity analyses. The framework is further validated through end-to-end implementations of Grover’s algorithm and QKD, comparing traditional gate constructions with circuits entirely synthesized from the proposed universal gate set. These validations demonstrate not only the functional equivalence of the decomposed circuits, but also their direct relevance to the advancement of cryptographic protocols, paving the way for more efficient and secure Qudit-based quantum cryptography.
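The fundamental qudit operators such a construction starts from can be sketched in a few lines. The following shows the standard generalized Pauli (shift and clock) operators for dimension d; this is the textbook definition, not necessarily the paper's specific gate set.

```python
import numpy as np

def shift_gate(d):
    """Generalized Pauli-X (shift): X|j> = |j+1 mod d>."""
    X = np.zeros((d, d), dtype=complex)
    for j in range(d):
        X[(j + 1) % d, j] = 1
    return X

def clock_gate(d):
    """Generalized Pauli-Z (clock): Z|j> = w^j |j>, with w = exp(2*pi*i/d)."""
    w = np.exp(2j * np.pi / d)
    return np.diag([w**j for j in range(d)])

d = 3
X, Z = shift_gate(d), clock_gate(d)
w = np.exp(2j * np.pi / d)
# Weyl commutation relation: Z X = w X Z
assert np.allclose(Z @ X, w * X @ Z)
```

For d = 2 these reduce to the familiar qubit Pauli X and Z matrices.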
Jonas Hofmann, Philipp-Florens Lehwalder, Shahriar Ebrahimi, Parisa Hassanizadeh, Sebastian Faust
ePrint Report
Remote attestation is a fundamental security mechanism for assessing the integrity of remote devices. In practice, widespread adoption of attestation schemes is hindered by a lack of public verifiability and the requirement for interaction in existing protocols. A recent work by Ebrahimi et al. (NDSS'24) constructs publicly verifiable, non-interactive remote attestation, disregarding another important requirement for attesting sensitive systems: privacy protection. Similar needs arise in IoT swarms, where many devices, potentially processing sensitive data, should produce a single attestation.

In this paper, we take on both challenges. We present PIRANHAS, a publicly verifiable, asynchronous, and anonymous attestation scheme for individual devices and swarms. We leverage zk-SNARKs to transform any classical, symmetric remote attestation scheme into a non-interactive, publicly verifiable, and anonymous one. Verifiers only ascertain the validity of the attestation, without learning any identifying information about the involved devices.

For IoT swarms, PIRANHAS aggregates attestation proofs for the entire swarm using recursive zk-SNARKs. Our system supports arbitrary network topologies and allows nodes to dynamically join and leave the network. We provide formal security proofs for the single-device and swarm settings, showing that our construction meets the desired security guarantees. Further, we provide an open-source implementation of our scheme using the Noir and Plonky2 frameworks, achieving an aggregation runtime of just 356 ms.
Yuanmi Chen, Zhao Chen, Tingting Guo, Chao Sun, Weiqiang Wen, Yu Yu
ePrint Report
We incorporate a meet-in-the-middle strategy into the enumeration algorithm, enabling a tunable time-memory trade-off. The algorithm can achieve a minimum asymptotic time complexity of $n^{n/(4e) + o(n)}$, which, in turn, demands memory of the same order. This represents a square-root improvement over the state-of-the-art enumeration algorithms. More generally, our approach attains a time complexity of $n^{cn/(2e) + o(n)}$ with memory usage $n^{(1-c)n/(2e) + o(n)}$, where $c$ is any constant satisfying $\frac{1}{2} \leq c < 1$.

Our approach decouples the head and tail blocks of the lattice basis. For a properly selected parameter, each enumeration space becomes asymptotically the square root of the original search space. Each tail vector is then extended to the head block space to find its closest vectors using an efficient neighboring search algorithm. Among all pairs of neighboring vectors that we iterate through, the shortest difference vector is then the solution to the Shortest Vector Problem (SVP).

Apart from the exact version of the algorithm which is of theoretical interest, we also propose heuristic strategies to improve the practical efficiency. First, we show the adaptation of our algorithm to pruned enumeration. Then we show that with a particularly chosen backbone lattice (rescaled~\(\mathbb{Z}^n\)), we are able to accelerate the neighboring search process to an extremely efficient degree. Finally, we optimize parameters and give a practical cost estimation to show how much acceleration we could bring using this new algorithm.
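The generic meet-in-the-middle idea (enumerate two halves of roughly square-root size, then join them through a table) can be illustrated on a toy subset-sum search. This is an analogy for the time-memory trade-off only, not the paper's lattice algorithm, which joins head and tail blocks via a neighboring search.

```python
from itertools import product

def mitm_subset_sum(weights, target):
    """Toy meet-in-the-middle: split the set in two, enumerate each half
    (roughly sqrt of the full 2^n space), and join via a hash table."""
    n = len(weights)
    left, right = weights[: n // 2], weights[n // 2 :]

    def sums(ws):
        table = {}
        for mask in product([0, 1], repeat=len(ws)):
            s = sum(w for w, b in zip(ws, mask) if b)
            table.setdefault(s, mask)
        return table

    lt = sums(left)             # memory: ~2^(n/2) entries
    for s, mask in sums(right).items():   # time: ~2^(n/2) lookups
        if target - s in lt:
            return lt[target - s] + mask  # concatenated selection mask
    return None

sol = mitm_subset_sum([3, 5, 8, 13, 21, 34], 42)
```

Time and memory are both about the square root of the naive 2^n search, mirroring the c = 1/2 point of the trade-off curve above.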
Clément Hoffmann, Pierrick Méaux, Charles Momin, Yann Rotella, François-Xavier Standaert, Balazs Udvarhelyi
ePrint Report
Fresh re-keying is a countermeasure against side-channel analysis where an ephemeral key is derived from a long-term key using a public random value. Popular instances of such schemes rely on key-homomorphic primitives, so that the re-keying process is easy to mask and the rest of the (e.g., block cipher) computations can run with cheaper countermeasures. The main requirement for these schemes to be secure is that the leakages of the ephemeral keys do not allow recovering the long-term key. The Learning with Physical Rounding (LWPR) problem formalizes this security in a practically-relevant model where the adversary can observe noise-free leakages. It can be viewed as a physical version of the Learning With Rounding (LWR) problem, where the rounding is performed by a leakage function and therefore does not have to be computed explicitly. In this paper, we first consolidate the intuition that LWPR cannot be secure in a serial implementation context without additional countermeasures (like shuffling), due to attacks exploiting worst-case leakages that can be mounted with practical data complexity. We then extend the understanding of LWPR in a parallel implementation setting. On the one hand, we generalize its robustness against cryptanalysis taking advantage of any (i.e., not only worst-case) leakage. A previous work claimed security in the specific context of a Hamming weight leakage function. We clarify necessary conditions to maintain this guarantee, based on the degree of the leakage function and the accuracy of its coefficients. On the other hand, we show that parallelism inherently provides good security against attacks exploiting worst-case leakages. We finally confirm the practical relevance of these findings by validating our assumptions experimentally for an exemplary implementation.
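A minimal sketch of the leakage model discussed above, assuming a noise-free Hamming-weight leakage function and a simple multiplicative re-keying k' = r*k mod q. The parameters and the re-keying map are illustrative, not the paper's instantiation.

```python
import secrets

q = 2**16  # illustrative limb modulus

def hamming_weight(x):
    """Number of set bits, the classic leakage model for a register."""
    return bin(x).count("1")

def lwpr_sample(key, q):
    """One noise-free LWPR-style sample: a public random r, plus the
    Hamming-weight leakage of each limb of the ephemeral key r*k mod q.
    The rounding is 'performed by the leakage function' in the sense
    that only hamming_weight(e), not e itself, is observed."""
    r = secrets.randbelow(q)
    ephemeral = [(r * k) % q for k in key]
    leakage = [hamming_weight(e) for e in ephemeral]
    return r, leakage

key = [secrets.randbelow(q) for _ in range(4)]
r, leak = lwpr_sample(key, q)
```

The adversary's problem is to recover `key` from many `(r, leak)` pairs; the abstract's point is that in a serial implementation the per-limb leakages make this feasible, while parallelism aggregates limbs and hardens the problem.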
Clément Hoffmann, Pierrick Méaux, Mélissa Rossi, François-Xavier Standaert
ePrint Report
Learning problems have become a foundational element for constructing quantum-resistant cryptographic schemes, finding broad application even beyond, such as in Fully Homomorphic Encryption. The increasing complexity of this field, marked by the rise of physical learning problems due to research into side-channel leakage and secure hardware implementations, underscores the urgent need for a more comprehensive analytical framework capable of encompassing these diverse variants.

In response, we introduce Learning With Error with Output Dependencies (LWE-OD), a novel learning problem defined by an error distribution that depends on the inner product value and therefore on the key. LWE-OD instances are remarkably versatile, generalizing both established theoretical problems like Learning With Errors (LWE) or Learning With Rounding (LWR), and emerging physical problems such as Learning With Physical Rounding (LWPR).

Our core contribution is establishing a reduction from LWE-OD to LWE. This is accomplished by leveraging an intermediate problem, denoted qLWE. Our reduction follows a two-step, simulator-based approach, yielding explicit conditions that guarantee LWE-OD is at least as computationally hard as LWE. While this theorem provides a valuable reduction, it also highlights a crucial distinction among reductions: those that allow explicit calculation of target distributions versus weaker ones with conditional results. To further demonstrate the utility of our framework, we offer new proofs for existing results, specifically the reduction from LWR to LWE and from Learning Parity with Noise with Output Dependencies (LPN-OD) to LPN. This new reduction opens the door for a potential reduction from LWPR to LWE.
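The sense in which LWR is an output-dependent-error problem can be made concrete: the deterministic rounding loss is an error fully determined by the inner product itself. A sketch with toy parameters (q, p, n below are illustrative, not from the paper):

```python
import secrets

q, p, n = 3329, 16, 8  # illustrative moduli and dimension

def lwr_sample(s):
    """One LWR sample: b is the rounded inner product, and the implicit
    'error' err depends on <a, s> itself -- exactly the output-dependent
    error distribution that LWE-OD abstracts."""
    a = [secrets.randbelow(q) for _ in range(n)]
    ip = sum(ai * si for ai, si in zip(a, s)) % q
    b = round(p * ip / q)          # deterministic rounding, b in {0..p}
    err = b * q / p - ip           # implicit, ip-dependent error
    return a, b, err

s = [secrets.randbelow(q) for _ in range(n)]
a, b, err = lwr_sample(s)
assert abs(err) <= q / (2 * p) + 1e-9   # rounding loss is bounded
```

In LWE the error is sampled independently of the inner product; here it is a function of it, which is why a reduction needs the simulator-based machinery described above.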
Friedrich Wiemer, Arthur Mutter, Jonathan Ndop, Julian Göppert, Axel Sikora, Thierry Walrant
ePrint Report
In the past, Secure Onboard Communication (SecOC) has been defined to serve as the foundational mechanism for securing in-vehicle networks. For over a decade, it has been used in hundreds of millions of automotive systems. Its application-layer design and AUTOSAR-based specification have enabled broad adoption across diverse platforms. However, this design also introduces challenges: software-centric dependencies complicate full hardware integration and can limit scalability in resource-constrained electronic control units, particularly as bandwidth demands continue to grow.

To address these constraints, CANsec has been proposed to meet the security objectives of CAN XL. As a Layer 2 security protocol, CANsec aims to overcome SecOC’s shortcomings and offer modern guarantees comparable to MACsec. However, the CANsec specification remains under active development even after several years of work, leaving a gap between conceptual goals and practical deployment.

This paper proposes a pragmatic and standards-aligned solution: it re-uses existing MACsec specifications and implementations as the security engine for CAN XL.

MACsec, standardized for over two decades and widely scrutinized in both academic and industrial contexts, offers robust and well-understood security functions. Our approach introduces a lightweight wrapper that maps CAN XL frames to virtual Ethernet frames, enabling MACsec to provide confidentiality, integrity, authenticity, and freshness. We formally define the wrapping process, frame formats, and protocol data units, while preserving MACsec’s security properties, adapting them to the constraints and requirements of automotive networks. This enables a practical and secure path forward for CAN XL deployment, leveraging mature cryptographic algorithms and protocols without compromising performance or assurance.
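A minimal sketch of such a wrapper is given below. The field layout, the 4-byte CAN ID encoding, the placeholder MAC addresses, and the EtherType 0x88B5 (an IEEE local-experimental value) are all assumptions for illustration; the actual frame formats are defined in the paper.

```python
import struct

def wrap_canxl_frame(can_id: int, payload: bytes,
                     dst=b"\x02\x00\x00\x00\x00\x01",
                     src=b"\x02\x00\x00\x00\x00\x02") -> bytes:
    """Map a CAN XL frame onto a virtual Ethernet frame so that an
    unmodified MACsec engine can protect it (the SecTAG and ICV would
    then be inserted by MACsec itself).

    Hypothetical layout: 14-byte Ethernet header (dst MAC, src MAC,
    EtherType), then a 4-byte big-endian CAN ID, then the payload."""
    header = dst + src + struct.pack("!H", 0x88B5)
    body = struct.pack("!I", can_id) + payload
    return header + body

frame = wrap_canxl_frame(0x123, b"\x11\x22\x33")
```

The point of the design is that everything after the Ethernet header is opaque to MACsec, so confidentiality, integrity, authenticity, and freshness come for free from the mature MACsec machinery.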

To support standardization and practical adoption, we have submitted this approach to the CAN in Automation (CiA) CANsec specification task force, CiA's IG 04 SIG 01 TF 03 CAN XL security, contributing to the ongoing effort to define an efficient, standardized and interoperable security solution for CAN XL.
Alan T. Sherman, Jeremy J. Romanik Romano, Edward Zieglar, Enis Golaszewski, Jonathan D. Fuchs, William E. Byrd
ePrint Report
We analyze security aspects of the SecureDNA system regarding its system design, engineering, and implementation. This system enables DNA synthesizers to screen order requests against a database of hazards. By applying novel cryptography involving distributed oblivious pseudorandom functions, the system aims to keep order requests and the database of hazards secret. Discerning the detailed operation of the system in part from source code (Version 1.0.8), our analysis examines key management, certificate infrastructure, authentication, and rate-limiting mechanisms. We also perform the first formal-methods analysis of the mutual authentication, basic request, and exemption-handling protocols. Without breaking the cryptography, our main finding is that SecureDNA's custom mutual authentication protocol SCEP achieves only one-way authentication: the hazards database and keyservers never learn with whom they communicate. This structural weakness violates the principle of defense in depth and enables an adversary to circumvent rate limits that protect the secrecy of the hazards database, if the synthesizer connects with a malicious or corrupted keyserver or hashed database. We point out an additional structural weakness that also violates the principle of defense in depth: inadequate cryptographic bindings prevent the system from detecting if responses, within a TLS channel, from the hazards database were modified. Consequently, if a synthesizer were to reconnect with the database over the same TLS session, an adversary could replay and swap responses from the database without breaking TLS. Although the SecureDNA implementation does not allow such reconnections, it would be stronger security engineering to avoid the underlying structural weakness. We identify these vulnerabilities and suggest and verify mitigations, including adding strong bindings. Software Version 1.1.0 fixes SCEP with our proposed SCEP+ protocol. 
Our work illustrates that a secure system needs more than sound mathematical cryptography; it also requires formal specifications, sound key management, proper binding of protocol message components, and careful attention to engineering and implementation details.
Miranda Christ, Noah Golowich, Sam Gunn, Ankur Moitra, Daniel Wichs
ePrint Report
Watermarks are an essential tool for identifying AI-generated content. Recently, Christ and Gunn (CRYPTO '24) introduced pseudorandom error-correcting codes (PRCs), which are equivalent to watermarks with strong robustness and quality guarantees. A PRC is a pseudorandom encryption scheme whose decryption algorithm tolerates a high rate of errors. Pseudorandomness ensures quality preservation of the watermark, and error tolerance of decryption translates to the watermark's ability to withstand modification of the content.

In the short time since the introduction of PRCs, several works (NeurIPS '24, RANDOM '25, STOC '25) have proposed new constructions. Curiously, all of these constructions are vulnerable to quasipolynomial-time distinguishing attacks. Furthermore, all lack robustness to edits over a constant-sized alphabet, which is necessary for a meaningfully robust LLM watermark. Lastly, they lack robustness to adversaries who know the watermarking detection key. Until now, it was not clear whether any of these properties was achievable individually, let alone together.

We construct pseudorandom codes that achieve all of the above: plausible subexponential pseudorandomness security, robustness to worst-case edits over a binary alphabet, and robustness against even computationally unbounded adversaries that have the detection key. Pseudorandomness rests on a new assumption that we formalize, the permuted codes conjecture, which states that a distribution of permuted noisy codewords is pseudorandom. We show that this conjecture is implied by the permuted puzzles conjecture used previously to construct doubly efficient private information retrieval. To give further evidence, we show that the conjecture holds against a broad class of simple distinguishers, including read-once branching programs.
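The basic object, a pseudorandom encryption whose decryption tolerates errors, can be sketched as a toy: a repetition code masked with a PRG pad. This toy is far weaker than the constructions above (it handles only substitution errors, not edits, and offers no robustness against adversaries holding the key), but it shows how pseudorandomness and error tolerance compose.

```python
import hashlib, secrets

def prg(key: bytes, nbits: int) -> list:
    """Expand key into nbits pseudorandom bits (SHA-256 in counter mode)."""
    out, ctr = [], 0
    while len(out) < nbits:
        block = hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        out += [(byte >> i) & 1 for byte in block for i in range(8)]
        ctr += 1
    return out[:nbits]

REP = 5  # repetition factor: tolerates fewer than REP/2 flips per symbol

def encode(key, msg_bits):
    code = [b for b in msg_bits for _ in range(REP)]   # repetition code
    pad = prg(key, len(code))
    return [c ^ p for c, p in zip(code, pad)]          # pseudorandom mask

def decode(key, word):
    pad = prg(key, len(word))
    code = [w ^ p for w, p in zip(word, pad)]
    return [int(sum(code[i * REP:(i + 1) * REP]) > REP // 2)  # majority
            for i in range(len(code) // REP)]

key = secrets.token_bytes(16)
msg = [1, 0, 1, 1]
cw = encode(key, msg)
cw[0] ^= 1; cw[7] ^= 1                 # two adversarial bit flips
assert decode(key, cw) == msg
```

Because XOR errors commute with the mask, the bit flips land directly on the repetition code and are corrected by majority vote; achieving this against worst-case *edits*, with public-detection robustness, is what the paper's permuted-codes construction provides.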
Magali Salom, Nicolas Sendrier, Valentin Vasseur
ePrint Report
QC-MDPC-based schemes feature secret sparse cyclic binary vectors. When those vectors are sparse enough, they can be reconstructed from their distance spectrum, that is, the set of all distances between the coordinates of the non-zero coefficients. In this work, we revisit the reconstruction algorithms and explore to what extent a secret sparse vector can be recovered from partial knowledge of its distance spectrum. In particular, we show how to efficiently use reliability (soft information) in the reconstruction process. Another aspect of this work is to investigate which kinds of side channels leak information about the distance spectrum and to understand the models that enable us to quantify the reliability of leaking data depending on the number of side-channel observations (or queries). For instance, we show that for BIKE level 1, assuming that a side channel leaks information about the syndrome weight, using soft information in the reconstruction process reduces the number of queries by a factor of 10. Our technique can also be applied to HQC, which also features a sparse secret vector, with similar figures, assuming there exists a side channel leaking relevant information (the error weight, in the case of HQC).
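The distance spectrum as defined above is straightforward to compute. A sketch, assuming the usual convention that circular distances are folded as min(d, r - d):

```python
def distance_spectrum(support, r):
    """Distance spectrum of a sparse cyclic binary vector of length r:
    the set of circular distances between pairs of non-zero positions."""
    spectrum = set()
    for i in support:
        for j in support:
            if i != j:
                d = (j - i) % r
                spectrum.add(min(d, r - d))   # fold: distance d == r - d
    return spectrum

# toy example: length-13 cyclic vector with support {0, 2, 7}
assert distance_spectrum({0, 2, 7}, 13) == {2, 5, 6}
```

Reconstruction inverts this map: given (part of) the spectrum, rebuild a support consistent with it, which is exactly where soft information on each candidate distance helps.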
Suleyman Kardas, Mehmet Sabir Kiraz, Dmitry Savonin, Yao Wang, Aliaksei Dziadziuk
ePrint Report
Zero-knowledge virtual machines (zkVMs) must correctly model all edge cases of low-level machine instructions, including division and remainder, while keeping algebraic constraints simple enough for efficient STARK proving. This work presents a focused but meaningful redesign of the DivRemChip used in the SP1 zkVM (as adapted to rWasm) to implement WASM’s 32-bit division and remainder instructions (i32.div{u,s}, i32.rem{u,s}) over the BabyBear field. Our chip remains local (single-row AIR) and is purpose-built so that all “small aggregate” expressions are strictly bounded by the BabyBear modulus, allowing us to use inverses securely without ever inverting zero. We replace heavier constant-equality logic and trap gadgets with: (a) a lightweight small-aggregate zero-test for magnitudes that is sound in BabyBear, (b) a simple, byte-level mechanism for detecting the special value INT_MIN using only degree-2 constraints, (c) a low-degree test for c = −1 based on checking |c| = 1 together with the sign bit, and (d) a constraint pattern ensuring that divide-by-zero and the overflow case i32.div_s(INT_MIN, -1) are not provable, matching WASM trap semantics. The resulting AIR has maximal degree 3, removes all “invert-zero” failure modes in BabyBear, and enforces the correct semantics for every non-trapping instruction. A malicious prover test framework based on the SP1 CPU bus shows that any incorrect witness causes constraint failure, while honest execution traces for extensive boundary tests and mixed programs produce valid proofs over BabyBear. Our design also improves efficiency: the trace width drops from 102 to 74 columns, all Mul/Add/Lt/MSB cross-chip calls are removed, and each row requires only a single CPU-bus interaction.
Prover benchmarks on unsigned, signed, and mixed workloads with up to 125,000 division/remainder operations show a consistent 4–6% reduction in single-threaded proving time compared to the original SP1 DivRemChip as adapted to rWasm, with a paired t-test across program sizes confirming that the improvement is statistically significant at the 95% confidence level.
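The WASM semantics the chip must enforce (truncated signed division, traps on divide-by-zero and on INT_MIN / -1) can be stated executably. This sketch models the instruction-level behavior only, not the AIR constraints that prove it:

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def i32_div_s(a: int, b: int) -> int:
    """WASM i32.div_s semantics: truncated (round-toward-zero) signed
    division; traps on divide-by-zero and on INT_MIN / -1, whose true
    quotient 2^31 does not fit in a signed 32-bit integer."""
    if b == 0:
        raise ZeroDivisionError("wasm trap: integer divide by zero")
    if a == INT_MIN and b == -1:
        raise OverflowError("wasm trap: integer overflow")
    q = abs(a) // abs(b)                      # truncated magnitude
    return -q if (a < 0) != (b < 0) else q    # reapply sign

assert i32_div_s(-7, 2) == -3   # truncation, not Python's floored -4
```

The two `raise` branches are exactly the cases the constraint pattern (d) above must make unprovable.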
Mohamed Malhou, Ludovic Perret, Kristin Lauter
ePrint Report
At NeurIPS 2024, Kera et al. introduced the use of transformers for computing Groebner bases, a central object in computer algebra with numerous practical applications. In this paper, we improve this approach by applying Hierarchical Attention Transformers (HATs) to solve systems of multivariate polynomial equations via Groebner bases computation. The HAT architecture incorporates a tree-structured inductive bias that enables the modeling of hierarchical relationships present in the data and thus achieves significant computational savings compared to conventional flat attention models. We generalize to arbitrary depths and include a detailed computational cost analysis. Combined with curriculum learning, our method solves instances that are much larger than those in Kera et al. (NeurIPS 2024).
Antoine Mesnard, Jean-Pierre Tillich, Valentin Vasseur
ePrint Report
Many important code-based cryptographic schemes such as the NIST post-quantum competition finalist BIKE and the soon-to-be-standardized HQC scheme rely on Quasi-Cyclic Moderate-Density Parity-Check codes (QC-MDPC). A very important issue here is to accurately predict the Decoding Failure Rate (DFR). This DFR is intimately connected to the syndrome weight distribution of the QC-MDPC codes used in these schemes. This problem is treated in HQC by modeling the syndrome bits by Bernoulli variables, which is known to be inaccurate. The rationale is that it gives a pessimistic estimate of the DFR. In BIKE the syndrome weight is modeled by the syndrome weight of a regular MDPC code, which is itself computed by a simplified model. The accuracy of this modeling is not well understood. NIST perceived that BIKE DFR estimation lacked maturity; this led to its dismissal in the competition. The purpose of this paper is to advance on this difficult issue of understanding the syndrome weight distribution of quasi-cyclic codes. Our contribution here is threefold. First, we provide a rigorous tool for computing the syndrome weight of a regular code through a generating function and a saddle point approximation. We use this approach to show that the Markov chain model used for estimating the syndrome weight in [ABP24] is remarkably accurate. Second, we also prove that the regular model is not accurate for very low syndrome weights and provide a complete model of the syndrome weight distribution of a QC-MDPC code which can at the same time be computed quickly and fits the experiments remarkably well. We use this to show that for BIKE the probability of the events where the regular model differs from the QC-MDPC syndrome distribution is too low to be of concern. We also show that the variance of the syndrome weight distribution of a QC-MDPC code can be computed efficiently and is a handy tool for estimating accurately the syndrome weight distribution in the moderate deviation regime.
We use it to give an accurate prediction of the DFR for a given key of HQC. This gives compelling evidence that the DFR of a typical secret key of HQC is significantly below $2^{- \lambda}$ where $\lambda$ is the security parameter and that weak keys for HQC are too rare to be of concern.
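The Bernoulli model mentioned above reduces the syndrome-weight distribution to a binomial. A sketch with illustrative parameters (r and p below are toy values, not HQC's):

```python
from math import comb

def bernoulli_syndrome_weight_pmf(r, p):
    """Syndrome-weight distribution if each of the r syndrome bits is an
    independent Bernoulli(p) variable -- the simple (and, per the
    abstract, pessimistic and inaccurate) model used for HQC."""
    return [comb(r, w) * p**w * (1 - p) ** (r - w) for w in range(r + 1)]

pmf = bernoulli_syndrome_weight_pmf(r=64, p=0.3)
assert abs(sum(pmf) - 1) < 1e-9          # it is a probability distribution
mean = sum(w * pw for w, pw in enumerate(pmf))
assert abs(mean - 64 * 0.3) < 1e-9       # binomial mean r*p
```

The paper's contribution is precisely a more faithful replacement for this independence assumption, capturing the correlations among syndrome bits of a quasi-cyclic code.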
Keitaro Hiwatashi, Reo Eriguchi
ePrint Report
Private Simultaneous Messages (PSM) is a minimal model for secure computation, where two parties, Alice and Bob, have private inputs $x,y$ and a shared random string. Each of them sends a single message to an external party, Charlie, who can compute $f(x,y)$ for a public function $f$ but learns nothing else. The problem of narrowing the gap between upper and lower bounds on the communication complexity of PSM has been widely studied, but the gap still remains exponential. In this work, we study the communication complexity of PSM from a different perspective and introduce a special class of PSM, referred to as \textit{ideal PSM}, in which each party's message length attains the minimum, that is, their messages are taken from the same domain as inputs. We initiate a systematic study of ideal PSM with a complete characterization, several positive results, and applications. First, we provide a characterization of the class of functions that admit ideal PSM, based on permutation groups acting on the input domain. This characterization allows us to derive asymptotic upper bounds on the total number of such functions and a complete list for small domains. We also present several infinite families of functions of practical interest that admit ideal PSM. Interestingly, by simply restricting the input domains of these ideal PSM schemes, we can recover most of the existing PSM schemes that achieve the best known communication complexity in various computation models. As applications, we show that these ideal PSM schemes yield novel communication-efficient PSM schemes for functions with sparse or dense truth-tables and those with low-rank truth-tables. Furthermore, we obtain a PSM scheme for general functions that improves the constant factor in the dominant term of the best known communication complexity. An additional advantage is that our scheme simplifies the existing construction by avoiding the hierarchical design of internally invoking PSM schemes for smaller functions.
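An ideal PSM, with messages drawn from the same domain as the inputs, exists for example for addition modulo m: the shared randomness masks each input, and Charlie learns only the sum. This is one illustrative function in the class, not the paper's general group-theoretic characterization.

```python
import secrets

m = 13  # input domain Z_m

def psm_add(x, y):
    """Ideal PSM for f(x, y) = x + y mod m. Both messages live in Z_m,
    the same domain as the inputs, so message length is minimal."""
    r = secrets.randbelow(m)       # shared random string (hidden from Charlie)
    msg_alice = (x + r) % m        # Alice's single message
    msg_bob = (y - r) % m          # Bob's single message
    # Charlie sees only the two messages and outputs their sum:
    return (msg_alice + msg_bob) % m

assert psm_add(5, 11) == (5 + 11) % m
```

Privacy holds because, conditioned on the output x + y, the pair of messages is uniform over all pairs with that sum, so Charlie's view can be simulated from f(x, y) alone.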
Harish Karthikeyan, Yue Guo, Leo de Castro, Antigoni Polychroniadou, Leo Ardon, Udari Madhushani Sehwag, Sumitra Ganesh, Manuela Veloso
ePrint Report
As AI agents increasingly operate in real-world, multi-agent environments, ensuring reliable and context-aware privacy in agent communication is critical, especially to comply with evolving regulatory requirements. Traditional access controls are insufficient, as privacy risks often arise after access is granted; agents may use information in ways that compromise privacy, such as messaging humans, sharing context with other agents, making tool calls, persisting data, or generating derived private information. Existing approaches often treat privacy as a binary constraint (data is either shareable or not), overlooking nuanced, role-specific, and computation-dependent privacy needs essential for regulatory compliance.

Agents, including those based on large language models, are inherently probabilistic and heuristic. There is no formal guarantee of how an agent will behave for any query, making them ill-suited for operations critical to security. To address this, we introduce AgentCrypt, a four-tiered framework for fine-grained, encrypted agent communication that adds a protection layer atop any AI agent platform. AgentCrypt spans unrestricted data exchange (Level 1) to fully encrypted computation using techniques such as homomorphic encryption (Level 4). Crucially, it guarantees the privacy of tagged data is always maintained, prioritizing privacy above correctness.

AgentCrypt ensures privacy across diverse interactions and enables computation on otherwise inaccessible data, overcoming barriers such as data silos. We implemented and tested it with LangGraph and Google ADK, demonstrating versatility across platforms. We also introduce a benchmark dataset simulating privacy-critical tasks at all privacy levels, enabling systematic evaluation and fostering the development of regulatable machine learning systems for secure agent communication and computation.

11 December 2025

University of Vienna, TU Wien, Institute of Science and Technology Austria (ISTA)
Job Posting

FARCry (Foundations & Applications of Resource-Restricted Cryptography) is a joint research project by the University of Vienna, Institute of Science and Technology Austria (ISTA), and TU Wien, funded by the Vienna Science and Technology Fund (WWTF) under grant ICT25-081.

We invite applications for PhD positions in cryptography, privacy, and provable security. FARCry investigates cryptographic primitives and protocols whose security and privacy rest on bounded computational resources (work, time, space)—including verifiable delay functions (VDFs), proofs of space/work, memory‑hard functions, and privacy‑enhancing applications such as deniable communication and Sybil‑resistance.

Candidates with a strong background in theoretical computer science and/or mathematics are encouraged to apply. For more information, please contact the respective PI directly (with "FARCry [your name]" in the Subject).

The positions start from October 2026. For ISTA, applications go through the graduate school, where the deadline is January 8th.

Closing date for applications:

Contact:

  • Ass.-Prof. Karen Azari, University of Vienna — karen.azari@univie.ac.at
  • Prof. Krzysztof Pietrzak, ISTA — krzpie@gmail.com
  • Prof. Dominique Schröder, TU Wien — dominique.schroeder@tuwien.ac.at

More information: https://krzpie.github.io/FARCry/

University of Luxembourg
Job Posting
The University of Luxembourg is an international research university with a distinctly multilingual and interdisciplinary character.

Your role

The successful candidate will join the APSIA Group (Applied Security and Information Assurance), led by Dr. Peter B. Roenne. For further information, you may refer to https://www.uni.lu/snt-en/research-groups/apsia.

The candidate will pursue a doctorate in computer science with a focus on the migration to Post-Quantum Cryptography (PQC) while collaborating with the Ministry for Digitalization in Luxembourg on a joint project. The project aims to provide guidance on the transformation of applications relying on classical cryptography, prone to attacks from quantum computers, to post-quantum cryptography.

Your profile

• Master’s degree in Computer Science, Computer Engineering, Software Engineering, Data Science, Information Systems (Engineering), Mathematics, or related fields with robust mathematical expertise

• Strong programming skills in at least one major programming language

• Good presentation and teamworking skills

• A collaborative team player with a desire to make a personal impact within our interdisciplinary research group 

• The commitment to participate in the design and implementation of high-quality solutions that solve significant problems 

• Self-initiative, creativity, curiosity, flexibility and enthusiasm to work 

We offer

• Multilingual and international character. Modern institution with a personal atmosphere. Staff coming from 90 countries. Member of the “University of the Greater Region” (UniGR)

• A modern and dynamic university. High-quality equipment. Close ties to the business world and to the Luxembourg labour market. A unique urban site with excellent infrastructure

• A partner for society and industry. Cooperation with European institutions, innovative companies, the Financial Centre and with numerous non-academic partners such as ministries, local governments, associations, NGOs …

More information & how to apply: http://emea3.mrted.ly/40k7p

Closing date for applications:

Contact: For further information, please contact Peter B. Roenne (peter.roenne@uni.lu).

Microsoft Research, Redmond
Job Posting

Overview

Research Internships at Microsoft provide a dynamic environment for research careers with a network of world-class research labs led by globally-recognized scientists and engineers, who pursue innovation in a range of scientific and technical disciplines to help solve complex challenges in diverse fields, including computing, healthcare, economics, and the environment. The researchers and engineers in the Cryptography team pursue challenging research that has an impact at Microsoft and the world at large. Most recently we have focused on cryptographic identity, formally verified cryptography, encrypted communications, verifiable elections, zero-knowledge proofs, and high-performance hardware and software implementations of cryptography. We are spinning up new work related to Artificial Intelligence (AI) and the changes and challenges it brings related to cryptography.

Responsibilities

Research Interns put inquiry and theory into practice. Alongside fellow doctoral candidates and some of the world’s best researchers, Research Interns learn, collaborate, and network for life. Research Interns not only advance their own careers, but they also contribute to exciting research and development strides. During the 12-week internship, Research Interns are paired with mentors and expected to collaborate with other Research Interns and researchers, present findings, and contribute to the vibrant life of the community. Research internships are available in all areas of research, and are offered year-round, though they typically begin in the summer.

We are especially interested in applicants with expertise in one or more of the following:

  • Efficient software and hardware cryptographic systems.
  • Fully homomorphic encryption (FHE).
  • Efficient zero-knowledge proofs.
  • Encrypted and authenticated data structures.
  • End-to-end encrypted communications.
  • Formalization and formal verification of cryptography.
  • Verifiable election technologies.
See the full job posting at the provided URL

Closing date for applications:

Contact: Greg Zaverucha (apply at the Microsoft careers website)

More information: https://apply.careers.microsoft.com/careers/job/1970393556640165

Pedro Branco, Abhishek Jain, Akshayaram Srinivasan
ePrint Report
The last decade has seen remarkable success in designing and uncovering new applications of indistinguishability obfuscation (i$\mathcal{O}$). The main pressing question in this area is whether post-quantum i$\mathcal{O}$ exists. All current lattice-based candidates rely on new, non-standard assumptions, many of which are known to be broken. To make systematic progress on this front, we investigate the following question: can general-purpose i$\mathcal{O}$ be reduced, assuming only learning with errors (LWE), to obfuscating a smaller class of functions? The specific class of functions we consider is that of {\em pseudorandom functions} (PRFs), which constitute a natural functionality of independent interest. We show the following results:

  • We construct exponentially-efficient i$\mathcal{O}$ (xi$\mathcal{O}$) for general circuits based on LWE in the pseudorandom oracle model -- a variant of the random oracle model (Jain et al., CRYPTO'23). Our construction requires the pseudorandom oracle model heuristic to hold for a specific pseudorandom function, and we prove its security against classical adversaries.
  • We construct (post-quantum) i$\mathcal{O}$ for general circuits in the standard model based on (post-quantum) sub-exponentially secure LWE and (post-quantum) sub-exponentially secure {\em average-case} i$\mathcal{O}$ -- a natural notion of i$\mathcal{O}$ for pseudorandom functions that we define.

To obtain these results, we generalize the ``encrypt-evaluate-decrypt'' paradigm used in prior works by replacing the use of fully homomorphic encryption with succinct secure two-party computation where parties obtain additive output shares (Boyle et al., EUROCRYPT'25 and Abram et al., STOC'25).
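The additive-output-share format mentioned above can be illustrated with a toy example (not the paper's construction; all names and parameters below are illustrative): for a public linear map, two parties holding additive shares of an input can each locally compute a share of the output, and the shares sum to the true result.

```python
# Toy sketch of additive output shares for a public linear map f(x) = a*x + c mod q.
# Illustrative only; succinct 2PC for general circuits is far more involved.

import random

q = 2**16

def share(x: int, q: int):
    """Split x into two additive shares modulo q."""
    r = random.randrange(q)
    return r, (x - r) % q

a, c = 7, 101  # public linear function f(x) = a*x + c mod q

def eval_share(x_share: int, is_first: bool) -> int:
    # Each party applies the linear map to its own share locally; the constant
    # term is added by only one party so the shares still sum to f(x).
    return (a * x_share + (c if is_first else 0)) % q

x = 1234
x0, x1 = share(x, q)
y0, y1 = eval_share(x0, True), eval_share(x1, False)
assert (y0 + y1) % q == (a * x + c) % q  # shares reconstruct f(x)
```

The point of the format is that neither party learns f(x) on its own; only the recombined sum reveals the output.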
Loris Bergerat, Jean-Baptiste Orfila, Adeline Roux-Langlois, Samuel Tap
ePrint Report
Fully Homomorphic Encryption (FHE) enables secure computation over encrypted data, offering a breakthrough in privacy-preserving computing. Despite its promise, the practical deployment of FHE has been hindered by significant computational overhead, especially in general-purpose bootstrapping schemes. In this work, we build upon the recent advancements of [LY23] to introduce a variant of functional/programmable bootstrapping. By carefully reordering the steps of the blind rotation, we reduce the overall number of external products without compromising correctness. To further enhance efficiency, we propose a novel modulus-switching technique that increases the likelihood of satisfying the pruning conditions, further reducing computational overhead. Extensive benchmarks demonstrate that our method achieves a speedup ranging from 1.75x to 8.28x over traditional bootstrapping and from 1.26x to 2.14x over the [LY23] bootstrapping techniques. Moreover, we show that this technique is better suited to the IND-CPA-D security model, as it reduces the performance penalty that model implies.
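As background on one ingredient: standard LWE modulus switching rescales a ciphertext from a large modulus q down to a smaller q' while preserving the encoded message. The sketch below is a generic toy illustration with made-up parameters, not the paper's technique.

```python
# Toy LWE modulus switching: map each coefficient x mod q to round(x * q'/q) mod q'.
# Parameters (q, q', n, noise bound) are illustrative.

import random

q, q_prime, n = 1 << 32, 1 << 16, 8

def mod_switch(ct, q, q_prime):
    """Rescale a list of coefficients from modulus q to modulus q'."""
    return [round(x * q_prime / q) % q_prime for x in ct]

# A toy LWE encryption of a bit m under binary secret s, message in the top bit.
s = [random.randrange(2) for _ in range(n)]
m = 1
a = [random.randrange(q) for _ in range(n)]
e = random.randrange(-8, 9)  # small noise
b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q

# Switch the whole ciphertext (a, b) down to q'.
a2 = mod_switch(a, q, q_prime)
b2 = mod_switch([b], q, q_prime)[0]

# Decrypt at the smaller modulus: the message survives the rescaling,
# at the cost of a small additional rounding noise (at most ~(n+1)/2 here).
phase = (b2 - sum(ai * si for ai, si in zip(a2, s))) % q_prime
decoded = round(phase / (q_prime // 2)) % 2
assert decoded == m
```

The rounding error added per coefficient is at most 1/2, so for small n the message bit in the top of the modulus is unaffected.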
Mathieu Degré, Patrick Derbez, André Schrottenloher
ePrint Report
The meet-in-the-middle (MITM) attack is a powerful cryptanalytic technique leveraging time-memory tradeoffs to break cryptographic primitives. Initially introduced for block cipher cryptanalysis, it has since been extended to hash functions, particularly preimage attacks on AES-based compression functions.

Over the years, various enhancements such as superposition MITM (Bao et al., CRYPTO 2022) and bidirectional propagation have significantly improved MITM attacks, but at the cost of increasingly complex automated search models. In this work, we propose a unified mixed integer linear programming (MILP) model designed to improve the search for optimal preimage MITM attacks against AES-based compression functions.

Our model generalizes previous approaches by simplifying both the modeling and the corresponding attack algorithm. In particular, it ensures that all identified attacks are valid. The results demonstrate that our framework not only recovers known attacks on AES and Whirlpool but also discovers new attacks with lower memory complexity, as well as new quantum attacks.
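As background, the basic time-memory tradeoff behind any MITM attack can be sketched on a toy double-encryption example: tabulate the forward half, then meet it from the backward half, for roughly 2^k work per direction instead of 2^(2k) overall. The "cipher" below is an illustrative stand-in, not AES.

```python
# Toy meet-in-the-middle key recovery against double encryption E_k2(E_k1(p)).
# The cipher and the 8-bit keyspace are illustrative stand-ins.

def toy_encrypt(key: int, block: int) -> int:
    # Hypothetical invertible 16-bit toy cipher (keyed XOR + odd multiplier).
    return ((block ^ key) * 0x9E37 + key) & 0xFFFF

def toy_decrypt(key: int, block: int) -> int:
    inv = pow(0x9E37, -1, 0x10000)  # 0x9E37 is odd, hence invertible mod 2^16
    return (((block - key) * inv) & 0xFFFF) ^ key

def mitm_double_encryption(plain: int, cipher: int, key_bits: int = 8):
    """Recover candidate (k1, k2) pairs with ~2^key_bits work per direction."""
    forward = {}
    for k1 in range(1 << key_bits):            # forward table: E_k1(p) -> [k1]
        forward.setdefault(toy_encrypt(k1, plain), []).append(k1)
    matches = []
    for k2 in range(1 << key_bits):            # backward pass: D_k2(c), look up
        mid = toy_decrypt(k2, cipher)
        for k1 in forward.get(mid, []):
            matches.append((k1, k2))
    return matches

k1, k2 = 0x3A, 0xC5
p = 0x1234
c = toy_encrypt(k2, toy_encrypt(k1, p))
assert (k1, k2) in mitm_double_encryption(p, c)
```

Additional plaintext/ciphertext pairs would filter out the false-positive candidates the single meet can leave behind; the advanced variants the abstract discusses (superposition states, bidirectional propagation) refine which cells are guessed in each direction.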