01 February 2021
Kelesidis Evgnosia-Alexandra
Kenji Yasunaga
Amin Rezaei, Hai Zhou
Sara Ricci, Lukas Malina, Petr Jedlicka, David Smekal, Jan Hajny, Petr Cibik, Patrik Dobias
Seny Kamara, Tarik Moataz, Andrew Park, Lucy Qin
In this work, we translate the high-level vision of the proposed legislation into technical requirements and design a cryptographic protocol that meets them. Roughly speaking, the protocol can be viewed as a decentralized system of locally-managed end-to-end encrypted databases. Our design relies on various cryptographic building blocks including structured encryption, secure multi-party computation and secret sharing. We propose a formal security definition and prove that our design meets it. We implemented our protocol and evaluated its performance empirically at the scale it would have to run if it were deployed in the United States. Our results show that a decentralized and end-to-end encrypted national gun registry is not only possible in theory but feasible in practice.
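Of the building blocks listed above, secret sharing is the simplest to illustrate. Below is a minimal Shamir-style sketch in Python (an illustrative toy over an arbitrarily chosen prime field, not the construction used in the paper):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all share arithmetic is mod P

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the polynomial evaluated at x = i (x = 0 is the secret).
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

Any subset of fewer than k shares reveals nothing about the secret, which is what makes the primitive suitable for locally managed, decentralized storage of sensitive records.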
30 January 2021
Abu Dhabi, United Arab Emirates, 28 June - 1 July 2021
Submission deadline: 18 March 2021
Notification: 29 April 2021
University of Twente, The Netherlands
The Services and Cybersecurity (SCS) group at the University of Twente invites applications for a 4-year PhD position on the topic of 'cryptographic protocols for privacy-preserving machine learning'.
We are looking for candidates with a strong background in (applied) cryptography.
More information:
https://www.utwente.nl/en/organisation/careers/!/2021-218/phd-position-on-cryptographic-protocols-for-privacy-preserving-machine-learning
Closing date for applications: 11 February 2021, 23:59 CET
Contact: Prof. Dr. Andreas Peter (a.peter@utwente.nl)
29 January 2021
We welcome nominations for the 2021 award (for papers published in 2006) until 20 February 2021. To submit your nomination, please send an email to testoftime@iacr.org.
More information about the IACR Test-of-Time awards can be found at iacr.org/testoftime/.
The 2021 Selection Committee:
- Ueli Maurer (chair)
- Nigel Smart
- Francois-Xavier Standaert (Eurocrypt 2021 program co-chair)
- Chris Peikert (Crypto 2021 program co-chair)
- Mehdi Tibouchi (Asiacrypt 2021 program co-chair)
28 January 2021
Aram Jivanyan, Jesse Lancaster, Arash Afshar, Parnian Alimi
Majid Salimi
Shivam Bhasin, Jan-Pieter D'Anvers, Daniel Heinz, Thomas Pöppelmann, Michiel Van Beirendonck
Elena Andreeva, Amit Singh Bhati, Damian Vizar
Release of unverified plaintext (RUP) security is a particularly relevant security target for lightweight (LW) implementations of AE schemes on memory-constrained devices or devices with stringent real-time requirements. Surprisingly, very few NIST lightweight AEAD candidates come with any provable guarantees against RUP. In this work, we show that the SAEF mode of operation of the ForkAE family comes with integrity guarantees in the RUP setting. The RUP integrity (INT-RUP) property was defined by Andreeva et al. at Asiacrypt 2014. Our INT-RUP proof is conducted using the coefficient-H technique, and it shows that, without any modifications, SAEF is INT-RUP secure up to the birthday bound, i.e., up to $2^{n/2}$ processed data blocks, where $n$ is the block size of the forkcipher. The implication of our work is that SAEF is indeed RUP secure, in the sense that the release of unverified plaintexts will not impact its ciphertext integrity.
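To put the birthday bound in perspective, a quick back-of-the-envelope calculation (assuming, purely for illustration, a 128-bit forkcipher block; the actual limit depends on the instantiation):

```python
n = 128                   # illustrative block size in bits (assumption)
blocks = 2 ** (n // 2)    # birthday bound: 2^(n/2) processed data blocks
data = blocks * (n // 8)  # total bytes processed before the bound is hit
print(f"{blocks} blocks ~ {data // 2**60} EiB of data")
```

Even at the birthday bound, the resulting data limit is far beyond what a memory-constrained lightweight device would ever process under a single key.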
27 January 2021
Riverside Research, Open Innovation Center, Beavercreek, OH
Closing date for applications:
Contact: Eileen Norton, Sr. Recruiter, Riverside Research, enorton@riversideresearch.org Dr. Michael Clark, Associate Director, Trusted and Resilient Systems, Riverside Research Open Innovation Center, IACR Member
More information: https://boards.greenhouse.io/riversideresearch/jobs/4347155003
Zcash Foundation
We’re looking for someone who is as excited as we are about building private financial infrastructure for the public good, and we take that task very seriously.
As a cryptography engineer within the core Zcash Foundation team, you will be responsible for building cryptographic protocols as well as distributed systems. The ideal candidate embodies the Foundation's values while fully aligning with its mission and goals.
Engineers at the Zcash Foundation are responsible for implementing the core Zcash protocol, maintaining deployed software, fixing bugs, and identifying improvements to the protocol for the future. Other duties include writing about our work and interfacing with external stakeholders such as those who use our software and interoperable implementations of the Zcash protocol. The position reports to the Zcash Foundation’s engineering manager.
Zcash Foundation Core Engineering Projects: Currently the engineering team is working on Zebra, an independent implementation of the Zcash protocol written in Rust, and soon we will dedicate resources to building out Zcash wallet functionality.
Closing date for applications:
Contact: Submit application here: https://docs.google.com/forms/d/e/1FAIpQLSelpDkmqjgVgiTfVFukB9TbIoIExWxVDHn0VvnSboO4nJIN1A/viewform
More information: https://www.zfnd.org/blog/open-position-cryptography-engineer/
Cryptanalysis Taskforce @ Nanyang Technological University, Singapore
- tool aided cryptanalysis, such as MILP, CP, STP, and SAT
- machine learning aided cryptanalysis and designs
- privacy-preserving friendly symmetric-key designs
- quantum cryptanalysis
- theory and proof
- cryptanalysis against SHA-2, SHA-3, and AES
Closing date for applications:
Contact: Asst Prof. Jian Guo, guojian@ntu.edu.sg
More information: http://team.crypto.sg
Qualcomm, Sophia Antipolis (France)
Closing date for applications:
Contact: avial@qti.qualcomm.com
More information: https://qualcomm.wd5.myworkdayjobs.com/External/job/Sophia-Antipolis/Crypto-Expert---Sophia-Antipolis--France_3004178
Madalina Chirita, Alexandru-Mihai Stroie, Andrei-Daniel Safta, Emil Simion
Daniel Heinz, Thomas Pöppelmann
Sourav Das, Vinith Krishnan, Irene Miriam Isaac, Ling Ren
Melissa Chase, Esha Ghosh, Saeed Mahloujifar
Previous work on poisoning attacks focused on decreasing the accuracy of models, either on the whole population or on specific sub-populations or instances. Here, for the first time, we study poisoning attacks where the goal of the adversary is to increase the information leakage of the model. Our findings suggest that poisoning attacks can boost information leakage significantly and should therefore be considered a stronger threat model in sensitive applications where some of the data sources may be malicious.
We first describe our property inference poisoning attack, which allows the adversary to learn the prevalence in the training data of any property it chooses: it selects the property to attack, submits input data according to a poisoned distribution, and finally uses black-box (label-only) queries on the trained model to determine the frequency of the chosen property. We theoretically prove that our attack can always succeed as long as the learning algorithm used has good generalization properties.
We then verify the effectiveness of our attack by evaluating it experimentally on two datasets: a Census dataset and the Enron email dataset. In the first case, we show that classifiers that recognize whether an individual has high income (Census data) also leak information about the race and gender ratios of the underlying dataset. In the second case, we show that classifiers trained to detect spam emails (Enron data) can also reveal the fraction of emails that show negative sentiment (according to a sentiment analysis algorithm); note that the sentiment is not a feature in the training dataset, but rather a feature that the adversary chooses and that can be derived from the existing features (in this case, the words). Finally, we add to each dataset an additional feature chosen at random, independently of the other features, and show that the classifiers can also be made to leak statistics about this feature; this demonstrates that the attack can target features completely uncorrelated with the original training task. We were able to achieve above $90\%$ attack accuracy with $9-10\%$ poisoning in all of these experiments.
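The intuition behind poisoning-for-leakage can be conveyed with a stylized toy (invented here for illustration; the model, data, and poisoning strategy are not the paper's construction): a 1-D threshold "classifier" whose learned decision boundary depends on the hidden prevalence of a property, so that label-only queries let the adversary recover that prevalence.

```python
import random

random.seed(0)

def train(data):
    """Toy 1-D 'classifier': threshold halfway between the two class means."""
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    t = (m0 + m1) / 2
    return lambda x: int(x > t)

p = 0.3  # secret: fraction of honest training points having the property
# Honest points (class 0): feature is 1.0 when the property holds, else 0.0.
clean = [(1.0 if random.random() < p else 0.0, 0) for _ in range(50000)]
# Poisoned points (class 1): pinned at 2.0, so the learned threshold
# becomes (p + 2) / 2, i.e. a function of the secret prevalence p.
poison = [(2.0, 1) for _ in range(500)]
model = train(clean + poison)

# Label-only queries: binary-search the decision threshold, then invert it.
lo, hi = 0.0, 2.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if model(mid) == 0 else (lo, mid)
p_hat = 2 * lo - 2  # recovered estimate of the hidden prevalence
```

In this toy, `p_hat` lands within sampling error of the true prevalence `p = 0.3`, despite the adversary seeing only output labels; the actual attack achieves an analogous effect against real learning algorithms.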