06 August 2023
Technology Innovation Institute (TII)
Technology Innovation Institute (TII) is a publicly funded research institute, based in Abu Dhabi, United Arab Emirates. It is home to a diverse community of leading scientists, engineers, mathematicians, and researchers from across the globe, transforming problems and roadblocks into pioneering research and technology prototypes that help move society ahead.
Cryptography Research Center
In our connected digital world, secure and reliable cryptography is the foundation of digital information security and data integrity. We address the world’s most pressing cryptographic questions. Our work covers post-quantum cryptography, lightweight cryptography, cloud encryption schemes, secure protocols, quantum cryptographic technologies and cryptanalysis.
Job Description:
We are seeking a skilled and motivated individual to join our team for a hardware engineering internship focused on hardware acceleration. The ideal candidate will have experience working with fully homomorphic encryption and a strong background in FPGA design for acceleration.
Closing date for applications:
Contact:
Dr. Kashif Nawaz - Director
Kashif.nawaz@tii.ae
04 August 2023
Aikata Aikata, Ahmet Can Mert, Sunmin Kwon, Maxim Deryabin, Sujoy Sinha Roy
Experimental results demonstrate that the REED 2.5D integrated circuit occupies 177 mm$^2$ of chip area and consumes 82.5 W of average power in 7nm technology, achieving a speedup of up to 5,982$\times$ over a CPU (24-core 2$\times$Intel X5690), with 2$\times$ better energy efficiency and 50\% lower development cost than the state-of-the-art ASIC accelerator. To evaluate its practical impact, we are the first to benchmark encrypted deep neural network training. Overall, this work enhances the practicality and deployability of fully homomorphic encryption in real-world scenarios.
Xiaohan Yue, Xue Bi, Haibo Yang, Shi Bai, Yuan He
Joohee Lee, Minju Lee, Jaehui Park
Ivan Damgård, Divya Ravi, Luisa Siniscalchi, Sophia Yakoubov
We determine which notions of secure two-round computation are achievable when the first round is $(t_d, t_m)$-asynchronous and the second round is over broadcast. Similarly, we determine which notions of secure two-round computation are achievable when the first round is over broadcast and the second round is (fully) asynchronous. We consider the case where a PKI is available, the case where only a CRS is available but private communication in the first round is possible, and the case where only a CRS is available and no private communication is possible before the parties have had a chance to exchange public keys.
Kittiphop Phalakarn, Athasit Surarerks
Nan Wang, Sid Chi-Kin Chau, Dongxi Liu
Bolin Yang, Prasanna Ravi, Fan Zhang, Ao Shen, Shivam Bhasin
Aydin Abadi, Dan Ristea, Steven J. Murdoch
Francesco Berti, Sebastian Faust, Maximilian Orlt
In this work, we follow the approach of Dziembowski et al. and significantly improve its methodology. Concretely, we refine the notion of a leakage diagram via so-called dependency graphs, and show how to apply this technique to arbitrarily complex circuits via composition results and approximation techniques. To illustrate the power of our new techniques, as a case study we design provably secure parallel gadgets for the random probing model and adapt the ISW multiplication so that all gadgets can be parallelized. Finally, we evaluate concrete security levels and show how our new methodology can further improve the concrete security level of masking schemes. This results in a compiler provably secure up to a noise level of $O(1)$ for affine circuits and $O(1/\sqrt{n})$ in general.
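For readers unfamiliar with the gadget being parallelized, the following is a minimal Python sketch of the standard (sequential) ISW multiplication over GF(2) shares; the helper names and the sanity check are illustrative and not taken from the paper.

```python
import secrets
from functools import reduce

def share(bit, n):
    """Additive (XOR) n-sharing of a single bit."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    shares.append(reduce(lambda x, y: x ^ y, shares, bit))
    return shares

def isw_multiply(a_shares, b_shares):
    """Textbook ISW multiplication gadget over GF(2).

    Takes XOR-sharings of bits a and b and returns an XOR-sharing of a AND b,
    consuming fresh randomness for every pair of shares. This is the sequential
    baseline gadget, not the parallelized variant designed in the paper.
    """
    n = len(a_shares)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)
            # The bracketing below matters for the probing-security argument.
            r[j][i] = (r[i][j] ^ (a_shares[i] & b_shares[j])) ^ (a_shares[j] & b_shares[i])
    c_shares = []
    for i in range(n):
        c = a_shares[i] & b_shares[i]
        for j in range(n):
            if j != i:
                c ^= r[i][j]
        c_shares.append(c)
    return c_shares

# Sanity check: recombining the output shares yields a AND b.
a, b, n = 1, 1, 4
c = isw_multiply(share(a, n), share(b, n))
assert reduce(lambda x, y: x ^ y, c) == (a & b)
```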
02 August 2023
Syh-Yuan Tan, Ioannis Sfyrakis, Thomas Gross
Minghui Xu, Yihao Guo, Chunchi Liu, Qin Hu, Dongxiao Yu, Zehui Xiong, Dusit Niyato, Xiuzhen Cheng
Huimin Li, Guilherme Perin
In this work, we carry out systematic experiments to investigate the benefits of data augmentation techniques against masked AES implementations that are additionally protected with hiding countermeasures. Our results show that, for each countermeasure and dataset, a given neural network architecture requires a particular data augmentation configuration to achieve significantly improved attack performance. They also make clear that data augmentation should be a standard step when targeting datasets with hiding countermeasures in deep learning-based side-channel attacks.
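As a rough illustration of what such a data augmentation configuration can look like (the concrete transforms and parameters in the paper are dataset- and architecture-specific), here is a hedged NumPy sketch with random shifts and additive Gaussian noise:

```python
import numpy as np

def augment_trace(trace, max_shift=10, noise_std=0.05, rng=None):
    """Illustrative augmentation of a single power trace (assumed parameters).

    A random circular shift mimics desynchronization/jitter-style hiding, and
    additive Gaussian noise mimics noise-based hiding. The point of the paper
    is that the right configuration of such transforms depends on the
    countermeasure, the dataset, and the network architecture.
    """
    rng = rng or np.random.default_rng()
    shift = int(rng.integers(-max_shift, max_shift + 1))
    shifted = np.roll(trace, shift)
    return (shifted + rng.normal(0.0, noise_std, size=trace.shape)).astype(np.float32)

def augment_dataset(traces, labels, copies=2, **kwargs):
    """Return the original traces plus `copies` augmented versions of each."""
    augmented = [traces] + [
        np.stack([augment_trace(t, **kwargs) for t in traces]) for _ in range(copies)
    ]
    return np.concatenate(augmented), np.concatenate([labels] * (copies + 1))

# Example: a toy batch of 8 traces with 1,000 samples each.
traces = np.random.default_rng(0).normal(size=(8, 1000)).astype(np.float32)
labels = np.arange(8)
big_traces, big_labels = augment_dataset(traces, labels, copies=2)
assert big_traces.shape == (24, 1000) and big_labels.shape == (24,)
```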
Leonid Azriel, Avi Mendelson
Jonathan Bootle, Kaoutar Elkhiyaoui, Julia Hesse, Yacov Manevich
In this work, we construct the first linkable ring signature with both logarithmic signature size and logarithmic verification time, without requiring any trusted mechanism. Our scheme, which relies on discrete-log-type assumptions and bilinear maps, improves upon a recent concise ring signature called DualRing by integrating improved preprocessing arguments to reduce the verification time from linear to logarithmic in the size of the ring. Our ring signature allows signatures to be linked based on the message signed, with granularity ranging from linking any two signatures by the same signer to linking only signatures on the same message.
We provide benchmarks for our scheme and prove its security under standard assumptions. The proposed linkable ring signature is particularly relevant to use cases that require privacy-preserving enforcement of threshold policies in a fully decentralized context, and to e-voting.
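The linking behaviour can be pictured with a generic "linkability tag": two signatures link exactly when their tags match, and the choice of linking scope (a fixed string versus the signed message) sets the granularity. The toy Python below uses a plain modular-arithmetic group and is not the paper's bilinear-map construction; all names and parameters are illustrative.

```python
import hashlib

# Toy group: integers modulo a Mersenne prime. Illustration only -- the scheme
# in the paper works over pairing-friendly groups under DL-type assumptions.
P = 2**127 - 1

def hash_to_group(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def linkability_tag(sk: int, scope: bytes) -> int:
    """Signer-computed tag: same secret key + same scope => same tag."""
    return pow(hash_to_group(scope), sk, P)

def linked(tag1: int, tag2: int) -> bool:
    return tag1 == tag2

# Scope choices control the linking granularity described above:
#   scope = b"any"    -> any two signatures by the same signer link
#   scope = message   -> only signatures on the same message link
sk = 123456789
assert linked(linkability_tag(sk, b"vote:proposal-42"),
              linkability_tag(sk, b"vote:proposal-42"))
assert not linked(linkability_tag(sk, b"vote:proposal-42"),
                  linkability_tag(sk, b"vote:proposal-43"))
```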
Sebastian Faller, Astrid Ottenhues, Johannes Ernst
Jens Groth, Victor Shoup
For example, we estimate that with a signing committee of 49 parties, at most 16 of which are corrupt, we can generate 50,000 Schnorr signatures per second (assuming each party can dedicate one standard CPU core and 500 Mb/s of network bandwidth to signing). Importantly, this estimate includes both the cost of an offline precomputation phase (which just churns out message-independent "presignatures") and an online signature generation phase. Also, the online signing phase can generate a signature with very little network latency (just one to three rounds, depending on how throughput and latency are balanced).
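For intuition on the offline/online split, here is a single-party Schnorr sketch in Python in which the nonce commitment is precomputed as a "presignature"; in the actual protocol both the key and the nonce are secret-shared among the committee, and the toy group parameters below are placeholders rather than anything used in the paper.

```python
import hashlib
import secrets

# Toy Schnorr over Z_p^* -- illustrates only the presignature (offline/online)
# split; real deployments use a prime-order elliptic-curve group.
P = 2**127 - 1          # toy modulus
Q = P - 1               # exponent modulus for the sketch
G = 3                   # assumed generator for the sketch

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % Q

def offline_presign():
    """Message-independent work: pick a nonce and compute its commitment."""
    r = secrets.randbelow(Q)
    return r, pow(G, r, P)

def online_sign(x: int, r: int, R: int, msg: bytes):
    """Cheap message-dependent step: one hash plus one modular mul/add."""
    e = H(R.to_bytes(16, "big"), msg)
    return R, (r + e * x) % Q

def verify(X: int, msg: bytes, sig) -> bool:
    R, s = sig
    e = H(R.to_bytes(16, "big"), msg)
    return pow(G, s, P) == (R * pow(X, e, P)) % P

x = secrets.randbelow(Q)     # secret key (secret-shared in the actual protocol)
X = pow(G, x, P)             # public key
r, R = offline_presign()     # presignatures are churned out in batches offline
assert verify(X, b"hello", online_sign(x, r, R, b"hello"))
```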
To achieve this result, we provide two new innovations. The first is a new secret sharing protocol (again asynchronous, robust, and optimally resilient) that allows the dealer to securely distribute shares of a large batch of ephemeral secret keys and to publish the corresponding ephemeral public keys. To achieve better performance, our protocol minimizes public-key operations; in particular, it is based on a novel technique that avoids the traditional approach built on "polynomial commitments". The second innovation is a new algorithm to efficiently combine ephemeral public keys contributed by different parties (some possibly corrupt) into a smaller number of secure ephemeral public keys. This algorithm is based on a novel construction of a so-called "super-invertible matrix", together with a highly efficient algorithm for multiplying this matrix by a vector of group elements.
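A super-invertible matrix can be pictured as a Vandermonde matrix: every maximal square submatrix is invertible, so applying it to n contributions of which at least m come from honest parties yields m outputs that are effectively fresh random keys. The naive multiplication sketched below (toy parameters, not the paper's construction or its fast multiplication algorithm) only shows what "multiplying the matrix by a vector of group elements" means.

```python
# Toy illustration of applying a super-invertible (here: Vandermonde) matrix to
# a vector of group elements; exponents combine in the group. In a real
# instantiation the exponents live in a prime-order field.
P = 2**127 - 1   # toy modulus, same caveats as the sketch above
Q = P - 1

def vandermonde(m, n):
    """m x n Vandermonde-style matrix: row i holds (j+1)^i for j = 0..n-1."""
    return [[pow(j + 1, i, Q) for j in range(n)] for i in range(m)]

def prod_mod(xs, mod):
    acc = 1
    for x in xs:
        acc = (acc * x) % mod
    return acc

def matrix_times_group_vector(M, elems):
    """Row i of the output is prod_j elems[j]^M[i][j] (naive, O(m*n) exponentiations)."""
    return [
        prod_mod([pow(e, coeff, P) for e, coeff in zip(elems, row)], P)
        for row in M
    ]

# n ephemeral public keys contributed by different parties, of which at least
# m are guaranteed to come from honest parties; the m combined outputs can then
# be used as secure ephemeral public keys.
n, m = 6, 4
contributions = [pow(3, k * k + 1, P) for k in range(n)]   # stand-in inputs
combined = matrix_times_group_vector(vandermonde(m, n), contributions)
assert len(combined) == m
```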
As protocols for verifiably sharing a secret key with an associated public key and the technology of super-invertible matrices both play a major role in threshold cryptography and multi-party computation, our two new innovations should have applicability well beyond that of threshold Schnorr signatures.
31 July 2023
NUS-Singapore and the University of Sheffield, UK
Closing date for applications:
Contact: Dr Prosanta Gope
30 July 2023
Haochen Sun, Hongyang Zhang
In response to this challenge, we present zkDL, an efficient zero-knowledge proof of deep learning training. At the core of zkDL is zkReLU, a specialized zero-knowledge proof protocol with optimized proving time and proof size for the ReLU activation function, a major obstacle in verifiable training due to its non-arithmetic nature. To integrate zkReLU into the proof system for the entire training process, we devise a novel construction of an arithmetic circuit from neural networks. By leveraging the abundant parallel computation resources, this construction reduces proving time and proof sizes by a factor of the network depth. As a result, zkDL enables the generation of complete and sound proofs, taking less than a minute with a size of less than 20 kB per training step, for a 16-layer neural network with 200M parameters, while ensuring the privacy of data and model parameters.
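To see why ReLU resists direct arithmetization, note that a common generic encoding splits the input into non-negative parts and enforces a product constraint plus range checks; the range checks are the expensive part. The Python sketch below illustrates only this generic encoding and does not reproduce zkReLU's actual protocol or its optimizations; the bound is an assumed parameter.

```python
def relu_witness(x: int):
    """Prover-side: split x into non-negative parts p, n with x = p - n and p*n = 0."""
    return (x, 0) if x >= 0 else (0, -x)

def relu_constraints_hold(x: int, y: int, p: int, n: int, bound: int = 2**16) -> bool:
    """Verifier-side arithmetic constraints for y = ReLU(x).

    The range checks (0 <= p, n < bound) are what make ReLU "non-arithmetic":
    inside a proof system they are typically realized with bit decompositions,
    which is the cost that zkReLU is designed to reduce (details differ from
    this toy encoding).
    """
    return (
        x == p - n and       # consistency with the input
        p * n == 0 and       # at most one part is non-zero
        y == p and           # the output is the non-negative part
        0 <= p < bound and   # range checks
        0 <= n < bound
    )

x = -7
p, n = relu_witness(x)
assert relu_constraints_hold(x, max(x, 0), p, n)
```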