IACR News
Here you can see all recent updates to the IACR webpage.
20 October 2020
Eamonn W. Postlethwaite, Fernando Virdia
ePrint Report
As lattice-based key encapsulation, digital signature, and fully homomorphic encryption schemes near standardisation, ever more focus is being directed to the precise estimation of the security of these schemes. The primal attack reduces key recovery against such schemes to instances of the unique Shortest Vector Problem (uSVP). Dachman-Soled et al. (Crypto 2020) recently proposed a new approach for fine-grained estimation of the cost of the primal attack when using Progressive BKZ for the lattice reduction step. In this paper we review and extend their technique to BKZ 2.0 and provide extensive experimental evidence of its accuracy. Using this technique we also explain results from previous primal attack experiments by Albrecht et al. (Asiacrypt 2017) where attacks succeeded with smaller than expected block sizes. Finally, we use our simulators to re-estimate the cost of attacking the three lattice finalists of the NIST Post-Quantum Standardisation Process.
Pedro Branco, Nico Döttling, Sihang Pu
ePrint Report
Threshold Private Set Intersection (PSI) allows multiple parties to compute the intersection of their input sets if and only if the intersection is larger than $n-t$, where $n$ is the size of the sets and $t$ is some threshold. The main appeal of this primitive is that, in contrast to standard PSI, known upper-bounds on the communication complexity only depend on the threshold $t$ and not on the sizes of the input sets.
In this work, we present a Threshold PSI scheme for $N$ parties with communication complexity $\tilde O(Nt^2)$. At the heart of our construction is a new cardinality-test protocol, which allows the parties to determine if the intersection of their sets is sufficiently large.
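The ideal functionality computed by such a cardinality test, ignoring privacy entirely, is just a threshold check on the intersection size. A minimal (non-private) sketch of what the protocol computes:

```python
def above_threshold(X: set, Y: set, n: int, t: int) -> bool:
    """Ideal (non-private) functionality of the cardinality test:
    is the intersection of X and Y larger than n - t?"""
    return len(X & Y) > n - t

X = {1, 2, 3, 4, 5}
Y = {3, 4, 5, 6, 7}
# n = 5 and |X ∩ Y| = 3: passes for t = 3 (3 > 2), fails for t = 2 (3 > 3 is false)
assert above_threshold(X, Y, n=5, t=3)
assert not above_threshold(X, Y, n=5, t=2)
```

The point of the paper is realizing this check with communication that depends only on $t$, not on $n$; the sketch above shows only the predicate itself.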
Karim Baghery, Zaira Pindado, Carla Ràfols
ePrint Report
Among various NIZK arguments, zk-SNARKs are the most efficient constructions in terms of proof size and verification, which are two critical criteria for large-scale applications. Currently, Groth's construction, $\mathsf{Groth16}$, from Eurocrypt'16 is the most efficient and widely deployed one. However, it is proven to achieve only knowledge soundness, which does not prevent attacks by adversaries who have seen simulated proofs. There has been considerable progress in modifying $\mathsf{Groth16}$ to achieve simulation extractability to guarantee the non-malleability of proofs. We revise the Simulation Extractable (SE) version of $\mathsf{Groth16}$ proposed by Bowe and Gabizon that has the most efficient prover and $\mathsf{crs}$ size among the candidates, although it adds a Random Oracle (RO) to the original construction. We present a new version which requires 4 pairings in the verification, instead of 5. We also get rid of the RO at the cost of a collision-resistant hash function and a single new element in the $\mathsf{crs}$. Our construction is proven in the $\textit{generic group model}$ and seems to result in the most efficient SE variant of $\mathsf{Groth16}$ in most dimensions.
Kai-Min Chung, Serge Fehr, Yu-Hsuan Huang, Tai-Ning Liao
ePrint Report
We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). This technique has proven to be very powerful for reproving known lower bound results, but also for proving new results that seemed to be out of reach before. Despite its usefulness, however, the compressed oracle technique is still quite cumbersome to actually employ.
To start off with, we offer a concise yet mathematically rigorous exposition of the compressed oracle technique. We adopt a more abstract view than other descriptions found in the literature, which allows us to keep the focus on the relevant aspects. Our exposition easily extends to the parallel-query QROM, where in each query-round the considered quantum oracle algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis of quantum oracle algorithms.
Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, we show that, for typical examples, the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds.
We demonstrate this on a few examples, recovering known results (like the optimality of parallel Grover), but also obtaining new results (like the optimality of parallel BHT collision search). Our main application is to prove hardness of finding a $q$-chain, i.e., a sequence $x_0,x_1,\ldots,x_q$ with the property that $x_i = H(x_{i-1})$ for all $1 \leq i \leq q$, with fewer than $q$ parallel queries.
The above problem of producing a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete application of our new bound, we prove that the ``Simple Proofs of Sequential Work" proposed by Cohen and Pietrzak remain secure against quantum attacks. Such a proof is not simply a matter of plugging in our new bound; the entire protocol needs to be analyzed in the light of a quantum attack, and substantial additional work is necessary. Thanks to our framework, this can now be done with purely classical reasoning.
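The $q$-chain object at the heart of this result is easy to state concretely. A minimal sketch, using SHA-256 as a stand-in for the random oracle $H$:

```python
import hashlib

def make_chain(x0: bytes, q: int) -> list:
    """Build a q-chain x_0, x_1, ..., x_q with x_i = H(x_{i-1})."""
    chain = [x0]
    for _ in range(q):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify_chain(chain: list) -> bool:
    """Check that every link satisfies x_i = H(x_{i-1})."""
    return all(hashlib.sha256(chain[i - 1]).digest() == chain[i]
               for i in range(1, len(chain)))

chain = make_chain(b"seed", 5)
assert len(chain) == 6 and verify_chain(chain)
```

Producing such a chain classically requires $q$ sequential hash evaluations; the paper's bound shows that even a quantum adversary cannot do it with fewer than $q$ rounds of parallel queries.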
Ritam Bhaumik, Xavier Bonnetain, André Chailloux, Gaëtan Leurent, María Naya-Plasencia, André Schrottenloher, Yannick Seurin
ePrint Report
It was long thought that symmetric cryptography was only mildly affected by quantum attacks, and that doubling the key length was sufficient to restore security. However, recent works have shown that Simon's quantum period finding algorithm breaks a large number of MAC and authenticated encryption algorithms when the adversary can query the MAC/encryption oracle with a quantum superposition of messages. In particular, the OCB authenticated encryption mode is broken in this setting, and no quantum-secure mode is known with the same efficiency (rate-one and parallelizable).
In this paper we generalize the previous attacks, show that a large class of OCB-like schemes is unsafe against superposition queries, and discuss the quantum security notions for authenticated encryption modes. We propose a new rate-one parallelizable mode named QCB inspired by TAE and OCB and prove its security against quantum superposition queries.
19 October 2020
Joppe W. Bos, Joost Renes, Christine van Vredendaal
ePrint Report
We discuss how to efficiently utilize contemporary co-processors used for public-key cryptography for polynomial multiplication in lattice-based cryptography. More precisely, we focus on polynomial multiplication in rings of the form $Z[X]/(X^n + 1)$. We make use of the roots of unity in this ring and construct the Kronecker+ algorithm, a variant of Nussbaumer designed to combine especially well with the Kronecker substitution method. This is a symbolic NTT of variable depth that can be considered a generalization of Harvey's multipoint Kronecker substitution. Compared to straightforwardly combining Kronecker substitution with the state-of-the-art symbolic NTT algorithms Nussbaumer or Schönhage-Strassen, we halve the number or the bit size of the multiplied integers, respectively. Kronecker+ directly applies to the polynomial multiplication operations in the lattice-based cryptographic schemes Kyber and Saber, and we provide a detailed implementation analysis for the latter. This analysis highlights the potential of the Kronecker+ technique for commonly available multiplier lengths on contemporary co-processors.
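Plain Kronecker substitution, which Kronecker+ builds on, packs polynomial coefficients into one large integer so that a single integer multiplication yields the whole product polynomial. A minimal sketch for nonnegative coefficients (the symbolic-NTT layer that Kronecker+ adds is not shown):

```python
def kronecker_mul(a, b, k):
    """Multiply polynomials a, b (lists of nonnegative coefficients)
    via Kronecker substitution: evaluate both at x = 2^k, multiply as
    integers, then read the product's coefficients back from k-bit
    chunks. k must be large enough that no chunk overflows."""
    A = sum(c << (k * i) for i, c in enumerate(a))
    B = sum(c << (k * i) for i, c in enumerate(b))
    C = A * B
    mask = (1 << k) - 1
    out = []
    for _ in range(len(a) + len(b) - 1):
        out.append(C & mask)
        C >>= k
    return out

# (1 + 2x)(3 + x) = 3 + 7x + 2x^2
assert kronecker_mul([1, 2], [3, 1], k=16) == [3, 7, 2]
```

The cost is dominated by one multiplication of integers of roughly $n \cdot k$ bits each, which is exactly the operation that public-key co-processors accelerate.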
İrem Keskinkurt Paksoy, Murat Cenk
ePrint Report
Lattice-based NIST PQC finalists need efficient multiplication in $\mathbb{Z}_q[x]/(f(x))$. Multiplication in this ring can be performed very efficiently via the number theoretic transform (NTT) as done in CRYSTALS-Kyber if the parameters of the scheme allow it. If NTT is not supported, other multiplication algorithms must be employed. For example, if the modulus $q$ of the scheme is a power of two, as in Saber and NTRU, then NTT cannot be used directly. In this case, Karatsuba and Toom-Cook methods together with modular reduction are commonly used for multiplication in this ring. In this paper, we show that the Toeplitz matrix-vector product (TMVP) representation of modular polynomial multiplication yields better results than Karatsuba and Toom-Cook methods. We present three- and four-way TMVP formulas that we derive from three- and four-way Toom-Cook algorithms, respectively. We use the four-way TMVP formula to develop an algorithm for multiplication in the ring $\mathbb{Z}_{2^m}[x]/(x^{256}+1)$. We implement the proposed algorithm on the ARM Cortex-M4 microcontroller and apply it to Saber, which is one of the lattice-based finalists of the NIST PQC competition. We compare the results to previous implementations. The TMVP-based multiplication algorithm we propose is $20.83\%$ faster than the previous algorithm that uses a combination of Toom-Cook, Karatsuba, and schoolbook methods. Our algorithm also speeds up the key generation, encapsulation, and decapsulation algorithms of all variants of Saber. The speedups vary between $4.3\%$ and $39.8\%$. Moreover, our algorithm requires less memory than the others, except for the memory-optimized implementation of Saber.
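The identity underlying the TMVP approach is that multiplication by a fixed element of $\mathbb{Z}_q[x]/(x^n+1)$ is a linear map whose matrix is Toeplitz (negacyclic). A small sketch checking this against schoolbook multiplication; the sub-quadratic three- and four-way TMVP splittings from the paper are not shown:

```python
def negacyclic_mul(a, b, q):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1)."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:
                c[k - n] = (c[k - n] - a[i] * b[j]) % q  # x^n = -1
    return c

def tmvp_mul(a, b, q):
    """Same product expressed as a Toeplitz matrix-vector product:
    entry (i, j) is a[i-j], with wrapped entries negated
    (the sign again coming from x^n = -1)."""
    n = len(a)
    T = [[a[i - j] % q if i >= j else (-a[n + i - j]) % q
          for j in range(n)] for i in range(n)]
    return [sum(T[i][j] * b[j] for j in range(n)) % q for i in range(n)]

a, b, q = [1, 2, 3, 4], [5, 6, 7, 8], 2**13
assert negacyclic_mul(a, b, q) == tmvp_mul(a, b, q)
```

Because every diagonal of the matrix is constant, it can be split recursively like Toom-Cook, which is what the paper's three- and four-way formulas exploit.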
Nils Fleischhacker, Mark Simkin
ePrint Report
Robust property-preserving hash (PPH) functions, recently introduced by Boyle, Lavigne, and Vaikuntanathan [ITCS 2019], compress large inputs $x$ and $y$ into short digests $h(x)$ and $h(y)$ in a manner that allows for computing a predicate $P$ on $x$ and $y$ while only having access to the corresponding hash values. In contrast to locality-sensitive hash functions, a robust PPH function guarantees to correctly evaluate a predicate on $h(x)$ and $h(y)$ even if $x$ and $y$ are chosen adversarially \emph{after} seeing $h$.
Our main result is a robust PPH function for the exact Hamming distance predicate \[ \mathsf{HAM}^t(x, y) = \begin{cases} 1 &\text{if } d(x, y) \geq t \\ 0 & \text{otherwise}\\ \end{cases} \] where $d(x, y)$ is the Hamming distance between $x$ and $y$. Our PPH function compresses $n$-bit strings into $\mathcal{O}(t \lambda)$-bit digests, where $\lambda$ is the security parameter. The construction is based on the $q$-strong bilinear discrete logarithm assumption.
Along the way, we construct a robust PPH function for the set intersection predicate \[ \mathsf{INT}^t(X, Y) = \begin{cases} 1 &\text{if } \vert X \cap Y\vert > n - t \\ 0 & \text{otherwise}\\ \end{cases} \] which compresses sets $X$ and $Y$ of size $n$ with elements from some arbitrary universe $U$ into $\mathcal{O}(t\lambda)$-bit long digests. This PPH function may be of independent interest. We present an almost matching lower bound of $\Omega(t \log t)$ on the digest size of any PPH function for the intersection predicate, which indicates that our compression rate is close to optimal. Finally, we also show how to extend our PPH function for the intersection predicate to more than two inputs.
Yunhao Zhang, Srinath Setty, Qi Chen, Lidong Zhou, Lorenzo Alvisi
ePrint Report
The specific order of commands agreed upon when running state machine replication (SMR) is immaterial to fault-tolerance: all that is required is for all correct deterministic replicas to follow it. In the permissioned blockchains that rely on Byzantine fault tolerant (BFT) SMR, however, nodes have a stake in the specific sequence that the ledger records, as well as in preventing other parties from manipulating the sequencing to their advantage. Yet the traditional specification of SMR correctness has no language to express these concerns. This paper introduces Byzantine ordered consensus, a new primitive that augments the correctness specification of BFT SMR to include specific guarantees on the total orders it produces; and a new architecture for BFT SMR that, by factoring out ordering from consensus, can enforce these guarantees and prevent Byzantine nodes from controlling ordering decisions (a Byzantine oligarchy). These contributions are instantiated in Pompe, a BFT SMR protocol that is guaranteed to order commands in a way that respects a natural extension of linearizability.
Yunxiu Ye, Zhenfu Cao, Jiachen Shen
ePrint Report
Attribute-based encryption received widespread attention as soon as it was proposed. However, due to its specific characteristics, some restrictions on the attribute set in attribute-based encryption are not flexible enough in actual operation. In addition, since access authorities are determined according to users' attributes, users sharing the same attributes are difficult to distinguish. Once a malicious user makes illicit gains with their decryption authority, it is difficult to track down the specific user. This paper follows practical demands to propose a more flexible key-policy attribute-based encryption scheme with black-box traceability. The scheme has a constant size of public parameters which can be utilized to construct attribute-related parameters flexibly, and the method of traitor tracing in broadcast encryption is introduced to achieve effective malicious user tracing. In addition, we demonstrate the scheme's security and feasibility through security proofs and a performance evaluation.
Enis Ulqinaku, Hala Assal, AbdelRahman Abdou, Sonia Chiasson, Srdjan Čapkun
ePrint Report
FIDO's Universal-2-Factor (U2F) is a web-authentication mechanism designed to provide resilience to real-time phishing, a class of attacks that undermines multi-factor authentication by allowing an attacker to relay second-factor one-time tokens from the victim user to the legitimate website in real-time. A U2F dongle is simple to use, and is designed to ensure users have complete mental models of proper usage. We show that social engineering attacks allow an adversary to downgrade FIDO's U2F to alternative authentication mechanisms. Websites allow such alternatives to handle dongle malfunction or loss. All FIDO-supporting websites in Alexa's top 100 allow choosing alternatives to FIDO, and are thus vulnerable to real-time phishing attacks. We crafted a phishing website that mimics Google's login page and implements a FIDO-downgrade attack. We then ran a carefully-designed user study to test the effect on users. We found that, while registering FIDO as their second authentication factor, 55% of participants fell for real-time phishing, and another 35% would potentially be susceptible to the attack in practice.
Lauren De Meyer, Elke De Mulder, Michael Tunstall
ePrint Report
There are many examples of how to assess the side-channel resistance of a hardware implementation for a given order, where one has to take into account all transitions and glitches produced by a given design. However, microprocessors do not conform with the ideal circuit model which is typically used to gain confidence in the security of masking against side-channel attacks. As a result, masked software implementations in practice do not exhibit the security one would expect in theory.
In this paper, we generalize and extend work by Papagiannopoulos and Veshchikov to describe the ways in which a microprocessor may leak. We show that the sources of leakage are far more numerous than previously considered and highly dependent on the platform. We further describe how to write high-level code in the C programming language that allows one to work around common micro-architectural features. In particular, we introduce implementation techniques to reduce sensitive combinations made by the CPU and which are devised so as to be preserved through the optimizations made by the compiler. However, these techniques cannot be proven to be secure. In this paper, we seek to highlight leakage not considered in current models used in proofs and describe some potential solutions. We apply our techniques to two case studies (DES and AES) and show that they are able to provide a modest level of security on several platforms.
Gustavo Banegas, Daniel J. Bernstein, Iggy van Hoof, Tanja Lange
ePrint Report
This paper analyzes and optimizes quantum circuits for computing discrete logarithms on binary elliptic curves, including reversible circuits for fixed-base-point scalar multiplication and the full stack of relevant subroutines. The main optimization target is the size of the quantum computer, i.e., the number of logical qubits required, as this appears to be the main obstacle to implementing Shor's polynomial-time discrete-logarithm algorithm. The secondary optimization target is the number of logical Toffoli gates.
For an elliptic curve over a field of 2^n elements, this paper reduces the number of qubits to 7n+[log_2(n)]+9. At the same time this paper reduces the number of Toffoli gates to 48n^3+8n^(log_2(3)+1)+352n^2 log_2(n)+512n^2+O(n^(log_2(3))) with double-and-add scalar multiplication, and a logarithmic factor smaller with fixed-window scalar multiplication. The number of CNOT gates is also O(n^3). Exact gate counts are given for various sizes of elliptic curves currently used for cryptography.
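The stated counts are easy to evaluate for concrete curve sizes. A small sketch, reading the bracket in the qubit formula as a floor and omitting the $O(n^{\log_2 3})$ tail of the Toffoli count:

```python
from math import floor, log2

def qubits(n: int) -> int:
    """Qubit count 7n + floor(log2(n)) + 9 from the paper's formula."""
    return 7 * n + floor(log2(n)) + 9

def toffoli_leading(n: int) -> float:
    """Leading terms of the Toffoli count for double-and-add scalar
    multiplication; the O(n^log2(3)) tail is omitted."""
    return 48 * n**3 + 8 * n**(log2(3) + 1) + 352 * n**2 * log2(n) + 512 * n**2

# e.g. for a curve over GF(2^233), such as NIST B-233:
assert qubits(233) == 1647
```

With the cubic term dominating, halving $n$ cuts the Toffoli count by roughly a factor of eight, which is why the per-curve exact gate counts in the paper matter for concrete security estimates.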
Arne Deprez, Elena Andreeva, Jose Maria Bermudo Mera, Angshuman Karmakar, Antoon Purnal
ePrint Report
In this work we develop optimized software implementations for ForkAE, a second round candidate in the ongoing NIST lightweight cryptography standardization process. Moreover, we analyze the performance and efficiency of different ForkAE implementations on two embedded platforms: ARM Cortex-A9 and ARM Cortex-M0. First, we study portable ForkAE implementations. We apply a decryption optimization technique which allows us to accelerate decryption by up to 35%.
Second, we go on to explore platform-specific software optimizations. In platforms where cache-timing attacks are not a risk, we present a novel table-based approach to compute the SKINNY round function. Compared to the existing portable implementations, this technique speeds up encryption and decryption by 20% and 25%, respectively. Additionally, we propose a set of platform-specific optimizations for processors with parallel hardware extensions such as ARM NEON. Without relying on parallelism provided by long messages (cf. bit-sliced implementations), we focus on the primitive-level ForkSkinny parallelism provided by ForkAE to reduce encryption and decryption latency by up to 30%. We benchmark the performance of our implementations on the ARM Cortex-M0 and ARM Cortex-A9 processors and give a comparison with the other SKINNY-based schemes in the NIST lightweight competition, SKINNY-AEAD and Romulus.
Barbara Gigerl, Vedad Hadzic, Robert Primas, Stefan Mangard, Roderick Bloem
ePrint Report
The protection of cryptographic implementations against power analysis attacks is of critical importance for many applications in embedded systems. The typical approach of protecting against these attacks is to implement algorithmic countermeasures, like masking. However, implementing these countermeasures in a secure and correct manner is challenging. Masking schemes require the independent processing of secret shares, which is a property that is often violated by CPU microarchitectures in practice. In order to write leakage-free code, the typical approach in practice is to iteratively explore instruction sequences and to empirically verify whether there is leakage caused by the hardware for this instruction sequence or not. Clearly, this approach is neither efficient, nor does it lead to rigorous security statements.
In this paper, we overcome the current situation and present the first approach for co-design and co-verification of masked software implementations on CPUs. First, we present Coco, a tool that allows us to provide security proofs at the gate-level for the execution of a masked software implementation on a concrete CPU. Using Coco, we analyze the popular 32-bit RISC-V Ibex core, identify all design aspects that violate the security of our tested masked software implementations and perform corrections, mostly in hardware. The resulting secured Ibex core has an area overhead of around 10%, the runtime of software on this core is largely unaffected, and formal verification with Coco of, e.g., a first-order masked Keccak S-box running on the secured Ibex core takes around 156 seconds. To demonstrate the effectiveness of our suggested design modifications, we perform practical leakage assessments using an FPGA evaluation board.
Lichao Wu, Guilherme Perin, Stjepan Picek
ePrint Report
Deep learning-based SCA represents a powerful option for profiling side-channel analysis. Numerous results in the last few years indicate neural networks can break targets protected with countermeasures even with a relatively small number of attack traces. Intuitively, the more powerful neural network architecture we require, the more effort we need to spend on its hyperparameter tuning. Current results commonly use random search and reach good performance. Yet, we remain with the question of how good such architectures are compared with architectures that are carefully designed by following a specific methodology. Unfortunately, the works considering methodologies are sparse and difficult to use without prior knowledge about the target.
This work proposes an automated way for deep learning hyperparameter tuning that is based on Bayesian Optimization. We build a custom framework denoted as AutoSCA that supports both machine learning and side-channel metrics. Our experimental analysis shows that Bayesian optimization performs well regardless of the dataset, leakage model, or neural network type. What is more, we find a number of neural network architectures outperforming state-of-the-art attacks. Finally, we note that random search, despite being considered not particularly powerful, manages to reach top performance for a number of considered settings. We postulate this happens since the datasets are relatively easy to break, and there are many neural network architectures reaching top performance.
Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Enoch Peserico, Elaine Shi
ePrint Report
An oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for hiding a RAM's access pattern. That is, for every input the distribution of the observed locations accessed by the machine is essentially independent of the machine's secret inputs. Recent progress culminated in a work of Asharov et al. (EUROCRYPT '20), obtaining an ORAM with (amortized) logarithmic overhead in total work, which is known to be optimal.
Oblivious Parallel RAM (OPRAM) is a natural extension of ORAM to the (more realistic) parallel setting where several processors make concurrent accesses to a shared memory. It is known that any OPRAM must incur logarithmic work overhead and for highly parallel RAMs a logarithmic depth blowup (in the balls and bins model). Despite the significant recent advances, there is still a large gap: all existing OPRAM schemes incur a poly-logarithmic overhead either in total work or in depth.
Our main result closes the aforementioned gap and provides an essentially optimal OPRAM scheme. Specifically, assuming one-way functions, we show that any Parallel RAM with memory capacity~$N$ can be obliviously simulated in space $O(N)$, incurring only $O(\log N)$ blowup in (amortized) total work as well as in depth. Our transformation supports all PRAMs in the CRCW mode and the resulting simulation is in the CRCW mode as well.
18 October 2020
Intel Corp.
Job Posting
Job Description
As a Principal Engineer/Cryptographer at Intel, you will lead complex, multi-disciplinary projects to advance cutting-edge applied cryptography within Intel's chips. The ideal candidate is comfortable implementing multiple types of cryptographic algorithms and able to research new constructions and guide hardware, firmware and software teams. The candidate should be experienced in cryptographic solution formulation and problem solving.
Minimum work experience requirements:
10+ years experience in cryptography/cryptographic implementation and an advanced degree in cryptography or related discipline; or 15+ years experience in cryptography/cryptographic implementation
Responsibilities include the following:
- Drive a specific strategic cryptographic initiative across Intel
- Collaborate with internal stakeholders to contribute to other strategic objectives of Intel's Cryptography team
- Influence internal security policies and standards regarding cryptography and security
- Collaborate with other team members on internal research activities
- Perform cryptanalytical reviews of algorithms, protocols and implementations
- Track relevant state-of-the-art academic cryptographic research
- Guide and mentor the development of junior engineers in the technical leadership pipeline
Preferred qualifications:
- Experience with industrial security engineering, preferably significant contributions to large projects
- Knowledge of computer architecture, CPU, SoC, chipsets, BIOS, Firmware, Drivers, and other compute paradigms
- Very good understanding of side-channel attacks, including architectural and physical attacks
- Familiarity with hardware design toolsets, including RTL
- Familiarity and experience with software languages (C, C++, Java, Python, Go, etc.)
- Strong track record of contributions to the crypto community (papers in well-established conferences, patents, standards)
- Familiarity with latest developments in the area of post-quantum algorithms
Contact: David Wheeler (david.m.wheeler@intel.com)
More information: https://jobs.intel.com/ShowJob/Id/2616826/Principal-Engineer-Senior-Cryptographer
Centre for Wireless Communications, University of Oulu, Finland
Job Posting
We are looking for a PhD student and a Postdoc to work on Security of Beyond 5G networks under INSPIRE-5Gplus and 6G Flagship projects.
The PhD student position is open for students with a Master's degree in EE or CS fields.
More info about Projects:
INSPIRE-5Gplus: https://www.inspire-5gplus.eu/
6G Flagship: https://www.oulu.fi/6gflagship/
If you are interested, please contact: madhusanka.liyanage@oulu.fi or Pawani.porambage@oulu.fi
Closing date for applications:
Contact: Madhusanka Liyanage (madhusanka.liyanage@oulu.fi)
More information: https://sites.google.com/view/madhusanka/home
CISPA Helmholtz Center for Information Security
Job Posting
Dr. Yang Zhang (https://yangzhangalmo.github.io/) at CISPA Helmholtz Center for Information Security (Germany) is looking for several fully-funded Ph.D. students working on the following topics:
- Machine learning security and privacy
- Biomedical privacy
- Misinformation detection
Requirements:
- A bachelor/master degree in Computer Science, Information Security, or Mathematics
- Excellent English (knowledge of German is not required)
- Good knowledge about machine learning/data mining
- Excellent programming skills
We offer:
- Full-time working contract (12-month E13-level salary, ~2,400 euros per month)
- Excellent research environment
- Strong supervision
Your application should include:
- CV
- Transcript
We also have positions for postdocs, if you are interested, please send an email with your CV to zhang@cispa.de as well.
Closing date for applications:
Contact: Yang Zhang