IACR News
Here you can see all recent updates to the IACR webpage.
14 June 2021
Clearmatics Technologies
Job Posting
Clearmatics is a protocol engineering company that is building a new financial market architecture that is more open, fair, and resilient than the legacy systems in use today.
We are looking for a cryptography engineer who loves challenges and has a research-oriented mindset. You cultivate an adversarial attitude, always thinking ahead and trying to exploit the code you write and the protocols you design. You always look for ways to make your code more efficient without compromising readability and robustness. Code quality and consistency mean everything to you.
Above all, you are a truth seeker and an outstanding communicator and listener who is not afraid of challenging (and being challenged by) your peers. An unconditional lover of simplicity and robustness, you never miss an opportunity to learn and extend your knowledge base.
RESPONSIBILITIES
- Assist in the design of cryptographic protocols.
- Collaborate with your colleagues on the implementation of cryptographic primitives and protocols.
- Produce technical design specifications.
- Produce externally facing artefacts (e.g. blog posts, papers, documentation excerpts, etc.).
- Support research colleagues in conducting their research.
- Interface with the Engineering team to ease the transition of the research pieces of code into robust production software fully integrated with our stack.
- Keep up with new research in the space.
REQUIREMENTS
- Fluency in English (written and spoken).
- Background in applied Computer Science.
- Experience with system programming (C/C++/Rust).
- Strong applied cryptography skills (experience implementing robust elliptic curve cryptography).
- Outstanding algorithmic thinking.
- Strong focus on code quality/documentation and simplicity.
Nice to haves
- Knowledge of Unix and bash.
- Experience with constant time cryptography.
- Experience with cryptography on embedded systems.
- Experience with Ethereum or other blockchain projects.
- Experience contributing to open-source cryptography libraries.
- Experience with Python.
Closing date for applications:
Contact: Stephanie Hawkes (stephanie.hawkes@clearmatics.com) or apply via: https://boards.greenhouse.io/clearmatics/jobs/5326634002
More information: https://boards.greenhouse.io/clearmatics/jobs/5326634002
University of Stuttgart, Institute of Information Security
Job Posting
Fully-funded postdoc position.
The successful candidate is expected to work on tool-supported formal analysis of cryptographic protocols and web applications building, among others, on our work published at EuroS&P 2021, S&P 2019, CSF 2017, CCS 2016, CCS 2015, ESORICS 2015, and S&P 2014. One goal is to provide tool-supported security analysis based on DY* for our web infrastructure model (WIM).
The position is available immediately with an internationally competitive salary (German public salary scale TV-L E13 or TV-L E14, depending on the candidate's qualification).
The Institute of Information Security offers a creative, international environment for top-level research in Germany's high-tech region.
The successful candidate should have a Ph.D. (or should be very close to completion thereof) in Computer Science, Mathematics, Information Security, or a related field. We value strong analytical skills. Knowledge in one or more of the following fields is an asset:
- Formal Methods (Verification, Theorem Proving, F*, Type Checking, etc.)
- Cryptographic Protocol Analysis
- Web Security
The deadline for applications is July 4th, 2021.
Late applications will be considered until the position is filled. See the official job offering for details of how to apply.
Closing date for applications:
Contact: Prof. Dr. Ralf Küsters
Institute of Information Security
University of Stuttgart
Germany
https://sec.uni-stuttgart.de
More information: https://www.sec.uni-stuttgart.de/institute/job-openings/
Adi Akavia, Margarita Vald
ePrint Report
We show that the Li-Micciancio attack is only the tip of the iceberg:
1) We exhibit an input-recovery attack completely breaking the privacy of a wide and natural family of HE-based protocols, including protocols using only exact HE schemes and with an adversary exposed solely to encrypted data. This proves that cpa-security is insufficient to ensure privacy in a much broader context than previously known.
2) To address the threat exhibited by our attack, we introduce sufficient conditions, on either the encryption scheme or the protocol, that do guarantee privacy: (a) Every HE scheme with a sanitization algorithm (e.g., BGV and FHEW) can be transformed into a "sanitized" scheme so that protocols instantiated with it preserve privacy against malicious adversaries. (b) Moreover, we characterize a natural sub-family of these protocols for which cpa-security does suffice to guarantee privacy, albeit against semi-honest adversaries.
To prove (2a) we define a notion of circuit-privacy+ that lies between semi-honest and malicious circuit-privacy and realize it from existing schemes; this may be of independent interest.
Divesh Aggarwal, Eldon Chung, Maciej Obremski, Joao Ribeiro
ePrint Report
Motivated by this, 15 years ago, Bosley and Dodis asked whether it is even possible to build 2-out-of-2 secret sharing without access to uniform randomness. In this work, we make progress towards resolving this question.
We answer this question for secret sharing schemes with important additional properties, namely leakage-resilience or non-malleability. We prove that, unfortunately, for secrets that are not too small, it is impossible to construct either 2-out-of-2 leakage-resilient secret sharing or 2-out-of-2 non-malleable secret sharing without access to uniform randomness.
Given that the question of whether 2-out-of-2 secret sharing requires uniform randomness has been open for a long time, it is reasonable to consider intermediate problems towards resolving it. In a spirit similar to NP-completeness, we study how the existence of a t-out-of-n secret sharing scheme without access to uniform randomness is related to the existence of a t'-out-of-n' secret sharing scheme without access to uniform randomness for a different choice of the parameters t, n, t', n'.
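For context, the classical 2-out-of-2 construction makes the role of uniform randomness explicit: the secret is masked with a uniformly random string, and the open question asks whether this uniform source can be replaced by something weaker. A minimal XOR-sharing sketch (an illustration of the standard construction, not of any scheme from the paper):

```python
import secrets

def share(secret_bits):
    # 2-out-of-2 XOR secret sharing: the mask r must be uniformly
    # random -- exactly the resource the open question asks whether
    # one can do without.
    r = [secrets.randbits(1) for _ in secret_bits]
    return r, [s ^ m for s, m in zip(secret_bits, r)]

def reconstruct(share1, share2):
    # XOR the two shares to recover the secret; either share alone
    # is uniformly distributed and reveals nothing about the secret.
    return [a ^ b for a, b in zip(share1, share2)]
```

With a uniform mask, each share on its own is a uniform bit string; the impossibility results above concern what happens when no such uniform source is available.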
Mohammad Hassan Ameri, Alexander R. Block, Jeremiah Blocki
ePrint Report
We give two applications which highlight the utility of memory-hard puzzles. For our first application, we give a construction of a (one-time) memory-hard function (MHF) in the standard model, using our memory-hard puzzles and additionally assuming indistinguishability obfuscation. Our construction is the first provable MHF in the standard model --- prior MHF constructions require idealized models (e.g., the random oracle model) to prove security. For our second application, we show that any cryptographic puzzle (e.g., memory-hard, time-lock) can be used to construct resource-bounded locally decodable codes (LDCs) in the standard model, answering an open question of Blocki, Kulkarni, and Zhou (ITC 2020). Prior constructions required idealized primitives like random oracles and assumed that the encoding algorithm, unlike the channel, was not resource-constrained. In particular, assuming time-lock puzzles or memory-hard puzzles, we construct resource-bounded LDCs which are secure against low-depth channels or channels with small (amortized) area-time complexity, respectively.
Leemon Baird, Pratyay Mukherjee, Rohit Sinha
ePrint Report
Prior practical schemes for time-locked encryption rely on a clock-equipped trusted server, which periodically publishes a time-specific decryption key based on a long-term secret. Their main idea is to model time periods as identities in an identity-based encryption scheme. While such schemes allow encryption to future time periods, they offer limited support for decryption of past ciphertexts. In particular, they force a client to be online when the key is published, or to interact with the server to re-generate the key.
This paper proposes a new notion of time-locked encryption where an aggregated decryption key can be used to decrypt any ciphertext locked to a prior time. Furthermore, we decentralize the trust amongst a number of servers, such that the scheme can tolerate up to a threshold number of (malicious) corruptions. We call our notion threshold aggregated time-locked encryption (TATLE). We propose a practical construction that supports compact decryption keys as well as compact ciphertexts (both logarithmic in the total lifetime). Our construction is based on bilinear pairings and adapts ideas from Canetti et al.'s binary tree encryption [Eurocrypt 2003] and Naor et al.'s distributed pseudorandom functions [Eurocrypt 1999].
Martin Albrecht, Léo Ducas
ePrint Report
After almost 40 years of cryptanalytic applications, predicting and optimising lattice reduction algorithms remains an active area of research. While we do have theorems bounding the worst-case performance of these algorithms, those bounds are asymptotic and not necessarily tight when applied to practical or even cryptographic instances. Reasoning about the behaviour of those algorithms relies on heuristics and approximations, some of which are known to fail for relevant corner cases.
Decades after Lenstra, Lenstra, and Lovász gave birth to this fascinating and lively research area, this state of affairs became a more pressing issue recently. Motivated by post-quantum security, standardisation bodies, governments and industry started to move towards deploying lattice-based cryptographic algorithms. This spurred the refinement of those heuristics and approximations, leading to a better understanding of the behaviour of these algorithms over the last few years.
Lattice reduction algorithms, such as LLL and BKZ, proceed by repeated local improvements to the lattice basis, and each such local improvement means solving the short(est) vector problem in a lattice of smaller dimension. Therefore, two questions arise: how costly is it to find those local improvements, and what is the global behaviour as those improvements are applied?
While those two questions may not be perfectly independent, we will, in this survey, focus on the second one, namely the global behaviour of such algorithms, given oracle access for finding local improvements. Our focus on the global behaviour is motivated by our intent to draw more of the community's attention to this aspect. We will take a particular interest in the behaviour of such algorithms on a specific class of lattices underlying the most popular lattice problems used to build cryptographic primitives, namely the LWE problem and the NTRU problem. We will emphasise the approximations that have been made and their progressive refinements, and highlight open problems to be addressed.
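The "repeated local improvements" view of lattice reduction is easiest to see in dimension two, where the whole algorithm collapses to Lagrange/Gauss reduction. A toy sketch (illustrating the local-improvement loop, not LLL, BKZ, or any algorithm analysed in the survey):

```python
def gauss_reduce(b1, b2):
    """Reduce a 2D lattice basis by repeated local improvements
    (Lagrange/Gauss reduction) -- the dimension-2 analogue of the
    local SVP steps that LLL and BKZ apply in higher dimensions."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]

    while True:
        # keep b1 as the shorter of the two basis vectors
        if norm2(b1) > norm2(b2):
            b1, b2 = b2, b1
        # best integer multiple of b1 to subtract from b2
        mu = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        if mu == 0:
            return b1, b2  # no local improvement left: basis is reduced
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])

# e.g. gauss_reduce((1, 0), (100, 1)) returns ((1, 0), (0, 1))
```

In higher dimensions the analogous local step (an SVP oracle on a projected sublattice block) is expensive, which is precisely why the survey separates the cost of the oracle from the global behaviour of the tour of improvements.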
Pierre Civit, Maria Potop-Butucaru
ePrint Report
Tim Heldmann, Thomas Schneider, Oleksandr Tkachenko, Christian Weinert, Hossein Yalame
ePrint Report
In this paper, we make MPC practical for developers by automating circuit compilation based on the compiler toolchain LLVM. For this, we develop an LLVM optimizer suite consisting of multiple transform passes that operate on the LLVM intermediate representation (IR) and gradually lower functions to circuit level. Our approach supports various front-end languages (currently C, C++, and Fortran) and takes advantage of powerful source code optimizations built into LLVM. We furthermore make sure to produce circuits that are optimized for MPC, and even offer fully automated post-processing for efficient post-quantum MPC.
We empirically measure the quality of our compilation results and compare them to the state-of-the-art specialized MPC compiler HyCC (Büscher et al., CCS 2018). For all benchmarked HyCC example applications (e.g., biomatch and linear equation solving), our highly generalizable approach achieves similar quality in terms of gate count and composition.
Karim Eldefrawy, Julian Loss, Ben Terner
ePrint Report
In this paper we ask "what are the optimal thresholds in the cryptographic setting that can be tolerated with such mixes of corruptions and faults?" We develop an expected-constant round protocol tolerating $n > t_r+2t_s+2t_b$. We are unable to prove optimality of our protocol's corruption budget in the general case; however, when we constrain the adversary to either drop all or none of a sender's messages in a round, we prove our protocol achieves an optimal threshold of $n > t_r+t_s+2t_b$. We call this weakening of a send corruption a \emph{spotty send corruption}.
In light of this difference in corruption tolerance due to our weakening of a send corruption, we ask "how close (with respect to corruption thresholds) to a byzantine corruption is a send corruption?" We provide a treatment of the difficulty of dealing with send corruptions in protocols with sublinear rounds. As an illustrative and surprising example (even though not in sublinear rounds), we show that the classical Dolev-Strong broadcast protocol degrades from $n > t_b$ corruptions in the byzantine-only model to $n > 2t_s+2t_b$ when send-corrupt parties' outputs must be consistent with those of honest parties; we also show why other recent dishonest-majority broadcast protocols degrade similarly. We leave open the question of optimal corruption tolerance for both send and byzantine corruptions.
Wei Jiang
ePrint Report
Si Gao, Elisabeth Oswald, Dan Page
ePrint Report
Nils Fleischhacker, Kasper Green Larsen, Mark Simkin
ePrint Report
In this work we construct robust property-preserving hash functions for the Hamming-distance predicate, which distinguishes inputs with a Hamming distance of at least some threshold $t$ from those with distance less than $t$. The security of the construction is based on standard lattice hardness assumptions.
Our construction has several advantages over the best known previous construction by Fleischhacker and Simkin (Eurocrypt 2021). Our construction relies on a single well-studied hardness assumption from lattice cryptography, whereas the previous work relied on a newly introduced family of computational hardness assumptions. In terms of computational effort, our construction only requires a small number of modular additions per input bit, whereas the work of Fleischhacker and Simkin required several exponentiations per bit as well as the interpolation and evaluation of high-degree polynomials over large fields. An additional benefit of our construction is that the description of the hash function can be compressed to $\lambda$ bits, assuming a random oracle. Previous work has descriptions of length $O(\ell \lambda)$ bits for input bit-length $\ell$, which have a secret structure and thus cannot be compressed.
We prove a lower bound on the output size of any property-preserving hash function for the Hamming-distance predicate. The bound shows that the size of our hash value is not far from optimal.
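For reference, the predicate being preserved is just a threshold on Hamming distance. A direct evaluation on the plaintext inputs looks as follows (a trivial sketch of the predicate itself; the paper's contribution is evaluating it from short, robust hashes of x and y rather than from the inputs):

```python
def hamming_at_least(x, y, t):
    # True iff the Hamming distance between equal-length bit strings
    # x and y is at least the threshold t.
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y)) >= t

# e.g. hamming_at_least([0, 1, 1, 0], [1, 1, 0, 0], 2) is True
```

A property-preserving hash compresses x and y to short digests from which this yes/no answer can still be computed, even against adversarially chosen inputs.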
Madhurima Mukhopadhyay, Palash Sarkar
ePrint Report
Akashdeep Saha, Urbi Chatterjee, Debdeep Mukhopadhyay, Rajat Subhra Chakraborty
ePrint Report
Amund Askeland, Sondre Rønjom
ePrint Report
Jongkil Kim, Seyit Camtepe, Joonsang Baek, Willy Susilo, Josef Pieprzyk, Surya Nepal
ePrint ReportYael Tauman Kalai, Vinod Vaikuntanathan, Rachel Yun Zhang
ePrint Report
- First, we show that Kilian's protocol, instantiated with a computationally non-signaling PCP (Brakerski, Holmgren, and Kalai, STOC 2017) and a somewhere statistically binding hash family (Hubacek and Wichs, ITCS 2015), is an SSS argument.
- Second, we show that the soundness of SSS arguments can be proved in a straight-line manner, implying that they are also post-quantum sound if the underlying assumption is post-quantum secure. This provides a straightforward proof that Kilian's protocol, instantiated this way, is post-quantum sound under the post-quantum hardness of LWE (though we emphasize that a computationally non-signaling PCP exists only for deterministic languages, and more generally for specific subclasses of non-deterministic languages such as $\mathsf{NTISP}$, but not for all of $\mathsf{NP}$).
- We put forward a natural conjecture that constant-round SSS arguments can be soundly converted into non-interactive arguments via the Fiat-Shamir transformation. We argue that SSS arguments evade the current Fiat-Shamir counterexamples, including the one for Kilian's protocol (Bartusek, Bronfman, Holmgren, Ma, and Rothblum, TCC 2019), by requiring additional properties from both the hash family and the PCP.
As an additional result, we show that by using a computationally non-signaling PCP and a somewhere statistically binding hash family, one can efficiently convert any succinct non-interactive argument (SNARG) for $\mathsf{BatchNP}$ into a SNARG for $\mathsf{P}$.
Sven Heiberg, Kristjan Krips, Jan Willemson
ePrint Report
Yongjun Zhao, Huaxiong Wang, Kwok-Yan Lam
ePrint ReportUnfortunately, existing volume-hiding SSE schemes do not support atomic updates (i.e., addition/deletion of an arbitrary keyword-document pair), which is the most common update operation considered in the SSE literature. Meanwhile, recent volumetric attacks (Wang et al., EuroS&P 20 & Blackstone et al., NDSS 20) indeed target dynamic databases.
We initiate a formal study of volume-hiding dynamic SSE. We extend the existing definition of the volume-hiding leakage function to the dynamic setting and present efficient constructions VH-DSSE and VH-DSSE^k. VH-DSSE suffers from a non-negligible correctness error. To remedy this disadvantage, we propose a multi-copy construction VH-DSSE^k that amplifies correctness by parallel repetition. As a side contribution, both VH-DSSE and VH-DSSE^k satisfy the strongest notions of backward privacy, which, to the best of our knowledge, makes them the first such constructions in the literature.