IACR News
Here you can see all recent updates to the IACR webpage.
08 June 2020
Sumanta Sarkar, Yu Sasaki, Siang Meng Sim
ePrint Report
Shashank Agrawal, Saikrishna Badrinarayanan, Payman Mohassel, Pratyay Mukherjee, Sikhar Patranabis
ePrint Report
We propose a new generic framework called Biometric Enabled Threshold Authentication (BETA) to protect sensitive client-side information like biometric templates and cryptographic keys. Towards this, we formally introduce the notion of Fuzzy Threshold Tokenizer (FTS) where an initiator can use a "close" biometric measurement to generate an authentication token if at least $t$ (the threshold) devices participate. We require that the devices only talk to the initiator, and not to each other, to capture the way user devices are connected in the real world. We use the universal composability (UC) framework to model the security properties of FTS, including the unforgeability of tokens and the privacy of the biometric values (template and measurement), under a malicious adversary. We construct three protocols that meet our definition.
Our first two protocols are general feasibility results that work for any distance function, any threshold $t$, and tolerate the maximal (i.e. $t-1$) amount of corruption. They are based on any two-round UC-secure multi-party computation protocol in the standard model (with a CRS) and on threshold fully homomorphic encryption, respectively. We show how to effectively use these primitives to build protocols in a constrained communication model with just four rounds of communication.
For the third protocol, we consider inner-product based distance metrics (cosine similarity, Euclidean distance, etc.) specifically, motivated by the recent interest in its use for face recognition. We use Paillier encryption, efficient NIZKs for specific languages, and a simple garbled circuit to build an efficient protocol for the common case of $n=3$ devices with one compromised.
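As a rough illustration of the inner-product based matching this third protocol targets, the Python sketch below computes cosine similarity in the clear and compares it to a threshold; the 0.85 cutoff and function names are illustrative assumptions, and none of the cryptographic machinery (Paillier encryption, NIZKs, garbled circuits) appears here.

import numpy as np

def cosine_similarity(template, measurement):
    # Inner-product based similarity between the stored biometric template
    # and a fresh measurement, both real-valued feature vectors.
    return float(np.dot(template, measurement) /
                 (np.linalg.norm(template) * np.linalg.norm(measurement)))

def is_close(template, measurement, threshold=0.85):
    # Accept if the measurement is "close" to the template; the cutoff is
    # an illustrative choice, not a value taken from the paper.
    return cosine_similarity(template, measurement) >= threshold

# Toy usage: a 128-dimensional embedding and a slightly noisy re-measurement.
rng = np.random.default_rng(0)
template = rng.normal(size=128)
measurement = template + 0.1 * rng.normal(size=128)
print(is_close(template, measurement))  # True for small noise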
Lübeck, Germany, 18 November - 20 November 2020
Event Calendar
Submission deadline: 3 July 2020
Notification: 4 September 2020
Technology Innovation Institute - Abu Dhabi, UAE
Job Posting
Technology Innovation Institute (TII) is a publicly funded research institute, based in Abu Dhabi, United Arab Emirates. It is home to a diverse community of leading scientists, engineers, mathematicians, and researchers from across the globe, transforming problems and roadblocks into pioneering research and technology prototypes that help move society ahead. At TII, we help society to overcome its biggest hurdles through a rigorous approach to scientific discovery and inquiry, using state-of-the-art facilities and collaboration with leading international institutions.
As a Symmetric Cryptography Researcher, you will:
Minimum qualifications:
Closing date for applications:
Contact: Mehdi Messaoudi - Talent Acquisition Partner
Technology Innovation Institute - Abu Dhabi, UAE
Job Posting
Minimum Qualifications:
Closing date for applications:
Contact: Mehdi Messaoudi - Talent Acquisition Partner
More information: https://www.linkedin.com/company/tiiuae/about/
ISAE SUPAERO, Toulouse, France
Job Posting
In late 2016, NIST issued a call for proposals for the standardization of quantum-resistant cryptographic systems for encryption, key exchange, and signature primitives (hereafter NIST PQC). The standardization process is currently ending its second phase, with only 26 candidates remaining.
Each submission comes with a software implementation, targeting standard security levels for widespread applications, such as e-commerce.
Topic 1: General improvement of post-quantum primitives
During this thesis, the successful candidate will study the NIST PQC submissions still in the running for standardization and will propose modifications that improve the submissions in general (e.g. tighter reductions, improved theoretical error rate analysis, etc.), or that provide specific advantages in constrained settings (e.g. software/hardware implementation simplicity, reduced bandwidth, reduced latency, etc.).
Topic 2: Hardware implementations of cryptographic schemes based on error-correcting codes and/or lattices
The software performance of NIST PQC submissions has been thoroughly studied. On the other hand, the hardware performance (e.g. energy cost or gate cost on broadly available FPGAs) of many submissions is still not very well understood. During this thesis, the successful candidate will study code-based and/or lattice-based NIST PQC submissions, and propose hardware implementations of both the original submissions and variations designed by the candidate to improve hardware performance.
Closing date for applications:
Contact: Carlos Aguilar Melchor
Technology Innovation Institute - Abu Dhabi, UAE
Job Posting
As a Post-Quantum Crypto Researcher, you will:
Minimum qualifications:
Closing date for applications:
Contact: Mehdi Messaoudi - Talent Acquisition Partner
More information: https://www.linkedin.com/company/tiiuae/about/
07 June 2020
Alexander Munch-Hansen, Claudio Orlandi, Sophia Yakoubov
ePrint Report
Our contributions in this paper are two-fold. First, we strengthen existing definitions of threshold ring signatures in a natural way; we demand that a signer cannot be de-anonymized even by their fellow signers. This is crucial, since in applications where a signer's anonymity is important, we do not want that anonymity to be compromised by a single insider.
Second, we give the first rigorous construction of a threshold ring signature with size independent of $n$, the number of users in the larger group. Instead, our signatures have size linear in $t$, the number of signers. This is also a very important contribution; signers should not have to choose between achieving their desired degree of anonymity (possibly very large $n$) and their need for communication efficiency.
T-H. Hubert Chan, Naomi Ephraim, Antonio Marcedone, Andrew Morgan, Rafael Pass, Elaine Shi
ePrint Report
These works, however, leave open the question of how to set the puzzle difficulty in a setting where the computational power in the network is changing. Nakamoto's protocol indeed also includes a description of a difficulty update procedure. A recent work by Garay et al. (Crypto'17) shows that a variant of this difficulty adjustment procedure can be used to obtain a sound protocol as long as the computational power does not change too fast --- however, under two restrictions: 1) their analysis assumes that the attacker cannot delay network messages, and 2) the changes in the network's computational power are statically set (i.e., cannot be adaptively selected by the adversary). In this work, we show the same result but without these two restrictions, demonstrating the soundness of a (slightly different) difficulty update procedure, assuming only that the computational power in the network does not change too fast (as a function of the maximum network message delays); as an additional contribution, our analysis yields a tight bound on the ``chain quality'' of the protocol.
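The abstract does not spell out the (slightly different) difficulty update procedure it analyzes; as a point of reference, the Python sketch below shows a generic, Bitcoin-style epoch retargeting rule in which the difficulty is rescaled by the ratio of expected to observed epoch duration. All parameters and the clamping factor are illustrative, and this is not the procedure from the paper.

def retarget_difficulty(old_difficulty, window_elapsed_seconds,
                        blocks_in_window=2016, target_block_interval=600.0,
                        max_adjust=4.0):
    # Scale the difficulty by expected/observed epoch duration, clamped so a
    # sudden hash-rate change cannot move it too far in a single epoch.
    expected = blocks_in_window * target_block_interval
    ratio = expected / window_elapsed_seconds
    ratio = max(1.0 / max_adjust, min(max_adjust, ratio))
    return old_difficulty * ratio

# If the last 2016 blocks arrived in half the expected time, hash power has
# roughly doubled, so the difficulty doubles.
print(retarget_difficulty(1_000_000.0, 2016 * 300.0))  # 2000000.0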
Riad S. Wahby, Dan Boneh, Christopher Jeffrey, Joseph Poon
ePrint Report
In this work, we address this issue by defining a private airdrop and describing concrete schemes for widely-used user credentials, such as those based on ECDSA and RSA. Our private airdrop for RSA builds upon a new zero-knowledge argument of knowledge of the factorization of a committed secret integer, which may be of independent interest. We also design a private genesis airdrop that efficiently sends private airdrops to millions of users at once. Finally, we implement and evaluate. Our fastest implementation takes 40--180 ms to generate and 3.7--10 ms to verify an RSA private airdrop signature. Signatures are 1.8--3.3 kiB depending on the security parameter.
05 June 2020
Matthias Fitzi, Peter Gazi, Aggelos Kiayias, Alexander Russell
ePrint Report
Our main result is a black-box security-amplifying combiner based on parallel composition of $m$ blockchains that achieves $\Theta(m)$-fold security amplification or, equivalently, $\Theta(m)$-fold reduction in latency. Our construction breaks the latency barrier to achieve, for the first time, a worst-case constant-time-settlement ledger based purely on Nakamoto longest-chain consensus: Transaction settlement can be accelerated to a constant multiple of block propagation time with negligible error.
Operationally, our construction shows how to view any family of blockchains as a unified, virtual ledger without requiring any coordination among the chains or any new protocol metadata. Users of the system have the option to inject a transaction into a single constituent blockchain or--if they desire accelerated settlement--all of the constituent blockchains. Our presentation and proofs introduce a new formalism for reasoning about blockchains, the dynamic ledger, and articulate our constructions as transformations of dynamic ledgers that amplify security. We additionally illustrate the versatility of this formalism by presenting a class of robust-combiner constructions for blockchains that can protect against complete adversarial control of a minority of a family of blockchains.
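As a toy rendering of this interface (not the combiner analyzed in the paper), the Python sketch below treats each constituent chain as a black box and exposes a single virtual ledger: a user injects a transaction into every chain, and the illustrative settlement rule declares it settled as soon as any one chain has settled it. All class and method names are assumptions made for the sketch.

class Chain:
    # One constituent blockchain, viewed only through its settled transactions.
    def __init__(self):
        self.settled = set()

    def inject(self, tx):
        # Stand-in for broadcasting tx to this chain; here it settles at once.
        self.settled.add(tx)

class VirtualLedger:
    # Reads m independent chains as a single ledger, with no coordination
    # between the chains and no extra protocol metadata.
    def __init__(self, chains):
        self.chains = chains

    def inject_everywhere(self, tx):
        # Users who want accelerated settlement send the tx to every chain.
        for chain in self.chains:
            chain.inject(tx)

    def is_settled(self, tx):
        # Illustrative rule only: settled once any constituent chain settles it.
        return any(tx in chain.settled for chain in self.chains)

ledger = VirtualLedger([Chain() for _ in range(10)])
ledger.inject_everywhere("pay Alice 5")
print(ledger.is_settled("pay Alice 5"))  # True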
Chiara Spadafora, Riccardo Longo, Massimiliano Sala
ePrint Report
Wenbo MAO, Wenxiang WANG
ePrint Report
Leonie Reichert, Samuel Brack, Björn Scheuermann
ePrint Report
Sebastien Carre, Sylvain Guilley, Olivier Rioul
ePrint ReportCrypto Group at IST Austria
ePrint Report
There are many important aspects one wants those protocols to achieve, in particular simplicity/efficiency, privacy and robustness, the latter including some security against false positives, that is, users getting alerts despite not having been in proximity with a diagnosed user.
In the existing proposals one can trigger false positives on a massive scale by an ``inverse-Sybil'' attack, where a large number of devices (malicious users or hacked phones) pretend to be the same user, such that later, just a single person needs to be diagnosed (and allowed to upload) to trigger an alert for all users that were in proximity to any of this large group of devices.
We propose the first protocols that do not succumb to such attacks, assuming the devices involved in the attack do not constantly communicate, which we observe is a necessary assumption. Our first protocol requires devices to non-interactively exchange values (as, e.g., in DESIRE), while the second requires that the devices have access to some location-dependent coordinate (like coarse GPS coordinates or cell tower IDs).
The high-level idea of the protocols is to derive the values to be broadcast via a hash chain, so that two (or more) devices that want to launch an inverse-Sybil attack will not be able to connect their respective chains, and thus only one of them will be able to upload. Apart from achieving strong privacy and good efficiency, a main challenge is to force the chains on different devices to diverge, which we do by infusing unpredictable data (randomness from encounters in the first protocol, location data in the second). Our protocols also achieve security against replay attacks, and the second even against relay attacks.
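A minimal Python sketch of this hash-chain derivation follows; the function names and the exact way the unpredictable data is mixed in are illustrative assumptions, not the paper's construction.

import hashlib
import os

def next_broadcast_value(prev_value, unpredictable_data):
    # Derive the next value to broadcast from the previous chain value and
    # locally observed unpredictable data (e.g. randomness from an encounter,
    # or a coarse location coordinate). Combining them with a single SHA-256
    # call is an illustrative choice.
    return hashlib.sha256(prev_value + unpredictable_data).digest()

# Toy run: a device walks its chain forward, folding in fresh encounter data
# at each step. A second device that does not observe the same encounters
# cannot reproduce this chain, which is what blocks the inverse-Sybil strategy.
value = os.urandom(32)                      # chain seed
for _ in range(5):
    encounter_randomness = os.urandom(16)   # stands in for data from a nearby device
    value = next_broadcast_value(value, encounter_randomness)
    print(value.hex()[:16])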
Avijit Dutta, Mridul Nandi, Abishanka Saha
ePrint Report
Behzad Abdolmaleki, Helger Lipmaa, Janno Siim, Michał Zając
ePrint Report
Sahiba Suryawanshi, Dhiman Saha, Satyam Sachan
ePrint ReportChao Sun, Mehdi Tibouchi, Masayuki Abe
ePrint Report
In this paper, we first examine more generally how the hardness of the problem varies with the number of available samples. Under standard heuristics on the Arora--Ge polynomial system, we show that, for any $\epsilon >0$, binary error LWE can be solved in polynomial time $n^{O(1/\epsilon)}$ given $\epsilon\cdot n^{2}$ samples. Similarly, it can be solved in subexponential time $2^{\tilde O(n^{1-\alpha})}$ given $n^{1+\alpha}$ samples, for $0<\alpha<1$.
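For context (standard background on the Arora--Ge approach, not a claim from the paper): a binary error $e \in \{0,1\}$ satisfies $e(e-1)=0$, so each sample $(\mathbf{a}_i, b_i)$ with $b_i = \langle \mathbf{a}_i, \mathbf{s}\rangle + e_i \bmod q$ yields the quadratic constraint $(b_i - \langle \mathbf{a}_i, \mathbf{s}\rangle)(b_i - \langle \mathbf{a}_i, \mathbf{s}\rangle - 1) \equiv 0 \pmod q$. Linearizing over the roughly $n^2/2$ monomials $s_j s_k$ turns these into a linear system that heuristically becomes solvable once about $n^2$ samples are available; with fewer samples one must work with higher-degree combinations of the constraints, which is consistent with the $n^{O(1/\epsilon)}$ bound stated above.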
As a second contribution, we also generalize the binary error LWE problem to the case of a non-uniform error probability, and analyze the hardness of the non-uniform binary error LWE with respect to the error rate and the number of available samples. We show that, for any error rate $0 < p < 1$, non-uniform binary error LWE is also as hard as worst-case lattice problems provided that the number of samples is suitably restricted. This is a generalization of Micciancio and Peikert's hardness proof for uniform binary error LWE. Furthermore, we also discuss attacks on the problem when the number of available samples is linear but significantly larger than $n$, and show that for sufficiently low error rates, subexponential or even polynomial time attacks are possible.