International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

09 September 2016

University of Edinburgh
Job Posting
Postdoctoral research position in blockchain technology at the University of Edinburgh

Applications are sought for the post of a postdoctoral research associate in applied cryptography and computer security, with emphasis on blockchain technology and cryptocurrencies.

The post is full time and is available from 1 March 2017, or a mutually agreed start date, for a 24-month fixed-term contract. An earlier start may be possible.

The position is funded by EPSRC project OXCHAIN. The OXCHAIN project seeks to use distributed ledger technology to provide a secure platform for the emerging circular economy and demonstrate it through addressing specific challenges in the operation of the UK-based charity Oxfam.

The position to be filled will be a critical one in the project; the post holder will be tasked with the design, analysis and implementation of a novel blockchain system with suitable characteristics to support the development of an internal token economy. The researcher will also maintain a strong connection with other OXCHAIN team members who focus on design aspects and are tasked with the user experience dimension of the project.

Essential qualifications include a Ph.D. in Computer Science and a good understanding of both theoretical and applied concepts in computer security, cryptography, software engineering, distributed systems, and peer-to-peer networks. Desired qualifications include familiarity with cryptocurrencies.

Closing date for applications: 1 March 2017

Contact: Aggelos Kiayias, Chair in Cyber Security and Privacy, University of Edinburgh, Informatics Forum, Office 5.16, 10 Crichton St, Edinburgh, Midlothian EH8 9AB, United Kingdom. Tel. +44 (0) 131 6505129, akiayias (at) inf.ed.ac.uk

(please put OXCHAIN position in the e-mail subject)

Hong Kong Applied Science and Technology Research Institute Company Limited
Job Posting

Job Responsibilities:

• To design and develop cryptographic protocols and schemes

• To design, analyze and implement cryptographic systems and related systems such as blockchain

• To study the latest cryptographic algorithms and protocols

Requirements:

• Master's degree in computer science, electronic engineering or other relevant disciplines, with 3+ years of experience; less experience required for PhD holders

• Experience in cryptographic system design and cryptanalysis

• Deep knowledge of number theory and security proofs

• Hands-on experience with C/C++ and Java

• Experience with cryptographic libraries such as OpenSSL, MIRACL and PBC preferred

• Experience developing cloud computing systems is an advantage, but not a must

• Strong interpersonal and communication skills

• Good command of both written and spoken English and Chinese

Closing date for applications: 15 September 2016

Contact: charlenechoo (at) astri.org

More information: http://www.astri.org/careers/work-at-astri/jobs/software-engineer-applied-cryptography/

Worcester Polytechnic Institute, USA
Job Posting
Vernam Lab at WPI is looking for highly motivated and qualified candidates to fill PhD positions for research in embedded system security and applied cryptography.

Specific topics include:

* Cache-based side-channel attacks; analysis and countermeasures

* Physical fault and side channel analysis and countermeasures

* Secure embedded system design

Candidates should have a strong background in electronics, computer science or applied mathematics, with an interest in algorithms, computer architecture, and signal processing. Prior experience in software or hardware development, code analysis and code generation, and/or signal processing is an asset.

We offer a competitive salary and an international cutting-edge research program in an attractive working environment. WPI is a highly-ranked research university in the Boston area, and offers the opportunity to collaborate with world-class faculty and students in a collegial environment. We maintain close connections with surrounding universities and companies.

Candidates can start as early as January.

Closing date for applications: 1 October 2016

Contact: Thomas Eisenbarth

teisenbarth (at) wpi.edu

More information: http://v.wpi.edu/positions/

08 September 2016

Steven D. Galbraith, Christophe Petit, Barak Shani, Yan Bo Ti
ePrint Report
We study cryptosystems based on supersingular isogenies. This is an active area of research in post-quantum cryptography. Our first contribution is to give a very powerful active attack on the supersingular isogeny encryption scheme. This attack can only be prevented by using a (relatively expensive) countermeasure proposed by Kirkwood et al. Our second contribution is to show that the security of all schemes of this type depends on the difficulty of computing the endomorphism ring of a supersingular elliptic curve. This result gives significant insight into the difficulty of the isogeny problem that underlies the security of these schemes. Our third contribution is to give a reduction that uses partial knowledge of shared keys to determine an entire shared key. This can be used to retrieve the secret key, given information leaked from a side-channel attack on the key exchange protocol. A corollary of this work is the first bit security result for the supersingular isogeny key exchange: Computing any component of the $j$-invariant is as hard as computing the whole $j$-invariant.

Our paper therefore provides an improved understanding of the security of these cryptosystems. We stress that our work does not imply that these systems are insecure, or that they should not be used. However, it highlights that implementations of these schemes will need to take account of the risks associated with various active and side-channel attacks.
Qian Guo, Thomas Johansson, Paul Stankovski
ePrint Report
Algorithms for secure encryption in a post-quantum world are currently receiving a lot of attention in the research community, including several larger projects and a standardization effort from NIST. One of the most promising algorithms is the code-based scheme called QC-MDPC, which has excellent performance and a small public key size.

In this work we present a very efficient key recovery attack on the QC-MDPC scheme using the fact that decryption uses an iterative decoding step, and this can fail with some small probability. We identify a dependence between the secret key and the failure in decoding. This can be used to build what we refer to as a distance spectrum for the secret key, which is the set of all distances between any two ones in the secret key. In a reconstruction step we then determine the secret key from the distance spectrum. The attack has been implemented and tested on a proposed instance of QC-MDPC for 80-bit security. It successfully recovers the secret key in minutes.

A slightly modified version of the attack can be applied to proposed versions of the QC-MDPC scheme that provide IND-CCA security. The attack is a bit more complex in this case, but its complexity is still far below the security level. The reason why we can break schemes with proven CCA security is that the model used in these proofs typically does not account for the possibility of decoding errors.
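To make the notion concrete, here is a minimal Python sketch of the distance spectrum described above, computed for one circulant block of a sparse secret key; the cyclic-distance convention and the function name are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch (not the authors' code): the distance spectrum of a
# sparse binary vector, i.e. the set of distances between any two ones.
# Distances are taken cyclically, matching the circulant blocks of a
# QC-MDPC secret key.
def distance_spectrum(support, block_len):
    """support: sorted positions of the ones in one circulant block."""
    spectrum = set()
    for i in range(len(support)):
        for j in range(i + 1, len(support)):
            d = (support[j] - support[i]) % block_len
            spectrum.add(min(d, block_len - d))
    return spectrum

# Toy example: a block of length 16 with ones at positions 1, 5 and 12.
print(distance_spectrum([1, 5, 12], 16))  # {4, 5, 7}
```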
Zejun Xiang, Wentao Zhang, Zhenzhen Bao, Dongdai Lin
ePrint Report
Division property is a generalized integral property proposed by Todo at EUROCRYPT 2015; very recently, Todo et al. proposed the bit-based division property and applied it to SIMON32 at FSE 2016. However, this technique can only be applied to block ciphers with block size no larger than 32 due to its high time and memory complexity. In this paper, we extend the Mixed Integer Linear Programming (MILP) method, which has been used to search for differential characteristics and linear trails of block ciphers, to search for integral distinguishers based on division property for block ciphers with block size larger than 32.

Firstly, we study how to model the division property propagations of three basic operations (copy, bitwise AND, XOR) and an Sbox operation by linear inequalities, based on which we are able to construct a linear inequality system that accurately describes the division property propagations of a block cipher given an initial division property. Secondly, by choosing an appropriate objective function, we convert a search algorithm under Todo's framework into an MILP problem, and we use this MILP problem to search for integral distinguishers. As an application of our technique, we have searched for integral distinguishers of SIMON, SIMECK, PRESENT, RECTANGLE, LBlock and TWINE. Our results show that we can find 14-, 16-, 18-, 22- and 26-round integral distinguishers for SIMON32, 48, 64, 96 and 128, respectively. Moreover, for the two SP-network lightweight block ciphers PRESENT and RECTANGLE, we found 9-round integral distinguishers for both ciphers, which are two rounds longer than the best integral distinguishers in the literature. For LBlock and TWINE, our results are consistent with the best known ones with respect to the longest distinguishers.
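As a rough illustration of the modeling step for the three basic operations, the sketch below builds the commonly used linear-inequality models for COPY, AND and XOR division trails with the PuLP MILP front end; the variable names and solver choice are illustrative assumptions, and the paper's full models (initial division property, S-boxes, objective function) go further.

```python
# Hedged sketch: MILP constraints for bit-based division-property propagation
# through the three basic operations. Each variable is one bit of a division
# trail. This is not the paper's full model, only the per-operation building
# blocks it describes.
from pulp import LpMinimize, LpProblem, LpVariable

prob = LpProblem("division_property_toy", LpMinimize)

# COPY: a ---> (b1, b2); valid trails satisfy a = b1 + b2.
a, b1, b2 = (LpVariable(n, cat="Binary") for n in ("a", "b1", "b2"))
prob += a - b1 - b2 == 0

# AND: (x1, x2) ---> y; valid trails satisfy y >= x1, y >= x2, y <= x1 + x2.
x1, x2, y = (LpVariable(n, cat="Binary") for n in ("x1", "x2", "y"))
prob += y - x1 >= 0
prob += y - x2 >= 0
prob += y - x1 - x2 <= 0

# XOR: (z1, z2) ---> w; valid trails satisfy w = z1 + z2.
z1, z2, w = (LpVariable(n, cat="Binary") for n in ("z1", "z2", "w"))
prob += z1 + z2 - w == 0
```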

07 September 2016

Ronald L. Rivest, Jacob C. N. Schuldt
ePrint Report
This paper reconsiders the design of the stream cipher RC4, and proposes an improved variant, which we call ``Spritz'' (since the output comes in fine drops rather than big blocks).

Our work leverages the considerable cryptanalytic work done on the original RC4 and its proposed variants. It also uses simulations extensively to search for biases and to guide the selection of intermediate expressions.

We estimate that Spritz can produce output with about 24 cycles/byte of computation. Furthermore, our statistical tests suggest that about $2^{81}$ bytes of output are needed before one can reasonably distinguish Spritz output from random output; this is a marked improvement over RC4. [Footnote: However, see Appendix F for references to more recent work suggesting that our estimates of the work required to break Spritz may be optimistic.] In addition, we formulate Spritz as a ``sponge (or sponge-like) function'' (see Bertoni et al.), which can ``Absorb'' new data at any time, and from which one can ``Squeeze'' pseudorandom output sequences of arbitrary length. Spritz can thus be easily adapted for use as a cryptographic hash function, an encryption algorithm, or a message-authentication code generator. (However, in hash-function mode, Spritz is rather slow.)
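The following toy sketch is not Spritz (and is not secure); it only illustrates, under purely hypothetical names, the Absorb/Squeeze usage pattern described above and how a single sponge-like primitive can serve as a hash, a stream cipher, or a MAC generator.

```python
# Toy sketch (NOT Spritz, not secure): a sponge-like object with Absorb and
# Squeeze, used below as a hash, a stream cipher, and a MAC generator.
class ToySponge:
    def __init__(self):
        self.state = bytearray(32)

    def _mix(self):
        # Placeholder state update standing in for a real permutation.
        for i in range(len(self.state)):
            self.state[i] = (self.state[i] * 167 + self.state[i - 1] + i) & 0xFF

    def absorb(self, data: bytes):
        for byte in data:
            self.state[0] ^= byte
            self._mix()

    def squeeze(self, n: int) -> bytes:
        out = bytearray()
        for _ in range(n):
            self._mix()
            out.append(self.state[-1])
        return bytes(out)

# Hash: absorb the message, squeeze a digest of any length.
h = ToySponge(); h.absorb(b"message"); digest = h.squeeze(32)

# Stream cipher: absorb key and nonce, squeeze a keystream, XOR with plaintext.
s = ToySponge(); s.absorb(b"key"); s.absorb(b"nonce")
plaintext = b"attack at dawn"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, s.squeeze(len(plaintext))))

# MAC: absorb key then message, squeeze a tag.
m = ToySponge(); m.absorb(b"key"); m.absorb(b"message"); tag = m.squeeze(16)
```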
Douglas R. Stinson, Ruizhong Wei
ePrint Report
In this paper, we consider methods whereby a subset of players in a $(k,n)$-threshold scheme can ``repair'' another player's share in the event that their share has been lost or corrupted. This will take place without the participation of the dealer who set up the scheme. The repairing protocol should not compromise the (unconditional) security of the threshold scheme, and it should be efficient, where efficiency is measured in terms of the amount of information exchanged during the repairing process. We study two approaches to repairing. The first method is based on the ``enrollment protocol'' from (Nojoumian-Stinson-Grainger, 2010), which was originally developed to add a new player to a threshold scheme (without the participation of the dealer) after the scheme was set up. The second method distributes ``multiple shares'' to each player, as defined by a suitable combinatorial design. This method results in larger shares, but lower communication complexity, as compared to the first method.
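For intuition, here is a minimal Python sketch of the enrollment-style repair idea over a prime field, in which k helpers jointly rebuild a lost Shamir share f(r) without any helper revealing its own share. The field size, the masking strategy and all function names are illustrative assumptions, not the protocols analyzed in the paper.

```python
# Hedged sketch of enrollment-style share repair for a Shamir (k, n) scheme.
# random is used for brevity; a real implementation would use a CSPRNG.
import random

P = 2**127 - 1  # an illustrative prime modulus

def lagrange_at(xs, j, x):
    """Lagrange coefficient for point xs[j] when interpolating at x."""
    num, den = 1, 1
    for m, xm in enumerate(xs):
        if m != j:
            num = num * (x - xm) % P
            den = den * (xs[j] - xm) % P
    return num * pow(den, -1, P) % P

def repair_share(helpers, r):
    """helpers: list of (x_i, s_i) pairs held by k cooperating players;
    returns the rebuilt share f(r) of the repaired player."""
    xs = [x for x, _ in helpers]
    k = len(helpers)
    # Step 1: helper j splits lambda_j(r) * s_j into k random additive pieces,
    # sending piece t to helper t, so no single party sees the whole term.
    pieces = []
    for j, (_, s) in enumerate(helpers):
        delta = lagrange_at(xs, j, r) * s % P
        parts = [random.randrange(P) for _ in range(k - 1)]
        parts.append((delta - sum(parts)) % P)
        pieces.append(parts)
    # Step 2: helper t sums the pieces it received and sends only that sum.
    sums = [sum(pieces[j][t] for j in range(k)) % P for t in range(k)]
    # Step 3: the repaired player adds the k sums to recover f(r).
    return sum(sums) % P
```

Summing the k masked contributions gives the interpolated value f(r), while each individual share stays hidden behind the random additive pieces.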
Matthias Hiller, Michael Pehl, Gerhard Kramer, Georg Sigl
ePrint Report
Physical Unclonable Functions (PUFs) provide cryptographic keys for embedded systems without secure non-volatile key storage. Several error correction schemes for key generation with PUFs have been introduced, analyzed and implemented over the last years. This work abstracts from the typical algorithmic level and provides an algebraic view to reveal fundamental similarities and differences in the security of these error correction schemes. An algebraic core is introduced for key generation with PUFs. It computes the secret key and the helper data from the input PUF response and an optional random number. For nearly uniformly distributed PUF responses, the leakage of the secret key through the helper data can be brought to zero if and only if the rank of the algebraic core is equal to the sum of the rank of the key-generating part and the rank of the helper-data-generating part. This rank criterion has the practical advantage that a security check can be performed for linear codes at an early design stage of an algorithm. The criterion is applied to state-of-the-art approaches to show that fuzzy commitment and systematic low leakage coding are the only analyzed schemes that achieve zero leakage.
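As a concrete point of reference for the zero-leakage claim, the classical fuzzy commitment scheme (one of the two schemes singled out above) can be written as follows; this is a textbook description, not the paper's general algebraic core.

```latex
% Textbook fuzzy commitment (an illustration, not the paper's algebraic core):
% Enc/Dec form a linear error-correcting code, r is the PUF response,
% r' = r XOR e is a noisy re-measurement, k the key, w the public helper data.
\begin{align*}
  \text{Enrollment:}   \quad & c = \mathrm{Enc}(k), \qquad w = r \oplus c\\
  \text{Reproduction:} \quad & w \oplus r' = c \oplus e,
     \ \text{which is decoded back to } c \text{ and hence to } k.
\end{align*}
% For a uniformly distributed response r, the helper data w is independent
% of c and therefore leaks nothing about k, consistent with the zero-leakage
% classification above.
```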
Tetsu Iwata, Kazuhiko Minematsu
ePrint Report
At CCS 2015, Gueron and Lindell proposed GCM-SIV, a provably secure authenticated encryption scheme that remains secure even if the nonce is repeated. While this is an advantage over the original GCM, we first point out that GCM-SIV allows a trivial distinguishing attack with about $2^{48}$ queries, where each query has one plaintext block. This shows the tightness of the security claim and does not contradict the provable security result. However, the original GCM resists the attack, and this poses the question of designing a variant of GCM-SIV that is secure against the attack. We present a minor variant of GCM-SIV, which we call GCM-SIV1, and argue that GCM-SIV1 resists the attack and that it offers a security trade-off compared to GCM-SIV. As the main contribution of the paper, we explore a scheme with a stronger security bound. We present GCM-SIV2, which is obtained by running two instances of GCM-SIV1 in parallel and mixing them in a simple way. We show that it is secure up to $2^{85.3}$ query complexity, where the query complexity is measured in terms of the total number of blocks of the queries. Finally, we generalize this to GCM-SIV$r$, obtained by running $r$ instances of GCM-SIV1 in parallel for $r\ge 3$, and show that the scheme is secure up to $2^{128r/(r+1)}$ query complexity. The provable security results are obtained under the standard assumption that the blockcipher is a pseudorandom permutation.
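For orientation, plugging small values of r into the stated bound gives the following thresholds; the r = 2 value is the GCM-SIV2 bound quoted above.

```latex
% Instances of the stated bound $2^{128r/(r+1)}$, with query complexity
% measured in blocks.
\[
  r=2:\ 2^{256/3} \approx 2^{85.3}, \qquad
  r=3:\ 2^{96}, \qquad
  r=4:\ 2^{102.4}, \qquad
  2^{128r/(r+1)} \to 2^{128} \text{ as } r \to \infty.
\]
```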
Arnold Neumaier, Damien Stehle
ePrint Report
We describe an asymptotically fast variant of the LLL lattice reduction algorithm. It takes as input a basis $\mathbf B \in \mathbb Z^{n \times n}$ and returns a (reduced) basis $\mathbf C$ of the Euclidean lattice $L$ spanned by $\mathbf B$, whose first vector satisfies $||\mathbf c_1|| \leq (1+c)(4/3)^{(n-1)/4} \cdot (\det L)^{1/n}$ for any fixed $c>0$. It terminates within $O(n^{4+\epsilon} \beta^{1+\epsilon})$ bit operations for any $\epsilon >0$, with $\beta = \log \max_i ||\mathbf b_i||$. It does rely on fast integer arithmetic but does not make use of fast matrix multiplication.
Ivica Nikolic, Yu Sasaki
ePrint Report
A collision search for a pair of $n$-bit unbalanced functions (one is $R$ times more expensive than the other) is an instance of the meet-in-the-middle problem, solved with the familiar standard algorithm that follows the tradeoff $TM=N$, where $T$ and $M$ are the time and memory complexities and $N=2^n$. By combining two ideas, unbalanced interleaving and van Oorschot-Wiener parallel collision search, we construct an alternative algorithm that follows the tradeoff $T^2 M = R^2 N$, where $M\le R$. Among other things, the algorithm solves the well-known open problem of how to reduce the memory of unbalanced collision search.
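To illustrate the gap between the two tradeoff curves, here is one concrete instantiation; the parameter values are chosen for illustration and are not taken from the paper.

```latex
% Example with n = 128 (so N = 2^{128}), cost ratio R = 2^{16},
% and memory M = 2^{16} \le R:
\[
  \text{standard MITM: } T = N/M = 2^{112},
  \qquad
  \text{this tradeoff: } T = \sqrt{R^2 N/M} = \sqrt{2^{32+128-16}} = 2^{72}.
\]
```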
Conor Patrick, Bilgiday Yuce, Nahid Farhady Ghalaty, Patrick Schaumont
ePrint Report
Fault attack countermeasures can be implemented by storing or computing sensitive data in redundant form, such that the faulty data can be detected and restored. We present a class of lightweight, portable software countermeasures for block ciphers. Our technique is based on redundant bit-slicing, and it is able to detect faults in the execution of a single instruction. In comparison to earlier techniques, we are able to intercept data faults as well as instruction sequence faults using a uniform technique. Our countermeasure thwarts precise bit-fault injections through pseudo-random shifts in the allocation of data bit-slices. We demonstrate our solution on a full AES design and confirm the claimed security protection through a detailed fault simulation for a 32-bit embedded processor. We also quantify the overhead of the proposed fault countermeasure, and find a minimal increase in footprint (14%) and a moderate performance overhead of between 125% and 317%, depending on the desired level of fault-attack resistance.
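The following Python sketch illustrates the redundant bit-slicing idea in its simplest form (duplicate slices compared after each operation); it shows the general principle only, not the authors' countermeasure, which additionally randomizes the slice allocation and also covers instruction-sequence faults.

```python
# Illustrative sketch: each logical bit-slice is stored twice; after every
# bitsliced operation the two copies are compared, so a fault injected into
# a single instruction on either copy is detected.
MASK = (1 << 32) - 1  # 32-bit slices, matching a 32-bit embedded target

def redundant_slice(value):
    """Return a (primary, shadow) pair holding the same 32-bit slice."""
    return (value & MASK, value & MASK)

def redundant_and(a, b):
    r0 = (a[0] & b[0]) & MASK
    r1 = (a[1] & b[1]) & MASK
    if r0 != r1:
        raise RuntimeError("fault detected: redundant slices disagree")
    return (r0, r1)

def redundant_xor(a, b):
    r0 = (a[0] ^ b[0]) & MASK
    r1 = (a[1] ^ b[1]) & MASK
    if r0 != r1:
        raise RuntimeError("fault detected: redundant slices disagree")
    return (r0, r1)

# Example: two slices combined fault-free.
x = redundant_slice(0xDEADBEEF)
y = redundant_slice(0x0F0F0F0F)
z = redundant_xor(redundant_and(x, y), y)
```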
Kartik Nayak, Ling Ren, Ittai Abraham, Benny Pinkas
ePrint Report
Oblivious RAM (also known as ORAM) is a cryptographic primitive that allows a trusted client to outsource storage to an untrusted server while hiding the client's memory access patterns from the server. Since the initial work by Goldreich and Ostrovsky three decades ago, there has been a lot of effort to improve ORAMs. In the standard ORAM model, where the server supports only ``read'' and ``write'' operations, the bandwidth blowup between the client and the server has been improved from $O(\log^3 N)$ to $O(\log N)$. On the other hand, under the assumption that the server can perform computation, Oblivious RAM constructions with $O(1)$ bandwidth blowup exist. However, in practice, the improvement in bandwidth comes at a huge cost in server computation, and this server computation (homomorphic encryption) becomes the new bottleneck. In this work, we propose an Oblivious RAM construction that does not use expensive server computation and yet achieves sub-logarithmic bandwidth blowup.
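For a rough sense of scale (ignoring the constants hidden in the asymptotics), consider a logical memory of N = 2^30 blocks.

```latex
% Back-of-the-envelope illustration only, constants omitted:
\[
  O(\log^3 N): \ \sim 30^3 = 27{,}000 \text{ blocks moved per logical access},
  \qquad
  O(\log N): \ \sim 30,
  \qquad
  O(1): \ \text{a small constant,}
\]
% the last at the price of heavy server-side (homomorphic) computation.
```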
Linfeng Zhou
ePrint Report
Functional Encryption (\(\mathsf{FE}\)) generalizes the notion of a traditional encryption system by providing fine-grained access control. In a functional encryption scheme, the owner of the secret key can generate restricted functional keys that allow users to learn specific functions of the encrypted messages and nothing else.

In this work, we show a generic transformation from weakly selectively secure functional encryption to selectively secure functional encryption, and this transformation preserves the compactness of the \(\mathsf{FE}\) scheme. This result is obtained by reusing techniques from (Ananth et al., CRYPTO 2015) through a modified approach. Furthermore, combining recent results, we remark that this result gives an alternative approach to the recent work of Garg and Srinivasan (ePrint 2016/524): namely, a single-key weakly selectively secure functional encryption scheme whose ciphertext size is sublinear in the size of the function for which the functional key is issued implies all other notions of functional encryption generically.
Jianwei Li
ePrint Report
Let $(\mathbf{b}_1, \ldots, \mathbf{b}_{n})$ be a lattice basis with Gram-Schmidt orthogonalization $(\mathbf{b}_1^{\ast}, \ldots, \mathbf{b}_{n}^{\ast})$. The quantities $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\|$ for $i = 1, \ldots, n$ play important roles in analyzing lattice reduction algorithms and lattice enumeration algorithms. In this paper, we study the problem of minimizing the quantity $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ over all bases $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of a given $n$-dimensional lattice. We first prove that there exists a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ for any lattice $L$ of dimension $n$ such that $\|\mathbf{b}_1\| = \min_{\mathbf{v} \in L\backslash\{\mathbf{0}\}} \|\mathbf{v}\|$, $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i$ and $\|\mathbf{b}_{i}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i^{1.5}$ for $1 \leq i \leq n$. This leads us to introduce a new NP-hard computational problem, the smallest ratio problem (SRP): given an $n$-dimensional lattice $L$, find a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of $L$ such that $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ is minimal. The problem inspires a new lattice invariant $\mu_{n}(L) = \min\{\|\mathbf{b}_1\|/\|\mathbf{b}_n^{\ast}\|: (\mathbf{b}_1, \ldots, \mathbf{b}_n) \textrm{ is a basis of } L\}$ and a new lattice constant $\mu_{n} = \max \mu_{n}(L)$ over all $n$-dimensional lattices $L$; both the minimum and the maximum are justified. The properties of $\mu_{n}(L)$ and $\mu_{n}$ are discussed. We also present an exact algorithm and an approximation algorithm for SRP.

To the best of our knowledge, this is the first sound study of SRP. Our work provides a new perspective on both the quality limits of lattice reduction algorithms and complexity estimates of enumeration algorithms.
Onur Demir, Wenjie Xiong, Faisal Zaghloul, Jakub Szefer
ePrint Report
A variety of computing systems have been proposed to provide protection for sensitive code or data through hardware or software mechanisms. This paper surveys the landscape of security verification approaches and techniques for hardware/software systems at different levels: from the software-application level all the way down to the physical hardware level. Different existing projects are compared, based on the tools used and the security aspects being examined. Since many systems require both hardware and software components to work together to provide the system's promised security protections, it is no longer sufficient to verify the software levels or the hardware levels in a mutually exclusive fashion. This survey highlights common sets of system levels that are verified by the different existing projects and presents to the readers the state of the art in hardware security verification. Few approaches come close to providing full-system verification, and there is still much room for improvement. Readers will gain insights into existing approaches to formal modeling and security verification of hardware/software systems, as well as directions for future research.

06 September 2016

Felix Heuer, Bertram Poettering
ePrint Report
The confidentiality notion of security against selective opening attacks considers adversaries that obtain challenge ciphertexts and are allowed to adaptively open them, thereby revealing the encrypted message and the randomness used to encrypt. The SO notion is stronger than that of CCA security and is often required when formally arguing towards the security of multi-user applications. While different ways of achieving correspondingly secure schemes are known, they generally employ expensive asymmetric building blocks like lossy trapdoor functions or lossy encryption, and such constructions are routinely left aside by practitioners and standardization bodies. So far, formal arguments towards the SO security of schemes used in practice (e.g., for email encryption) are not known. In this work we shift the focus from the asymmetric to the symmetric building blocks of PKE and prove the following statement: If a PKE scheme is composed of a key encapsulation mechanism (KEM) and a blockcipher-based data encapsulation mechanism (DEM), and the DEM meets specific combinatorial properties, then the PKE scheme offers SO security in the ideal cipher model. Fortunately, as we show, the required properties hold for popular modes of operation like CTR, CBC, CCM, and GCM. This paper not only establishes the corresponding theoretical framework of analysis, but also contributes very concretely to practical cryptography by concluding that selective opening security is given for many real-world schemes.
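For readers less familiar with the KEM/DEM shape of PKE analyzed here, the sketch below shows the generic hybrid composition with AES-GCM as a blockcipher-based DEM; the KEM is left as an assumed callable, and this is only the general pattern, not the paper's proof or any specific standardized scheme.

```python
# Minimal sketch of the KEM/DEM composition (generic hybrid encryption).
# kem_encapsulate is a placeholder for any KEM that returns a 16/24/32-byte
# symmetric key together with its encapsulation under the recipient's key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def kem_dem_encrypt(kem_encapsulate, recipient_public_key, plaintext):
    # KEM: derive a fresh symmetric key and its encapsulation.
    dem_key, encapsulation = kem_encapsulate(recipient_public_key)
    # DEM: encrypt the payload under the encapsulated key with AES-GCM.
    nonce = os.urandom(12)
    ciphertext = AESGCM(dem_key).encrypt(nonce, plaintext, None)
    return encapsulation, nonce, ciphertext
```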
Kamalesh Acharya, Ratna Dutta
ePrint Report
Broadcast encryption with dealership (BED) has been proposed to achieve more innovative and scalable business models for broadcast services, and it has extensive potential applications. However, designing a secure BED scheme is a challenging task, and the only known BED construction so far is by Gritti et al. We aim to raise the profile of the BED primitive, which has not received much attention despite its importance. This paper presents a selectively chosen-plaintext-attack (CPA) secure BED scheme that supports maximum accountability and privacy (hiding the group of users from the broadcaster). Our scheme is a key encapsulation mechanism and is more efficient in practice: it reduces the parameter sizes and computation cost compared to Gritti et al. More interestingly, the broadcaster does not need to rely on users to detect a dishonest dealer. We provide a concrete security analysis of our design under reasonable assumptions.
Shuichi Katsumata, Shota Yamada
ePrint Report
In this paper, we present new adaptively secure identity-based encryption (IBE) schemes. One of the distinguishing properties of our schemes is that they achieve shorter public parameters than previous schemes. Both of our schemes follow the general framework presented in the recent IBE scheme of Yamada (Eurocrypt 2016), combined with novel techniques tailored to the underlying algebraic structure in order to overcome the difficulties arising in our specific setting. Specifically, we obtain the following:

- Our first scheme is proven secure under the ring learning with errors (RLWE) assumption and achieves the best asymptotic space efficiency among existing schemes from the same assumption. The main technical contribution is in our new security proof that exploits the ring structure in a crucial way. Our technique allows us to greatly weaken the underlying hardness assumption (e.g., we assume the hardness of RLWE with a fixed polynomial approximation factor whereas Yamada's scheme requires a super-polynomial approximation factor) while improving the overall efficiency.

- Our second IBE scheme is constructed on bilinear maps and is secure under the $3$-computational bilinear Diffie-Hellman exponent assumption. This is the first IBE scheme with sub-linear public parameter size that is based on the hardness of a computational/search problem on bilinear maps, rather than a decisional problem such as DDH or DLIN.