International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

22 July 2018

University of Luxembourg / Centre for Security and Trust
Job Posting
Post-doc in Information Assurance, APSIA Research Group, University of Luxembourg and Centre for Security and Trust

The Applied Security and Information Assurance (APSIA) group is seeking to recruit a highly motivated post-doc with a strong research profile to complement and strengthen the group's existing expertise. Applications from candidates with expertise in the core areas of the group are welcome, but consideration will also be given to candidates whose expertise would extend the group's reach, for example: post-quantum security, FinTech, and Distributed Ledger Technologies.

The APSIA team, led by Prof. Peter Y. A. Ryan, is part of the SnT and is a dynamic and growing research group, some 20 strong, performing cutting-edge research in information assurance, cryptography, and privacy. The group specializes in the mathematical modelling of security mechanisms and systems, especially crypto protocols (classical and quantum), and socio-technical systems. The group is particularly strong in verifiable voting systems.

For further information you may check: www.securityandtrust.lu and https://wwwen.uni.lu/snt/research/apsia.

Research Associates (Postdocs) in Information Assurance (M/F)

Ref: 50013420 (R-STR-5004-00-B)

Fixed-term contract of 2 years (CDD), full-time position (40 hrs/week).

Number of positions: 1

Start date: summer/autumn 2018, upon agreement.

Your Role

The successful candidate will contribute to the research goals of the APSIA group. The APSIA Group specializes in the design and analysis of secure systems:

Cryptographic Protocols (classical and quantum)

Cryptographic Algorithms and Primitives

Verifiable Voting Schemes

Socio-Technical Analysis of Security

Privacy Enhancing Technologies

Closing date for applications: 17 August 2018

Contact: P Y A Ryan

Peter.Ryan (at) uni.lu.

More information: http://emea3.mrted.ly/1wfwn

Ruhr University Bochum
Job Posting
The Chair for Security Engineering at Ruhr-Universität Bochum is searching for highly motivated and qualified PhD candidates in system security engineering and applied cryptography. Specific topics include:

• Implementation of security architectures in hardware and software

• Technologies and countermeasures against microarchitectural attacks

• Security-oriented software compilation

• Tools and frameworks for secure hardware implementations

• Applied and Post-Quantum Cryptography

If you would describe yourself as highly motivated, knowledgeable in security, and willing to perform creative and deep research, please consider this job opening. You have a degree in IT security, computer science, electronics, or applied mathematics. Prior experience in low-level programming, code analysis, cryptography, and/or machine learning is an asset. Publications at relevant conferences such as USENIX Security, CCS, S&P, CHES, CRYPTO, and EUROCRYPT are expected.

Please provide a resume, transcripts, a motivational statement and contact information of at least two references.

Closing date for applications: 10 August 2018

Contact: Tim Güneysu tim.gueneysu (at) rub.de

More information: https://www.stellenwerk-bochum.de/


21 July 2018

1 October 2018
Event Calendar
Event date: 1 October 2018
Submission deadline: 1 October 2018
Notification: 15 December 2018

19 July 2018

Junichi Tomida, Katsuyuki Takashima
ePrint Report
Inner product functional encryption (IPFE), introduced by Abdalla et al. (PKC 2015), is a kind of functional encryption supporting only the inner product functionality. All previous IPFE schemes are bounded schemes, meaning that the vector length that can be handled in the scheme is fixed in the setup phase. In this paper, we propose the first unbounded IPFE schemes, in which we do not have to fix the lengths of vectors in the setup phase and can handle (a priori) unbounded polynomial lengths of vectors. Our first scheme is private-key based and fully function hiding; that is, secret keys hide the information of the associated function. Our second scheme is public-key based and provides adaptive security under the indistinguishability-based security definition. Both our schemes are based on SXDH, a well-studied standard assumption, and are secure in the standard model. Furthermore, our schemes are quite efficient, incurring only a small constant-factor efficiency loss compared with previous bounded function-hiding schemes.
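The unbounded functionality described above can be sketched as a toy model: the vector length is chosen per encryption (not at setup), and decryption with a key for y reveals only the inner product of x and y. This models only what an honest decryptor learns; it is not the scheme itself, and the "ciphertext" and "key" objects below are illustrative stand-ins.

```python
# Toy model of the *functionality* of unbounded IPFE -- not the scheme.
# A real ciphertext hides x; sk_y lets a decryptor learn only <x, y>.

def encrypt(x):
    # vector length chosen at encryption time, not fixed at setup
    return {"len": len(x), "payload": list(x)}   # stand-in for a ciphertext

def keygen(y):
    return {"len": len(y), "payload": list(y)}   # stand-in for a secret key

def decrypt(ct, sk):
    assert ct["len"] == sk["len"], "vector lengths must match"
    return sum(a * b for a, b in zip(ct["payload"], sk["payload"]))

ct = encrypt([1, 2, 3, 4])
sk = keygen([4, 3, 2, 1])
print(decrypt(ct, sk))   # 1*4 + 2*3 + 3*2 + 4*1 = 20
```

In a bounded scheme, `encrypt` and `keygen` would reject any vector longer than the length fixed at setup; removing that restriction is exactly the paper's contribution.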
Maliheh Shirvanian, Stanislaw Jarecki, Hugo Krawczyk, Nitesh Saxena
ePrint Report
Password managers (aka stores or vaults) allow a user to store and retrieve (usually high-entropy) passwords for her multiple password-protected services by interacting with a "device" serving the role of the manager (e.g., a smartphone or an online third-party service) on the basis of a single memorable (low-entropy) master password. Existing password managers work well to defeat offline dictionary attacks upon web service compromise, assuming the use of high-entropy passwords is enforced. However, they are vulnerable to leakage of all passwords in the event the device is compromised, due to the need to store the passwords encrypted under the master password and/or the need to input the master password to the device (as in smartphone managers). Evidence exists that password managers can be attractive attack targets.

In this paper, we introduce a novel approach to password management, called SPHINX, which remains secure even when the password manager itself has been compromised. In SPHINX, the information stored on the device is information-theoretically independent of the user's master password --- an attacker breaking into the device learns no information about the master password or the user's site-specific passwords. Moreover, an attacker with full control of the device, even at the time the user interacts with it, learns nothing about the master password --- the password is not entered into the device in plaintext form or in any other way that may leak information on it. Unlike existing managers, SPHINX produces strictly high-entropy passwords and makes it compulsory for users to register these randomized passwords with the web services, hence fully defeating offline dictionary attacks upon service compromise. The design and security of SPHINX are based on the device-enhanced PAKE model of Jarecki et al., which provides the theoretical basis for this construction and is backed by rigorous cryptographic proofs of security.

While SPHINX is suitable for different device and online platforms, in this paper, we report on its concrete instantiation on smartphones given their popularity and trustworthiness as password managers (or even two-factor authentication). We present the design, implementation and performance evaluation of SPHINX, offering prototype browser plugins, smartphone apps and transparent device-client communication. Based on our inspection analysis, the overall user experience of SPHINX improves upon current managers. We also report on a lab-based usability study of SPHINX, which indicates that users' perception of SPHINX security and usability is high and satisfactory when compared to regular password-based authentication. Finally, we discuss how SPHINX may be extended to an online service for the purpose of back-up or as an independent password manager.
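The core mechanism that keeps the device oblivious is an oblivious PRF evaluated by blinded exponentiation. The toy sketch below (requires Python 3.8+ for the modular inverse) uses an insecure, tiny multiplicative group and an ad-hoc hash-to-group map purely as stand-ins for a real elliptic-curve instantiation; the group parameters, function names, and the final key-derivation step are assumptions for illustration, not SPHINX's concrete design.

```python
# Toy sketch of a blinded-exponentiation OPRF of the kind SPHINX builds on:
# the device holds key k and operates only on a blinded group element, so it
# never sees the master password or the derived site password.
import hashlib, secrets

p = 1019                          # toy safe prime: p = 2q + 1 (insecure size)
q = (p - 1) // 2                  # prime order of the subgroup (509)
g = 4                             # generator of the order-q subgroup

def h2g(pwd: bytes) -> int:       # hash the password into the subgroup
    e = int.from_bytes(hashlib.sha256(pwd).digest(), "big") % (q - 1)
    return pow(g, e + 1, p)       # exponent in [1, q-1], never the identity

def device_eval(k: int, blinded: int) -> int:
    return pow(blinded, k, p)     # all the device ever computes

def site_password(pwd: bytes, k: int) -> bytes:
    r = secrets.randbelow(q - 1) + 1            # fresh blinding factor
    blinded = pow(h2g(pwd), r, p)               # sent to the device
    resp = device_eval(k, blinded)              # = h2g(pwd)^(r*k)
    unblinded = pow(resp, pow(r, -1, q), p)     # unblind: h2g(pwd)^k
    return hashlib.sha256(pwd + unblinded.to_bytes(2, "big")).digest()

k = 123                                         # device's long-term key
t1 = site_password(b"correct horse", k)
t2 = site_password(b"correct horse", k)         # new blinding, same result
assert t1 == t2
```

Because the blinding factor r is fresh per query, the value the device sees is uniformly random in the subgroup regardless of the password, which is the intuition behind the information-theoretic independence claimed above.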
Kimmo Järvinen, Ágnes Kiss, Thomas Schneider, Oleksandr Tkachenko, Zheng Yang
ePrint Report
In the last decade, location information became easily obtainable using off-the-shelf mobile devices. This gave momentum to developing Location Based Services (LBSs), such as location proximity detection, which can be used to find friends or taxis nearby. LBSs can, however, be easily misused to track users, which draws attention to the need to protect the privacy of these users.

In this work, we address this issue by designing, implementing, and evaluating multiple algorithms for Privacy-Preserving Location Proximity (PPLP) that are based on different secure computation protocols. Our PPLP protocols are well-suited for different scenarios: for saving bandwidth, energy/computational power, or for faster runtimes. Furthermore, our algorithms have runtimes of a few milliseconds to hundreds of milliseconds and bandwidth of hundreds of bytes to one megabyte. In addition, the computationally most expensive parts of the PPLP computation can be precomputed in our protocols, such that the input-dependent online phase runs in just a few milliseconds.
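The predicate at the heart of proximity detection is simple; the difficulty is evaluating it without revealing either party's coordinates. The sketch below shows the plain (non-private) predicate as a reference point -- the paper's protocols compute the same boolean under secure computation. The squared-distance formulation is a common choice in this setting, assumed here for illustration.

```python
# The proximity predicate, in the clear: are two points within delta of each
# other? Secure-computation protocols evaluate this on secret-shared or
# encrypted coordinates, so neither party learns the other's location.

def near(ax: int, ay: int, bx: int, by: int, delta: int) -> bool:
    # comparing squared distances avoids a square root, which would be
    # expensive to evaluate inside a secure-computation circuit
    return (ax - bx) ** 2 + (ay - by) ** 2 <= delta ** 2

print(near(100, 100, 103, 104, 5))   # True:  distance is exactly 5
print(near(100, 100, 110, 110, 5))   # False: distance is ~14.1
```

Only the single output bit of this predicate should leak to the parties; everything else (coordinates, the exact distance) stays hidden in the protocols.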
Bernhard Jungk, Richard Petri, Marc Stöttinger
ePrint Report
The current state of the art of Boolean masking for the modular addition operation in software has a very high performance overhead. Firstly, the instruction count is very high compared to a normal addition operation. Secondly, until recently, the entropy consumed by such protections was also quite high. Our paper significantly improves both aspects, by applying the Threshold Implementation (TI) methodology with two shares and by reusing internal values as a randomness source in such a way that uniformity is always preserved. Our approach performs considerably faster than the previously known masked addition and subtraction algorithms by Coron et al. and Biryukov et al., improving the state of the art by 36% if we only consider the number of ARM assembly instructions. Furthermore, similar to the masked adder of Biryukov et al., we reduce the amount of randomness and only require one additional bit of entropy per addition, which is a good trade-off for the improved performance. We applied our improved masked adder to ChaCha20, for which we provide two new first-order protected implementations and achieve a 36% improvement over the best published result for ChaCha20 on an ARM Cortex-M4 microprocessor.
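The following sketch illustrates why masked modular addition is the expensive operation in the first place: XOR commutes with Boolean shares, but the carry chain of integer addition does not. This shows the problem the paper's TI adder solves, not the adder itself; the fixed masks are an assumption for reproducibility, where a real implementation draws them at random.

```python
# First-order Boolean masking: a secret x is held as two shares with
# x = s0 ^ s1. Linear (XOR) operations work share-wise; addition does not.

MASK = 0xFFFFFFFF                     # 32-bit words, as in ChaCha20

def share(x, m):
    return m, x ^ m                   # (s0, s1) with x = s0 ^ s1

x, y = 0x12345678, 0x9ABCDEF0
x0, x1 = share(x, 0x0F0F0F0F)         # fixed masks for a deterministic demo
y0, y1 = share(y, 0x33333333)

# XOR can be computed on the shares without ever recombining the secrets:
z0, z1 = x0 ^ y0, x1 ^ y1
assert z0 ^ z1 == x ^ y

# Adding share-wise does NOT yield shares of x + y: carries mix the shares.
w0 = (x0 + y0) & MASK
w1 = (x1 + y1) & MASK
print(hex(w0 ^ w1))                   # not the masked sum
print(hex((x + y) & MASK))            # what a masked adder must produce
```

Bridging this gap securely -- computing Boolean shares of x + y from Boolean shares of x and y without leaking intermediates -- is exactly what masked-adder algorithms like the one in this paper provide.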
Diana Maimut, George Teseleanu
ePrint Report
eSTREAM brought to the attention of the cryptographic community a number of stream ciphers including Grain v0 and its revised version Grain v1. The latter was selected as a finalist of the competition's hardware-based portfolio. The Grain family includes two more instantiations, namely Grain 128 and Grain 128a.

The scope of our paper is to provide insight into how to obtain secure configurations of the Grain family of stream ciphers. We propose different variants for Grain and analyze their security with respect to slide attacks. More precisely, as various attacks against the initialization algorithms of Grain have been discussed in the literature, we study the security impact of various parameters which may influence the LFSR's initialization scheme.
Howard Wu, Wenting Zheng, Alessandro Chiesa, Raluca Ada Popa, Ion Stoica
ePrint Report
Recently there has been much academic and industrial interest in practical implementations of *zero knowledge proofs*. These techniques allow a party to *prove* to another party that a given statement is true without revealing any additional information. In a Bitcoin-like system, this allows a payer to prove validity of a payment without disclosing the payment's details.

Unfortunately, the existing systems for generating such proofs are very expensive, especially in terms of memory overhead. Worse yet, these systems are "monolithic", so they are limited by the memory resources of a single machine. This severely limits their practical applicability.

We describe DIZK, a system that *distributes* the generation of a zero knowledge proof across machines in a compute cluster. Using a set of new techniques, we show that DIZK scales to computations of up to billions of logical gates (100x larger than prior art) at a cost of 10$\mu$s per gate (100x faster than prior art). We then use DIZK to study various security applications.

18 July 2018

Shiva Prasad Kasiviswanathan, Adam Smith
ePrint Report
In this note we give a precise formulation of "resistance to arbitrary side information" and show that several relaxations of differential privacy imply it. The formulation follows the ideas originally due to Dwork and McSherry, stated implicitly in [Dwork06]. This is, to our knowledge, the first place such a formulation appears explicitly. The proof that relaxed definitions satisfy the Bayesian formulation is new.
Zilong Wang, Honggang Hu
ePrint Report
Lattice-based cryptographic primitives are believed to be resistant to attacks by quantum computers. In this work, we present a KEA-style authenticated key exchange protocol based on the ring learning with errors problem whose security is proven in the BR model with weak perfect forward secrecy. With properties of KEA such as implicit key authentication and simplicity, our protocol also enjoys many properties of lattice-based cryptography, namely asymptotic efficiency, conceptual simplicity, worst-case hardness assumptions, and resistance to attacks by quantum computers. Our lattice-based authenticated key exchange protocol is more efficient than the protocol of Zhang et al. (EUROCRYPT 2015), with a more concise structure, smaller key size, and lower bandwidth. Our protocol also enjoys optimal online efficiency, and we further improve it with pre-computation.
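The "noisy Diffie-Hellman" structure underlying (R)LWE key exchange can be sketched with scalars: both parties end up with values that differ only by a small error term, which a reconciliation step (omitted here) then removes. The parameters below are meaningless for security and the scalar setting is an assumed simplification of the ring setting used in the paper.

```python
# Toy scalar LWE-style key agreement. Each party publishes a noisy product
# of the public value a with its secret; the two derived values agree up to
# a small error, since they differ only by e2*s1 - e1*s2.
import random

q = 2**13
random.seed(1)

def small():                      # small secret/error, here in [-2, 2]
    return random.randint(-2, 2)

a = random.randrange(q)           # public system parameter
s1, e1 = small(), small()         # Alice's secret and error
s2, e2 = small(), small()         # Bob's secret and error

b = (a * s1 + e1) % q             # Alice -> Bob
u = (a * s2 + e2) % q             # Bob -> Alice

k_alice = (u * s1) % q            # ~ a * s1 * s2
k_bob   = (b * s2) % q            # ~ a * s1 * s2
diff = (k_alice - k_bob) % q
diff = min(diff, q - diff)        # centered difference
print(diff)                       # small: bounded by |e2*s1| + |e1*s2| <= 8
```

Because the gap is provably at most 8 while q is 8192, the two parties can round to a common key; designing that rounding (reconciliation or encapsulation) is where protocols such as this paper's and Zhang et al.'s differ.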
Ralph Ankele, Stefan Kölbl
ePrint Report
Resistance against differential cryptanalysis is an important design criterion for any modern block cipher, and most designs rely on finding some upper bound on the probability of single differential characteristics. However, already at EUROCRYPT '91, Lai et al. observed that differential cryptanalysis in fact exploits differentials rather than single characteristics.

In this paper, we consider exactly the gap between these two approaches and investigate this gap in the context of recent lightweight cryptographic primitives. This shows that for many recent designs like Midori, Skinny or Sparx one has to be careful as bounds from counting the number of active S-boxes only give an inaccurate evaluation of the best differential distinguishers. For several designs we found new differential distinguishers and show how this gap evolves. We found an 8-round differential distinguisher for Skinny-64 with a probability of $2^{-56.93}$, while the best single characteristic only suggests a probability of $2^{-72}$. Our approach is integrated into publicly available tools and can easily be used when developing new cryptographic primitives.

Moreover, as differential cryptanalysis is critically dependent on the distribution over the keys for the probability of differentials, we provide experiments for some of these new differentials found, in order to confirm that our estimates for the probability are correct. While for Skinny-64 the distribution over the keys follows a Poisson distribution, as one would expect, we noticed that Speck-64 follows a bimodal distribution, and the distribution of Midori-64 suggests a large class of weak keys.
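The gap between a characteristic and a differential comes from simple probability accounting: many characteristics share the same input/output difference, and their probabilities add. The weights below are made-up round numbers for illustration, not the actual characteristic set found for Skinny-64.

```python
# A differential collects every characteristic with the same input/output
# difference, so its probability is the sum of theirs -- and its weight
# (-log2 probability) can be noticeably below the best single characteristic.
import math

char_weights = [72, 72, 73, 73, 74, 75]   # hypothetical -log2 probabilities
p = sum(2.0 ** -w for w in char_weights)

print(f"best single characteristic: 2^-{min(char_weights)}")
print(f"resulting differential:     2^{math.log2(p):.2f}")
```

With only six clustered characteristics the weight already drops by almost two bits; the paper's point is that for designs like Skinny this clustering effect is large enough (2^-72 vs. 2^-56.93) that active-S-box counts alone misjudge the best distinguisher.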
Zahra Eskandari, Andreas Brasen Kidmose, Stefan Kölbl, Tyge Tiessen
ePrint Report
The division property method is a technique to determine integral distinguishers on block ciphers. Although finding these distinguishers is computationally demanding, it has recently been shown that MILP and SAT solvers can find them efficiently. In this paper, we provide a framework to automatically find such distinguishers which solely requires a description of the cryptographic primitive. We demonstrate this by finding integral distinguishers for 30 primitives with different design strategies.

We provide several new or improved bit-based division property distinguishers for ChaCha, Chaskey, DES, GIFT, LBlock, Mantis, Qarma, RoadRunner, Salsa and SM4. Furthermore, we present an algorithm to find distinguishers with lower data complexity more efficiently.
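The kind of property these distinguishers detect can be shown in miniature: summing ciphertexts over a structured set of plaintexts yields zero in some bits. The degenerate example below sums over the full codebook, where any permutation is balanced; real distinguishers use much smaller structures against reduced-round ciphers, and the random permutation here is only a stand-in for a cipher under a fixed key.

```python
# An integral (balanced) property in miniature: XOR-summing the outputs of
# any permutation over its whole domain gives zero, because the outputs are
# just the inputs in another order and every bit position cancels.
import random

random.seed(0)
n = 8
perm = list(range(2**n))
random.shuffle(perm)             # stand-in for a block cipher, fixed key

acc = 0
for x in range(2**n):            # the "structure" of chosen plaintexts
    acc ^= perm[x]               # XOR-sum of the corresponding ciphertexts
print(acc)                       # 0 -- the balanced property
```

Division-property tools automate finding the smallest such structure (and which output bits stay balanced) for as many rounds as possible, directly from the cipher's description.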

17 July 2018

Joppe W. Bos, Simon Friedberger, Marco Martinoli, Elisabeth Oswald, Martijn Stam
ePrint Report
Lattice-based schemes are among the most promising post-quantum schemes, yet the effect of both parameter and implementation choices on their side-channel resilience is still poorly understood. Aysu et al. (HOST'18) recently investigated single-trace attacks against the core lattice operation, namely multiplication between a public matrix and a "small" secret vector, in the context of a hardware implementation. We complement this work by considering single-trace attacks against software implementations of "ring-less" LWE-based constructions. Specifically, we target Frodo, one of the submissions to the standardisation process of NIST, when implemented on an (emulated) ARM Cortex M0 processor. We confirm Aysu et al.'s observation that a standard divide-and-conquer attack is insufficient and instead we resort to a sequential, extend-and-prune approach. In contrast to Aysu et al. we find that, in our setting where the power model is far from being as clear as theirs, both profiling and less aggressive pruning are needed to obtain reasonable key recovery rates for SNRs of practical relevance. Our work drives home the message that parameter selection for LWE schemes is a double-edged sword: the schemes that are deemed most secure against (black-box) lattice attacks can provide the least security when considering side-channels. Finally, we suggest some easy countermeasures that thwart standard extend-and-prune attacks.
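The extend-and-prune strategy mentioned above can be sketched generically: partial key guesses are extended one coefficient at a time, scored, and all but the best few are discarded -- in contrast to divide-and-conquer, which ranks each coefficient independently. The scores below are synthetic stand-ins for side-channel template likelihoods; the secret, digit alphabet, and beam width are all illustrative assumptions, not the paper's profiled attack on Frodo.

```python
# Extend-and-prune as a beam search over partial key candidates.
import heapq, random

random.seed(2)
secret = [3, 1, 0, 2, 1]                 # toy key, one "digit" per position
digits = range(4)

def score(pos, guess):
    # synthetic log-likelihood: clearly higher when the guess is correct
    return (2.0 if guess == secret[pos] else 0.0) + random.uniform(0, 1)

beam = [(0.0, [])]                       # (cumulative score, partial key)
B = 3                                    # beam width = pruning aggressiveness
for pos in range(len(secret)):
    ext = [(s + score(pos, d), k + [d])  # extend every survivor by one digit
           for s, k in beam for d in digits]
    beam = heapq.nlargest(B, ext, key=lambda t: t[0])   # prune

best = max(beam, key=lambda t: t[0])[1]
print(best)
```

In the noisy setting the paper studies, the correct digit is not always the top-scoring one at its position, which is why sequential scoring with a wider beam (less aggressive pruning) recovers keys that divide-and-conquer misses.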
James Howe, Tobias Oder, Markus Krausz, Tim Güneysu
ePrint Report
Lattice-based cryptography is one of the most promising candidates being considered to replace current public-key systems in the era of quantum computing. In 2016, Bos et al. proposed the key exchange scheme FrodoCCS, which has also been submitted to the NIST post-quantum standardization process, modified into a key encapsulation mechanism (FrodoKEM). The security of the scheme is based on standard lattices and the learning with errors problem. Due to the large parameters, standard lattice-based schemes have long been considered impractical on embedded devices. The FrodoKEM proposal, however, comes with parameters that bring standard lattice-based cryptography within reach of being feasible on constrained devices. In this work, we take the final step of efficiently implementing the scheme on low-cost FPGA and microcontroller devices, thus making conservative post-quantum cryptography practical on small devices. Our FPGA implementation of the decapsulation (the computationally most expensive operation) needs 7,220 look-up tables (LUTs), 3,549 flip-flops (FFs), a single DSP, and only 16 block RAM modules. The maximum clock frequency is 162 MHz, and the decapsulation executes in 20.7 ms. Our microcontroller implementation has a 66% reduced peak stack usage in comparison to the reference implementation and needs 266 ms for key pair generation, 284 ms for encapsulation, and 286 ms for decapsulation. Our results contribute to the practical evaluation of a post-quantum standardization candidate.
Sven Heiberg, Ivo Kubjas, Janno Siim, Jan Willemson
ePrint Report
This paper takes a critical look at the recent trend of building electronic voting systems on top of block chain technology. Even though they are very appealing from the election integrity perspective, block chains have numerous technical, economical, and even political drawbacks that need to be taken into account. Selecting a good trade-off between desirable properties and restrictions imposed by different block chain implementations is a highly non-trivial task. This paper aims at bringing some clarity into performing this task. We will mostly concentrate on public permissionless block chains and their applications as bulletin board implementations, as these are the favourite choices in the majority of recent block chain based voting protocol proposals.
Ethan Cecchetti, Ian Miers, Ari Juels
ePrint Report
We present the first provably secure, practical approach to proving file replication (or other erasure coding) in distributed storage networks (DSNs). Storing multiple copies of a file $F$ is essential in DSNs to ensure against file loss in the event of faulty servers or corrupt data. The public nature of DSNs, however, makes this goal challenging: no secret keys or trusted entities are available. Files must therefore be encoded and decoded using public coins---i.e., without encryption or other secret-key operations---and retention of files by servers in the network must be publicly verifiable.

We introduce and formalize the notion of a public incompressible encoding (PIE), a tool that allows for file-replication proofs in this public setting. A PIE enables public verification that a server is (nearly) entirely storing a replicated encoding $G$ of a target file $F$, and has not deduplicated or otherwise compressed $G$ to save storage. In a DSN with monetary rewards or penalties, a PIE helps ensure that an economically rational server is incentivized to store $G$ and thus replicate $F$ honestly.

We present a specific PIE based on a novel graph construction, called a Dagwood Sandwich Graph (DSaG), that includes long paths even when an adversary selectively discards edges. This PIE ensures that a cheating server must perform a large (and completely tunable) number of costly sequential cryptographic operations to recover any blocks of $G$ it chooses to discard. By periodically challenging the server to return randomly selected blocks of $G$ and timing the responses, the DSN can thus verify that a server is storing $G$ intact.

We prove the security of our PIE construction and present performance evaluations demonstrating that it is efficient in practice---empirically within a factor of 6.2 of optimal by one metric. Our proposed PIE offers a valuable basic tool for building DSNs, such as the proposed Filecoin system, as well as for other challenging file-storage needs in public settings. PIEs also meet the critical security requirements for such applications: they preclude demonstrated attacks involving parallelism and acceleration via ASICs and other custom hardware.
Oksana Kulyk, Melanie Volkamer
ePrint Report
A well-known issue in electronic voting is the risk of manipulation of the cast vote. To counter this risk, a number of methods have been proposed that enable voters to verify that their cast vote actually represents their intention, so-called cast-as-intended verification. Yet, empirical studies on voters' behaviour show that often only a small fraction of voters attempt the verification or succeed in performing it. Poor usability of the verification procedure has often been named as the main reason for this failure to verify. Research into human factors in other security domains, however, reveals further reasons aside from poor usability that hinder the proper adoption of security practices among end users. In this paper we discuss these factors with respect to their applicability to cast-as-intended verification. Our results indicate that many of these factors are potentially relevant in the electronic voting context, too. Correspondingly, we conclude that additional measures aside from ensuring the usability of the cast-as-intended verification mechanisms are required in order to make sure that voters successfully verify the integrity of their votes, and we propose corresponding mechanisms.

16 July 2018

Angshuman Karmakar, Jose Maria Bermudo Mera, Sujoy Sinha Roy, Ingrid Verbauwhede
ePrint Report
The CCA-secure lattice-based post-quantum key encapsulation scheme Saber is a candidate in NIST's post-quantum cryptography standardization process. In this paper, we study the implementation aspects of Saber on resource-constrained microcontrollers from the ARM Cortex-M series, which are very popular for realizing IoT applications. We carefully optimize various parts of Saber for speed and memory. We exploit digital signal processing instructions and efficient memory access for a fast implementation of polynomial multiplication. We also use a memory-efficient Karatsuba multiplication and a just-in-time strategy for generating the public matrix of the module lattice to reduce the memory footprint. We further show that our optimizations can be combined with each other seamlessly to provide various speed-memory trade-offs. Our speed-optimized software takes just 1,147K, 1,444K, and 1,543K clock cycles on a Cortex-M4 platform for key generation, encapsulation, and decapsulation respectively. Our memory-efficient software takes 4,786K, 6,328K, and 7,509K clock cycles on an ultra resource-constrained Cortex-M0 platform for key generation, encapsulation, and decapsulation respectively, while consuming at most 6.2 KB of memory. These results show that lattice-based key encapsulation schemes are perfectly practical for securing IoT devices against quantum computing attacks.
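The Karatsuba technique mentioned above trades one multiplication for a few additions, which is what makes polynomial multiplication affordable on these microcontrollers. Below is a minimal recursive sketch over plain integer coefficients for power-of-two lengths; it omits Saber's modular reduction and the paper's memory-efficient scheduling, which are the actual contributions.

```python
# Minimal recursive Karatsuba polynomial multiplication: split a and b in
# half, and compute the middle term from (a0+a1)(b0+b1) - a0*b0 - a1*b1,
# using 3 half-size products instead of 4. Requires len(a) a power of two.

def karatsuba(a, b):
    n = len(a)
    if n == 1:
        return [a[0] * b[0]]
    h = n // 2
    a0, a1 = a[:h], a[h:]
    b0, b1 = b[:h], b[h:]
    z0 = karatsuba(a0, b0)                      # low product
    z2 = karatsuba(a1, b1)                      # high product
    z1 = karatsuba([x + y for x, y in zip(a0, a1)],
                   [x + y for x, y in zip(b0, b1)])
    mid = [m - x - y for m, x, y in zip(z1, z0, z2)]
    res = [0] * (2 * n - 1)
    for i, c in enumerate(z0):  res[i] += c
    for i, c in enumerate(mid): res[i + h] += c
    for i, c in enumerate(z2):  res[i + 2 * h] += c
    return res

print(karatsuba([1, 2], [3, 4]))   # (1+2x)(3+4x) = 3 + 10x + 8x^2
```

Saber additionally reduces coefficients modulo a power-of-two q and the result modulo X^256 + 1; the memory question the paper addresses is how to schedule the intermediate buffers of this recursion on devices with only a few KB of RAM.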
Jung Hee Cheon, Jinhyuck Jeong, Dongwoo Kim, Jongchan Lee
ePrint Report
After the concept of a Fuzzy Extractor (FE) was first introduced by Dodis et al., it has been regarded as one of the candidate solutions for key management utilizing biometric data. From a noisy input such as biometrics, an FE generates a public helper value and a random secret key which is reproducible given another input similar to the original. However, helper values may leak information when generated repeatedly from correlated inputs, so reusability should be considered an important property. Recently, Canetti et al. (Eurocrypt 2016) proposed an FE satisfying both reusability and robustness with inputs from low-entropy distributions. Their strategy, the so-called sample-then-lock method, is to sample many partial strings from a noisy input string and to lock one secret key with each partial string independently. In this paper, modifying this reusable FE, we propose a new FE with size-reduced helper data using a threshold scheme. Our new FE also satisfies both reusability and robustness, and requires much less storage than the original. To show the advantages of this scheme, we analyze and compare our scheme with the original for concrete parameters of the biometric IrisCode. As a result, on 1024-bit inputs, with false rejection rate 0.5 and error tolerance 0.25, while the original requires about 1 TB for each helper value, our scheme requires only 300 MB, plus an additional 1.35 GB of common data which can be used for all helper values.
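The sample-then-lock idea being modified here can be sketched in miniature: one key is locked under many random substrings of the enrollment reading, and any substring that the noisy re-reading reproduces exactly reopens its locker. The hash-based locker, the check values, and all sizes below are toy simplifications, not the construction of Canetti et al. or of this paper.

```python
# Toy sample-then-lock fuzzy extractor: if at least one sampled substring
# avoids every error position, its locker opens and yields the key.
import hashlib, random

random.seed(3)
N, SAMPLE, LOCKERS = 64, 8, 32        # toy parameters

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def gen(w: str):
    key = bytes(random.randrange(256) for _ in range(16))
    helpers = []
    for _ in range(LOCKERS):
        idx = sorted(random.sample(range(N), SAMPLE))
        sub = "".join(w[i] for i in idx).encode()
        pad = H(b"pad" + sub)[:16]                    # hash-derived pad
        lock = bytes(a ^ b for a, b in zip(key, pad))
        helpers.append((idx, lock, H(b"chk" + sub)))  # check value
    return key, helpers

def rep(w2: str, helpers):
    for idx, lock, chk in helpers:
        sub = "".join(w2[i] for i in idx).encode()
        if H(b"chk" + sub) == chk:                    # sample hit no errors
            pad = H(b"pad" + sub)[:16]
            return bytes(a ^ b for a, b in zip(lock, pad))
    return None                                       # every sample hit an error

w = "".join(random.choice("01") for _ in range(N))    # enrollment reading
key, helpers = gen(w)
noisy = list(w)
noisy[5] = "1" if w[5] == "0" else "0"                # one flipped bit
print(rep("".join(noisy), helpers) == key)
```

Each locker here costs a full (index list, ciphertext, check) triple, and many lockers are needed; replacing this per-locker storage with a threshold scheme is the size reduction the paper targets.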