International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

29 May 2016

Benoît Cogliati, Yannick Seurin
ePrint Report
We propose a nonce-based MAC construction called EWCDM (Encrypted Wegman-Carter with Davies-Meyer), based on an almost xor-universal hash function and a block cipher, with the following properties: (i) it is simple and efficient, requiring only two calls to the block cipher, one of which can be carried out in parallel to the hash function computation; (ii) it is provably secure beyond the birthday bound when nonces are not reused; (iii) it provably retains security up to the birthday bound in case of nonce misuse. Our construction is a simple modification of the Encrypted Wegman-Carter construction, which is known to achieve only (i) and (iii) when based on a block cipher. Underlying our new construction is a new PRP-to-PRF conversion method coined Encrypted Davies-Meyer, which turns a pair of secret random permutations into a function which is provably indistinguishable from a perfectly random function up to at least $2^{2n/3}$ queries, where $n$ is the bit-length of the domain of the permutations.
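The Encrypted Davies-Meyer conversion at the heart of this construction is simple enough to illustrate directly. The sketch below is a toy model only: the two secret permutations are shuffled lookup tables over an 8-bit domain, and all names (`p1`, `p2`, `edm`) are illustrative, not from the paper.

```python
# Toy sketch of Encrypted Davies-Meyer (EDM): EDM(x) = P2(P1(x) XOR x),
# where P1, P2 are independent secret random permutations. Here the
# permutations are shuffled tables over a tiny 8-bit domain.
import random

N_BITS = 8
DOMAIN = 1 << N_BITS

rng = random.Random(42)
p1 = list(range(DOMAIN)); rng.shuffle(p1)
p2 = list(range(DOMAIN)); rng.shuffle(p2)

def edm(x: int) -> int:
    # The feed-forward XOR destroys the permutation structure, which is
    # what lets EDM behave like a random *function* up to ~2^{2n/3} queries.
    return p2[p1[x] ^ x]

# Unlike a permutation, EDM can produce colliding outputs:
outputs = [edm(x) for x in range(DOMAIN)]
print(len(set(outputs)) < DOMAIN)  # True: collisions occur, as for a random function
```

A bare permutation applied to the full domain would yield all distinct outputs; the collision observed here is the qualitative difference the beyond-birthday-bound analysis exploits.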
Sanjam Garg, Akshayaram Srinivasan
ePrint Report
Functional Encryption (FE) generalizes the notion of a traditional encryption system by providing fine-grained access to data. In an FE scheme, the holder of a secret key $\mathsf{SK}_f$ (associated with a function $f$) and a ciphertext $c$ (encrypting plaintext $x$) can learn $f(x)$ but nothing else.

The indistinguishability ($\mathsf{IND}$) based security notion of FE can be parameterized based on whether the adversary obtains a bounded or unbounded number of challenge ciphertexts, whether she is allowed to query a bounded or unbounded number of functional secret keys, and whether she is forced to commit to the challenge messages prior to seeing the public parameters (selective/adaptive). The selective security requirement can be weakened further (called the weakly selective setting), where the adversary is restricted to making all functional secret key queries before seeing the public parameters. These notions can be formalized as $\{xx,yy,zzz\}$-$\mathsf{IND}$-$\mathsf{FE}$, where $xx$ denotes the number of challenge ciphertexts, $yy$ denotes the number of functional secret keys, and $zzz$ denotes weakly selective ($\mathsf{sel}^*$), selective ($\mathsf{sel}$), or adaptive ($\mathsf{adp}$) security.

In this work, we show that {\em polynomially hard} $(1,1,\mathsf{sel}^*)$-$\mathsf{IND}$-$\mathsf{FE}$ having {\em weakly compact ciphertexts} implies all other notions {\em generically}. Prior results required sub-exponentially hard $(1,1,\mathsf{sel}^*)$-$\mathsf{IND}$-$\mathsf{FE}$ with weakly compact ciphertexts or polynomially hard $(1,\mathsf{unb},\mathsf{sel})$-$\mathsf{IND}$-$\mathsf{FE}$ to imply all other notions generically.
Jiang Zhang, Yu Chen, Zhenfeng Zhang
ePrint Report
Driven by the open problem raised by Hofheinz and Kiltz (Journal of Cryptology, 2012), we study the formalization of lattice-based programmable hash functions (PHF), and give two types of constructions by using several techniques such as a novel combination of cover-free sets and lattice trapdoors. Under the Inhomogeneous Small Integer Solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is collision-resistant, which gives a direct application of this new primitive. We further demonstrate the power of lattice-based PHF by giving generic constructions of signature and identity-based encryption (IBE) in the standard model, which not only provide a way to unify several previous lattice-based schemes using the partitioning proof techniques, but also allow us to obtain a new short signature scheme and a new fully secure IBE scheme with keys consisting of a logarithmic number of matrices/vectors in the security parameter $\kappa$. In addition, we give a refined way of combining two concrete PHFs to construct an improved short signature scheme with short verification keys from weaker assumptions. In particular, our methods depart from the confined guessing technique of B\"ohl et al. (Eurocrypt'13) that was used to construct previous standard model short signature schemes with short verification keys by Ducas and Micciancio (Crypto'14) and by Alperin-Sheriff (PKC'15), and allow us to achieve existential unforgeability against chosen message attacks (EUF-CMA) without resorting to chameleon hash functions.
Daisuke Fujimoto, Shivam Bhasin, Makoto Nagata, Jean-Luc Danger
ePrint Report
Testing of electronic components is indispensable to minimize malfunction and failure of complex electronic systems. Currently, functionality and performance of these electronic components are the main parameters tested. However, validation of performance is not enough when the applications are safety or security critical. Therefore the security and trust of devices must also be tested before validation for such applications. In this paper, we promote the use of On-Chip Power noise Measurements (OCM), in order to test security using side-channel techniques. We then propose for the first time a standard side-channel measurement setup using OCM. Finally, we provide some key ideas on methodology to integrate the validation of hardware security and trust in the standard testing flow, exploiting OCM.
Frédéric Lafitte, Liran Lerman, Olivier Markowitch, Dirk Van Heule
ePrint Report
The CAESAR competition aims to provide a portfolio of authenticated encryption algorithms. SAT solvers are powerful tools for automatically and efficiently verifying (among other properties) the confidentiality and authenticity claims of cryptographic primitives. In this work, we study the security of the CAESAR candidate ACORN against SAT-based cryptanalysis. We provide the first practical and efficient attacks on the first and the last versions of ACORN. More precisely, we achieve state recovery, key recovery, state collision as well as forgery attacks. All our results demonstrate the usefulness of SAT solvers for cryptanalysing the candidates of the CAESAR competition, thereby accelerating the "test of time".
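For readers unfamiliar with SAT-based cryptanalysis, the general recipe is to encode the cipher's bit-level equations as CNF clauses and let a solver search for consistent state/key assignments. The fragment below is a deliberately tiny illustration (a single XOR constraint, checked by brute force rather than a real SAT solver); all names are hypothetical and nothing here comes from the ACORN attacks themselves.

```python
# Encoding the constraint (a XOR b = 1) in CNF: it becomes the two
# clauses (a OR b) AND (NOT a OR NOT b). Literals are signed integers:
# +v means variable v is true, -v means variable v is false.
from itertools import product

clauses = [[1, 2], [-1, -2]]

def satisfies(assignment, clauses):
    # A CNF formula holds iff every clause has at least one true literal.
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

# Brute-force search stands in for a SAT solver on this toy formula.
solutions = [dict(zip((1, 2), bits))
             for bits in product([False, True], repeat=2)
             if satisfies(dict(zip((1, 2), bits)), clauses)]
print(len(solutions))  # 2: exactly the assignments with a != b
```

A cipher like ACORN yields the same kind of formula, just with hundreds of thousands of variables and clauses, at which point an industrial solver replaces the brute-force loop.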
Franziskus Kiefer, Mark Manulis
ePrint Report
Two-Server Password Authenticated Key Exchange (2PAKE) protocols apply secret sharing techniques to achieve protection against server-compromise attacks. 2PAKE protocols eliminate the need for password hashing and remain secure as long as one of the servers remains honest. This concept has also been explored in connection with two-server password authenticated secret sharing (2PASS) protocols for which game-based and universally composable versions have been proposed. In contrast, universally composable PAKE protocols exist currently only in the single-server scenario and all proposed 2PAKE protocols use game-based security definitions.

In this paper we propose the first construction of a universally composable 2PAKE protocol, along with its ideal functionality. The protocol is proven UC-secure in the standard model, assuming a common reference string, an assumption common to many UC-secure PAKE and PASS protocols. The proposed protocol remains secure for arbitrary password distributions. As one of the building blocks, we define and construct a new cryptographic primitive, called Trapdoor Distributed Smooth Projective Hash Function (TD-SPHF), which could be of independent interest.
Benny Applebaum, Pavel Raykov
ePrint Report
\emph{Statistical zero-knowledge proofs} (Goldwasser, Micali and Rackoff, SICOMP 1989) allow a computationally-unbounded server to convince a computationally-limited client that an input $x$ is in a language $\Pi$ without revealing any additional information about $x$ that the client cannot compute by herself. \emph{Randomized encoding} (RE) of functions (Ishai and Kushilevitz, FOCS 2000) allows a computationally-limited client to publish a single (randomized) message, $\mathsf{Enc}(x)$, from which the server learns whether $x$ is in $\Pi$ and nothing else. It is known that $SRE$, the class of problems that admit statistically private randomized encodings with a polynomial-time client and a computationally-unbounded server, is contained in the class $SZK$ of problems that have statistical zero-knowledge proofs. However, the exact relation between these two classes, and, in particular, the possibility of equivalence, was left as an open problem. In this paper, we explore the relationship between $SRE$ and $SZK$, and derive the following results:

* In a non-uniform setting, statistical randomized encoding with one-sided privacy ($1RE$) is equivalent to non-interactive statistical zero-knowledge ($NISZK$). These variants were studied in the past as natural relaxations/strengthenings of the original notions. Our theorem shows that proving $SRE=SZK$ is equivalent to showing that $1RE=RE$ and $SZK=NISZK$. The latter is a well-known open problem (Goldreich, Sahai, Vadhan, CRYPTO 1999).

* If $SRE$ is non-trivial (not in $BPP$), then infinitely-often one-way functions exist. The analog hypothesis for $SZK$ yields only \emph{auxiliary-input} one-way functions (Ostrovsky, Structure in Complexity Theory, 1991), which is believed to be a significantly weaker implication.

* If there exists an average-case hard language with \emph{perfect randomized encoding}, then collision-resistant hash functions (CRH) exist. Again, a similar assumption for $SZK$ implies only constant-round statistically-hiding commitments, a primitive which seems weaker than CRH.

We believe that our results sharpen the relationship between $SRE$ and $SZK$ and illuminate the core differences between these two classes.
Vladimir Kolesnikov, Hugo Krawczyk, Yehuda Lindell, Alex J. Malozemoff, Tal Rabin
ePrint Report
Attribute-based methods provide authorization to parties based on whether their set of attributes (e.g., age, organization, etc.) fulfills a policy. In attribute-based encryption (ABE), authorized parties can decrypt, and in attribute-based credentials (ABC) systems, authorized parties can authenticate themselves. In this paper, we combine elements of ABE and ABC together with garbled circuits to construct \textsf{attribute-based key exchange} (ABKE). Our focus is on interactive solutions involving a client that holds a certificate (issued by an authority) vouching for that client's attributes and a server that holds a policy computable on such a set of attributes. The goal is for the server to establish a shared key with the client but only if the client's certified attributes satisfy the policy. Our solutions enjoy strong privacy guarantees for both the client and the server, including attribute privacy and unlinkability of client sessions. Our main contribution is a construction of ABKE for \emph{arbitrary circuits} with high (concrete) \emph{efficiency}. Specifically, we support general policies expressible as boolean circuits computed on a set of attributes. Even for policies containing hundreds of thousands of gates the performance cost is dominated by two pairing computations per policy input. Put another way, for a similar cost to prior ABE/ABC solutions, which can only support small formulas efficiently, we can support \emph{vastly} richer policies. We implemented our solution and report on its performance. For policies with 100,000 gates and 200 inputs over a realistic network, the server and client spend 957 ms and 176 ms on computation, respectively. When using offline preprocessing and batch signature verification, this drops to only 243 ms and 97 ms.
David McCann, Carolyn Whitnall, Elisabeth Oswald
ePrint Report
Power (as well as EM, cache and timing) leaks are a great cause for concern for developers who have to deal with cryptographic components as part of their overall software implementation, in particular in the context of embedded devices. Whilst there are some tools to detect timing and cache leaks, progress towards pinpointing power and EM leaks has been hampered by the limited information available about the physical components from which such leaks originate. We suggest a novel modelling technique that is capable of producing instruction-level power (and/or EM) models and use it to develop ELMO, the first leakage simulator for the ARM Cortex M0. We show that our methodology is capable of capturing differential data-dependent effects as neighbouring instructions in a sequence vary. We also explore register effects, and verify our models across several measurement boards to comment on board effects and portability. Finally we evaluate ELMO's performance and utility, and show that it can be used to spot even subtle leaks in implementations.
Ferucio Laurentiu Tiplea, George Teseleanu, Sorin Iftene, Anca-Maria Nica
ePrint Report
We revise Boneh-Gentry-Hamburg's identity-based encryption schemes and show that the use of pseudo-random functions can be dispensed with. We then prove the IND-ID-CPA and ANON-IND-ID-CPA security of these schemes by showing that the advantage of any efficient adversary against them is at most the quadratic residuosity advantage of some efficient adversary against the RSA generator. This greatly improves the existing upper bounds (and is probably the tightest such bound).
Mihai Barbulescu, Adrian Stratulat, Vlad Traista-Popescu, Emil Simion
ePrint Report
It is common knowledge that RSA can fail when used with weak random number generators. In this paper we present two algorithms that we used to find vulnerable public keys, together with a simple procedure for recovering the private key from a broken public key. Our study focused on RSA keys of 512 and 1024 bit length, which are not considered safe and for which computing a GCD is relatively fast. One database that we used in our study consists of 42 million public keys discovered by scanning TCP port 443 for raw X.509 certificates between June 6, 2012 and August 4, 2013. Another database was made by crawling GitHub and retrieving the keys used by users to authenticate themselves when pushing to repositories they contribute to. We show that the percentage of broken 512-bit keys is 3.7%, while the percentage of broken 1024-bit keys is 0.05%. The smaller value is due to the fact that the factorization of larger moduli involves new primes that are unused in the smaller keys.
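The GCD idea underlying such attacks can be sketched compactly: if two moduli share a prime, `gcd` exposes it and both keys are factored. The example below uses a naive pairwise scan on toy moduli; large-scale studies like the one above would use the much faster batch-GCD (product-tree) method, and the function name `find_shared_factors` is illustrative only.

```python
# If two RSA moduli were generated with a weak RNG and share a prime
# factor p, then gcd(n1, n2) = p reveals it, and each modulus factors
# as (p, n/p), breaking both private keys.
from math import gcd

def find_shared_factors(moduli):
    """Naive O(k^2) pairwise GCD scan over a list of moduli."""
    broken = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:  # non-trivial shared factor
                broken[moduli[i]] = (g, moduli[i] // g)
                broken[moduli[j]] = (g, moduli[j] // g)
    return broken

# Toy moduli: n1 and n2 share the prime 101; n3 is unrelated.
n1, n2, n3 = 101 * 103, 101 * 107, 109 * 113
factors = find_shared_factors([n1, n2, n3])
print(factors[n1])  # (101, 103)
```

For millions of keys the quadratic scan is infeasible, which is why the product-tree batch-GCD algorithm is used in practice; the factor-recovery step once a GCD is found is identical.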
Yu Yu, Jiang Zhang
ePrint Report
Dodis, Kalai and Lovett (STOC 2009) initiated the study of the Learning Parity with Noise (LPN) problem with (static) exponentially hard-to-invert auxiliary input. In particular, they showed that under a new assumption (called Learning Subspace with Noise) the above is quasi-polynomially hard in the high (polynomially close to uniform) noise regime.

Inspired by the ``sampling from subspace'' technique of Yu (eprint 2009/467) and Goldwasser et al. (ITCS 2010), we show that standard LPN can work in a mode (reducible to itself) where constant-noise LPN (with its matrix sampled from a random subspace) is robust against sub-exponentially hard-to-invert auxiliary input, with security comparable to that of the underlying LPN. Plugging this into the framework of [DKL09], we obtain the same applications as considered in [DKL09] (i.e., CPA/CCA secure symmetric encryption schemes, average-case obfuscators, reusable and robust extractors) with resilience to a more general class of leakages, improved efficiency, and better security under standard assumptions.

As a main contribution, under constant-noise LPN with certain sub-exponential hardness (i.e., $2^{\omega(n^{1/2})}$ for secret size $n$) we obtain a variant of LPN with security on poly-logarithmic entropy sources, which in turn implies CPA/CCA secure public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. Prior to this, basing PKE and OT on constant-noise LPN had been an open problem since Alekhnovich's work (FOCS 2003).
Michael Tunstall, Gilbert Goodwill
ePrint Report
Test Vector Leakage Assessment (TVLA) has been proposed as a method of determining whether a side-channel attack is feasible for a given implementation of a block cipher, by looking for leakage without conducting an attack. The thresholds used to evaluate leakage are chosen such that passing the tests gives a strong indication that no leakage is present. In this document, we describe how TVLA can be adapted to public key cryptographic algorithms, with a specific focus on RSA, ECDSA and ECDH.
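The statistic at the core of TVLA is Welch's t-test applied pointwise to a "fixed" and a "random" trace set, with |t| > 4.5 conventionally taken as evidence of leakage. The sketch below uses synthetic traces and hypothetical names; it shows the statistic only, not this document's adaptation to public key algorithms.

```python
# Welch's t-test for one sample point of a TVLA evaluation: compare the
# measurements from the fixed-input trace set against the random-input
# trace set. |t| above ~4.5 is the conventional leakage threshold.
import statistics

def welch_t(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Synthetic "traces": the fixed set carries a data-dependent offset of 0.5,
# so this point should be flagged as leaking.
fixed_set  = [1.0 + 0.01 * i for i in range(50)]
random_set = [0.5 + 0.01 * i for i in range(50)]
t = welch_t(fixed_set, random_set)
print(abs(t) > 4.5)  # True: leakage detected at this sample point
```

A real evaluation computes this t-value at every sample point of the traces and flags the implementation if any point crosses the threshold in two independent acquisition campaigns.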
Lucjan Hanzlik, Kamil Kluczniak
ePrint Report
In this short report we analyse the security of three schemes proposed by J. H. Park et al. in "Efficient Identity-Based Encryption and Public-Key Signature from Trapdoor Subgroups". The schemes make use of trapdoor subgroups of $\mathbb{Z}_n^*$ and are secure under new assumptions called $q$-Trapdoor Subgroup Diffie-Hellman (TSDH) and $q$-Trapdoor Subgroup Exponent Inversion (TSEI). We show that given several secret keys in the case of IBE, or several signatures in the case of PKS, one can easily extract the trapdoor and break the security of the proposed schemes.
Ran Canetti, Oxana Poburinnaya, Mariana Raykova
ePrint Report
Non-committing encryption (NCE) implements secure channels under adaptive corruptions in situations when data erasures are not trustworthy. In this paper we are interested in the rate of NCE, i.e. in how many bits the sender and receiver need to send per plaintext bit. In initial constructions (e.g. Canetti, Feige, Goldreich and Naor, STOC 96) the length of both the receiver message, namely the public key, and the sender message, namely the ciphertext, is m * poly(k) for an m-bit message, where k is the security parameter. Subsequent works improve efficiency significantly, achieving rate polylog(k). We construct the first constant-rate NCE. In fact, our scheme has rate 1+o(1), which is comparable to the rate of plain semantically secure encryption. Our scheme operates in the common reference string (CRS) model. Our CRS has size poly(m, k), but it is reusable for an arbitrary polynomial number of m-bit messages. In addition, it is the first NCE protocol with perfect correctness. We assume one way functions and indistinguishability obfuscation for circuits. As an essential step in our construction, we develop a technique for dealing with adversaries that modify the inputs to the protocol adaptively depending on a public key or CRS that contains obfuscated programs, while assuming only standard (polynomial) hardness of the obfuscation mechanism. This technique may well be useful elsewhere.

25 May 2016

University of Surrey, UK
Job Posting
We are looking for an excellent, motivated, self-driven doctoral student to work in the exciting area of Internet of things (IoT) security and privacy focusing on developing new, smarter, and more agile security approaches. The PhD student will join the Secure Systems group (http://www.surrey.ac.uk/cs/research/) for three years, working in the area of secure and privacy-preserving communication and execution of the core IoT end-points including resource-constrained embedded devices, mobile phones (in the context of mobile sensing applications) and vehicles.

The PhD student is expected to hold at least an upper second class honours degree and preferably a Master's degree in CS, EE, Information Security or a similar discipline. Basic knowledge of computer networking and of network and embedded systems security is required, along with some background in cryptography and native-code-based systems.

The position is funded for three years and is intended to start in July 2016 or as soon as possible thereafter.

Closing date for applications: 16 June 2016

Contact: Thanassis Giannetsos

Lecturer in Secure Systems

Department of Computer Science

University of Surrey

GU2 7XH

Email: a.giannetsos (at) surrey.ac.uk

More information: http://www.surrey.ac.uk/projects/phd-student-position-iot-security-and-privacy

IBM Research - Zurich, Switzerland
Job Posting
The cryptography and privacy group of IBM Research - Zurich in Switzerland is looking for highly motivated Post-Docs and PhD students in the area of provably secure cryptography, in particular privacy-enhancing cryptographic protocols and public key cryptography.

Candidates will perform scientific research in the areas mentioned above under the direct guidance of the permanent researchers in our team. Their main task will consist of publishing at respected academic conferences and in journals. They will occasionally implement prototypes of the cryptographic schemes developed.

Post-Doc candidates must have a PhD degree in cryptography. All candidates must be fluent in spoken and written English; knowledge of German is not required.

PhD student candidates must have a Master's degree or equivalent in computer science, mathematics, electrical engineering, or a related discipline. Knowledge of cryptography (in particular public key cryptography) is required; proven experience in the form of theses or published papers is a strong asset.

IBM Research - Zurich offers a stimulating international research environment among experts in computer science, physics, and mathematics. The laboratory is located in Rüschlikon, Switzerland, about 10 km from the city of Zurich on the shore of Lake Zurich. A competitive salary will be offered, adjusted to the high local living standards.

The positions will remain open until suitable candidates have been hired.

Closing date for applications: 1 October 2016

Contact: Interested candidates should send a CV and motivation letter to Jan Camenisch jca(at)zurich.ibm.com

More information: http://www.zurich.ibm.com/security/

Jean-Claude Bajard, Julien Eynard, Anwar Hasan, Vincent Zucca
ePrint Report
Since Gentry's breakthrough work in 2009, homomorphic cryptography has received widespread attention. Implementing a fully homomorphic cryptographic scheme is, however, still highly expensive. Somewhat Homomorphic Encryption (SHE) schemes, on the other hand, allow only a limited number of arithmetical operations in the encrypted domain, but are more practical. Many SHE schemes have been proposed, among which the most competitive ones rely on (Ring-)Learning With Errors (RLWE), where operations occur on high-degree polynomials with large coefficients. This work focuses in particular on the Chinese Remainder Theorem representation (a.k.a. Residue Number Systems) applied to large coefficients. In SHE schemes like that of Fan and Vercauteren (FV), such a representation is hardly compatible with procedures involving coefficient-wise division and rounding, which are required in decryption and homomorphic multiplication. This paper suggests a way to entirely eliminate the need for multi-precision arithmetic, and presents techniques to enable a full RNS implementation of FV-like schemes. For dimensions between $2^{11}$ and $2^{15}$, we report speed-ups from $5\times$ to $20\times$ for decryption, and from $2\times$ to $4\times$ for multiplication.
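The CRT/RNS representation the paper builds on can be shown in miniature: a large coefficient is stored as its residues modulo pairwise-coprime word-size moduli, so ring operations split into independent modular ones. The moduli and names below are toy choices for illustration; the CRT reconstruction step is exactly the multi-precision operation that full-RNS techniques aim to avoid.

```python
# Residue Number System (CRT) representation: an integer x < M is stored
# as [x mod m1, x mod m2, ...] for pairwise-coprime moduli m_i with
# product M. Addition/multiplication become independent per-residue ops.
from math import prod

BASIS = [11, 13, 17, 19]   # toy pairwise-coprime moduli
M = prod(BASIS)            # dynamic range: 46189

def to_rns(x):
    return [x % m for m in BASIS]

def from_rns(residues):
    """CRT reconstruction back to a multi-precision integer."""
    x = 0
    for r, m in zip(residues, BASIS):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(., -1, m): modular inverse
    return x % M

# Multiply two numbers entirely in RNS, then reconstruct.
a, b = 1234, 5678
prod_rns = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), BASIS)]
c = from_rns(prod_rns)
print(c == (a * b) % M)  # True
```

In an SHE setting each residue channel fits a machine word, so polynomial arithmetic parallelizes cleanly; the difficulty the paper addresses is performing division-and-rounding steps without ever leaving this representation.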
Xiaoyang Dong, Xiaoyun Wang
ePrint Report
Since Knudsen and Rijmen proposed known-key attacks at ASIACRYPT 2007, the open-key model has become more and more popular. As the other component of the open-key model, the chosen-key model was applied to the full attacks on AES-256 by Biryukov \emph{et al.} at CRYPTO 2009. In this paper, we explore how the chosen-key model practically affects real-world cryptography and show that the 11-round generic Feistel-SP block cipher is no longer safe in its hashing modes (MMO and MP mode), as collision attacks exist. This work improves Sasaki and Yasuda's collision attacks by 2 rounds using two interesting techniques. First, we for the first time leverage the available degrees of freedom in the key to reduce the complexity of the inbound phase, which extends the previous 5-round inbound differential to a 7-round one. This results in a 12-round chosen-key distinguisher of the Feistel-SP block cipher. Second, inspired by the idea of Wang \emph{et al.}, we construct collisions using two blocks. The \emph{rebound attack} is used in the second compression function. We carefully trade off between the freedom of the first block and the complexity of the \emph{rebound attack}, and extend the chosen-key attack to an 11-round collision attack on the hashing modes (MMO and MP mode).
Dominique Unruh
ePrint Report
We construct collapse-binding commitments in the standard model. Collapse-binding commitments were introduced by Unruh (Eurocrypt 2016) to model the computational-binding property of commitments against quantum adversaries, but only constructions in the random oracle model were known.

Furthermore, we show that collapse-binding commitments imply selected other security definitions for quantum commitments, answering an open question by Unruh (Eurocrypt 2016).