International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

18 April 2017

Irene Giacomelli, Somesh Jha, C. David Page
ePrint Report
Linear regression is an important statistical tool that models the relationship between some explanatory values and an outcome value using a linear function. In many current applications (e.g., predictive modelling in personalized healthcare), these values represent sensitive data owned by several different parties that are unwilling to share them. In this setting, training a linear regression model becomes challenging and requires specific cryptographic solutions. In this work, we propose a new system that can train two different variants of linear regression (i.e., ridge regression and lasso regression) on a dataset obtained by merging a finite number of private datasets. Our system ensures that no extra information about any single private dataset is revealed to the entities performing the learning algorithm. Moreover, our solution is based on efficient cryptographic tools (e.g., Paillier's scheme and a pseudorandom generator).
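As a rough illustration of the kind of building block such a system relies on, here is a minimal textbook Paillier sketch in Python showing how additively homomorphic encryption lets an aggregator combine parties' private values without seeing them. The toy primes and the aggregation flow are illustrative only; they are not the paper's protocol.

```python
# Minimal textbook Paillier (toy primes, NOT secure) illustrating additively
# homomorphic aggregation; the paper's actual protocol is more involved.
import math
import secrets

def keygen(p=1_000_003, q=1_000_033):      # toy primes; real keys need >= 1536-bit primes
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    n2 = n * n
    g = n + 1
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = secrets.randbelow(n - 1) + 1       # fresh randomness per ciphertext
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add(pk, c1, c2):                       # Enc(m1) * Enc(m2) mod n^2 = Enc(m1 + m2)
    n, _ = pk
    return (c1 * c2) % (n * n)

pk, sk = keygen()
shares = [12, 30, 7]                       # private inputs of three parties
agg = encrypt(pk, 0)
for s in shares:
    agg = add(pk, agg, encrypt(pk, s))
assert decrypt(sk, agg) == sum(shares)
```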
Razvan Barbulescu, Sylvain Duquesne
ePrint Report
Recent progress on the Number Field Sieve (NFS) has forced a new estimation of the security of pairings. In this work we study the best attacks against some of the most popular pairings, which allows us to propose new pairing-friendly curves at the 128-bit and 192-bit security levels.
Charlotte Bonte, Carl Bootland, Joppe W. Bos, Wouter Castryck, Ilia Iliashenko, Frederik Vercauteren
ePrint Report
In this paper we present an encoding method for fixed-point numbers tailored for homomorphic function evaluation. The choice of the degree of the polynomial modulus used in all popular somewhat homomorphic encryption schemes is dominated by security considerations, while with current encoding techniques the correctness requirement allows for much smaller values. We introduce a generic encoding method using expansions with respect to a non-integral base, which exploits this large degree to reduce the growth of the coefficients when performing homomorphic operations. In practice this allows one to choose a smaller plaintext coefficient modulus, which results in a significant reduction of the running time. We illustrate our approach by applying this encoding in the setting of homomorphic electricity load forecasting for the smart grid, which results in a speed-up by a factor of 13 compared to previous work, where encoding was done using balanced ternary expansions.
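To give a feel for the idea (a hedged sketch only; the paper's actual expansion rules, base choice, and digit constraints differ in detail), the following greedy routine expands a number with respect to a non-integral base b using digits ±1, yielding the kind of long, sparse, small-coefficient representation that keeps coefficient growth low under homomorphic operations.

```python
# Greedy +/-1 expansion in a non-integral base b (illustrative sketch only).
import math

def encode(x, b=1.3, eps=1e-6, max_digits=64):
    """Return {power: digit} with x ~ sum(digit * b**power), digits in {-1, +1}."""
    digits = {}
    while abs(x) > eps and len(digits) < max_digits:
        e = round(math.log(abs(x), b))   # nearest power of b (in log scale)
        while e in digits:               # keep one digit per power
            e -= 1
        digits[e] = 1 if x > 0 else -1
        x -= digits[e] * b ** e
    return digits

def decode(digits, b=1.3):
    return sum(d * b ** e for e, d in digits.items())

digits = encode(4.05)
print(sorted(digits.items()))            # a handful of +/-1 digits
assert abs(decode(digits) - 4.05) < 1e-5
```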
Christian Forler, Eik List, Stefan Lucks, Jakob Wenzel
ePrint Report
This work pursues the idea of multi-forgery attacks as introduced by Ferguson in 2002. We recoin the term reforgeability for the complexity of obtaining further forgeries once a first forgery has succeeded. First, we introduce a security notion for the integrity (in terms of reforgeability) of authenticated encryption schemes: j-Int-CTXT, which is derived from the notion INT-CTXT. Second, we define an attack scenario called j-IV-Collision Attack (j-IV-CA), wherein an adversary tries to construct j forgeries given a first forgery. The term collision in the name stems from the fact that we assume the first forgery to result from an internal collision within the processing of the associated data and/or the nonce. Next, we analyze the resistance to j-IV-CAs of classical nonce-based AE schemes (CCM, CWC, EAX, GCM) as well as all third-round candidates of the CAESAR competition. The analysis is done in both the nonce-respecting and the nonce-ignoring setting. We find that none of the considered AE schemes provides full built-in resistance to j-IV-CAs. Based on this insight, we briefly discuss two alternative design strategies to resist j-IV-CAs.

17 April 2017

Daan Leermakers, Boris Skoric
ePrint Report
Quantum Key Recycling (QKR) is a quantum-cryptographic primitive that allows one to re-use keys in an unconditionally secure way. By removing the need to repeatedly generate new keys it improves communication efficiency. Škorić and de Vries recently proposed a QKR scheme based on 8-state encoding (four bases). It does not require quantum computers for encryption/decryption but only single-qubit operations. We provide a missing ingredient in the security analysis of this scheme in the case of noisy channels: accurate bounds on the privacy amplification. We determine optimal attacks against the message and against the key, for 8-state encoding as well as 4-state and 6-state conjugate coding. We show that the Shannon entropy analysis for 8-state encoding reduces to the analysis of Quantum Key Distribution, whereas 4-state and 6-state encoding suffer from additional leaks that make them less effective. We also provide results in terms of the min-entropy. Overall, 8-state encoding yields the highest capacity.
Abhishek Jain, Yael Tauman Kalai, Dakshita Khurana, Ron Rothblum
ePrint Report
We devise a novel simulation technique that makes black-box use of the adversary as well as the distinguisher. Using this technique we construct several round-optimal protocols, many of which were previously unknown even using non-black-box simulation techniques:

- Two-round witness indistinguishable (WI) arguments for NP from different assumptions than previously known.

- Two-round arguments and three-round proofs of knowledge for NP that achieve strong WI, witness hiding (WH) and distributional weak zero knowledge (WZK) properties in a setting where the instance is only determined by the prover in the last round of the interaction. The soundness of these protocols is guaranteed against adaptive provers.

- Three-round two-party computation satisfying input-indistinguishable security as well as a weaker notion of simulation security against malicious adversaries.

- Three-round extractable commitments with guaranteed correctness of extraction from polynomial hardness assumptions.

Our three-round protocols can be based on DDH, QR, or $N$-th residuosity, and our two-round protocols require quasi-polynomial hardness of the same assumptions. In particular, prior to this work, two-round WI arguments for NP were only known based on assumptions such as the existence of trapdoor permutations, hardness assumptions on bilinear maps, or the existence of program obfuscation; we give the first construction based on (quasi-polynomial) DDH.

Our simulation technique bypasses known lower bounds on black-box simulation [Goldreich-Krawczyk'96] by using the distinguisher's output in a meaningful way. We believe that this technique is likely to find more applications in the future.
Matteo Maffei (TU Wien); Giulio Malavolta (FAU); Manuel Reinert (CISPA, Saarland University); Dominique Schröder (FAU)
ePrint Report
Oblivious RAM (ORAM) has emerged as an enabling technology to secure cloud-based storage services. The goal of this cryptographic primitive is to conceal not only the data but also the access patterns from the server. While the early constructions focused on a single-client scenario, a few recent works have considered a setting where multiple clients may access the same data, which is crucial to support data-sharing applications. All these works, however, either do not consider malicious clients or significantly constrain the definition of obliviousness and the system's practicality. It is thus an open question whether a natural definition of obliviousness can be enforced in a malicious multi-client setting and, if so, what the communication and computational lower bounds are.

In this work, we formalize the notion of maliciously secure multi-client ORAM, we prove that the server-side computational complexity of any secure realization has to be $\Omega(n)$, and we present a cryptographic instantiation of this primitive based on private information retrieval techniques, which achieves an $O(\sqrt{N})$ communication complexity. We further devise an efficient access control mechanism, built upon a novel and generally applicable realization of plaintext equivalence proofs for ciphertext vectors. Finally, we demonstrate how our lower bound can be bypassed by leveraging a trusted proxy, obtaining logarithmic communication and server-side computational complexity. We implemented our scheme and conducted an experimental evaluation, demonstrating the feasibility of our approach.
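As background on one building block mentioned above, here is a hedged sketch of a plaintext-equivalence proof for a single pair of ElGamal ciphertexts, via a Chaum-Pedersen-style proof made non-interactive with Fiat-Shamir. The group parameters are toy-sized, and the paper's vector variant and exact protocol are not reproduced.

```python
# Toy plaintext-equivalence proof for two ElGamal ciphertexts under one key.
import hashlib
import secrets

p, q, g = 1019, 509, 4           # toy safe-prime group (p = 2q+1); far too small for real use
x = secrets.randbelow(q - 1) + 1 # demo ElGamal secret key
pk = pow(g, x, p)

def enc(m, r):                   # ElGamal encryption of a subgroup element m
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def challenge(*vals):            # Fiat-Shamir challenge
    return int.from_bytes(hashlib.sha256(repr(vals).encode()).digest(), "big") % q

def prove_equal(r1, r2):
    """Prove that ciphertexts built with randomness r1, r2 hide the same plaintext."""
    w = (r1 - r2) % q
    A, B = pow(g, w, p), pow(pk, w, p)
    s = secrets.randbelow(q)
    t1, t2 = pow(g, s, p), pow(pk, s, p)
    e = challenge(A, B, t1, t2)
    return t1, t2, (s + e * w) % q

def verify_equal(c1, c2, proof):
    (a1, b1), (a2, b2) = c1, c2
    A = a1 * pow(a2, -1, p) % p  # = g^(r1-r2)
    B = b1 * pow(b2, -1, p) % p  # = pk^(r1-r2) iff the plaintexts are equal
    t1, t2, z = proof
    e = challenge(A, B, t1, t2)
    return (pow(g, z, p) == t1 * pow(A, e, p) % p and
            pow(pk, z, p) == t2 * pow(B, e, p) % p)

m = pow(g, 7, p)                 # messages live in the order-q subgroup
r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
assert verify_equal(enc(m, r1), enc(m, r2), prove_equal(r1, r2))
```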
Debrup Chakraborty, Sebati Ghosh, Palash Sarkar
ePrint Report
We describe a non-recursive algorithm which can efficiently evaluate Bernstein-Rabin-Winograd polynomials with a variable number of blocks.
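For readers unfamiliar with the object, the sketch below pins down BRW polynomials via their standard recursive definition, over a toy prime field rather than the binary field used in practice; the paper's contribution, a non-recursive evaluation strategy, is deliberately not reproduced here.

```python
# Standard recursive definition of BRW polynomials over a toy prime field.
P = 2**61 - 1                              # Mersenne prime, toy stand-in field

def brw(key, m):
    """Evaluate the BRW polynomial of the block sequence m at key, modulo P."""
    t = len(m)
    if t == 0:
        return 0
    if t == 1:
        return m[0] % P
    if t == 2:
        return (m[0] * key + m[1]) % P
    if t == 3:
        return ((key + m[0]) * (key * key + m[1]) + m[2]) % P
    d = 1 << (t.bit_length() - 1)          # largest power of two <= t
    return (brw(key, m[:d - 1]) * (pow(key, d, P) + m[d - 1])
            + brw(key, m[d:])) % P

# 7 blocks: evaluates as brw(m[0..2]) * (key^4 + m[3]) + brw(m[4..6])
print(brw(123456789, [1, 2, 3, 4, 5, 6, 7]))
```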
Alan Szepieniec, Ward Beullens, Bart Preneel
ePrint Report
It is well known that multivariate quadratic (MQ) digital signature schemes have small signatures but huge public keys. However, in some settings, such as public-key infrastructure (PKI), both sizes are important. This paper explains how to transform any MQ signature scheme into one with a much smaller public key at the cost of a larger signature. The transformation aims to reduce the combined size of the public key and the signature, and it improves this metric significantly. The security of our transformation reduces to that of the underlying MQ signature scheme in the random oracle model. It is possible to decrease signature sizes even further, but then security relies on the conjectured hardness of a new problem, the Approximate MQ Problem (AMQ).
Manuel Barbosa, Dario Catalano, Dario Fiore
ePrint Report
We consider the problem of privacy-preserving processing of outsourced data, where a Cloud server stores data provided by one or multiple data providers and is then asked to compute several functions over it. We propose an efficient methodology that solves this problem with the guarantee that an honest-but-curious Cloud learns no information about the data and that the receiver learns nothing more than the results. Our main contribution is the proposal and efficient instantiation of a new cryptographic primitive called Labeled Homomorphic Encryption (labHE). The fundamental insight underlying this new primitive is that homomorphic computation can be significantly accelerated whenever the program that is being computed over the encrypted data is known to the decrypter and is not secret; previous approaches to homomorphic encryption do not allow for such a trade-off. Our realization and implementation of labHE targets computations that can be described by degree-two multivariate polynomials, which capture an important range of statistical functions. As a specific application, we consider the problem of privacy-preserving Genetic Association Studies (GAS), which require computing risk estimates for given traits from statistically relevant features in the human genome. Our approach allows performing GAS efficiently and non-interactively, without compromising either the privacy of patients or any intellectual property that test laboratories may want to protect.
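The following toy sketch mirrors the flavor of that insight for one degree-two multiplication: a ciphertext for m under label tau is the pair (m - b, AHE.Enc(b)) with b = PRF(K, tau). Here the additively homomorphic layer is replaced by an insecure identity stand-in so that only the data flow is visible; all names and parameters are illustrative, not the paper's.

```python
# Toy labelled-HE data flow; the AHE layer is an INSECURE placeholder.
import hashlib
import hmac

Q = 2**61 - 1                       # toy plaintext modulus

def prf(key, label):                # b = PRF(K, tau), mapped into Z_Q
    return int.from_bytes(hmac.new(key, label, hashlib.sha256).digest(), "big") % Q

ahe_enc = lambda b: b % Q           # stand-in for e.g. Paillier encryption
ahe_dec = lambda c: c % Q

def enc(key, label, m):
    b = prf(key, label)
    return ((m - b) % Q, ahe_enc(b))

def mul(ct1, ct2):
    # m1*m2 = a1*a2 + a1*b2 + a2*b1 + b1*b2.  Everything except b1*b2 is
    # computable from the ciphertexts (plaintext ops plus scalar-times-AHE-
    # ciphertext, which here degenerates to multiplication mod Q); the
    # decrypter re-derives b1*b2 itself from the two labels.
    (a1, c1), (a2, c2) = ct1, ct2
    return (a1 * a2 + a1 * c2 + a2 * c1) % Q

def dec_mul(key, lab1, lab2, c):
    return (ahe_dec(c) + prf(key, lab1) * prf(key, lab2)) % Q

key = b"demo-key"
ct1, ct2 = enc(key, b"age", 20), enc(key, b"weight", 70)
assert dec_mul(key, b"age", b"weight", mul(ct1, ct2)) == 20 * 70
```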
Rolf Haenni, Reto E. Koenig, Philipp Locher, Eric Dubuis
ePrint Report
The goal of this document is to provide a self-contained, comprehensive, and fully detailed specification of a new cryptographic voting protocol for the future system of the State of Geneva. The document should therefore describe every relevant aspect and every necessary technical detail of the computations and communications performed by the participants during the protocol execution. To support the general understanding of the cryptographic protocol, the document should also accommodate the necessary mathematical and cryptographic background information. By providing this information to the maximal possible extent, we see this document as the ultimate companion for the developers in charge of implementing the future Internet voting system of the State of Geneva. It may also serve as a manual for developers trying to implement independent election verification software. The decision to make this document public will even enable implementations by third parties, for example by students trying to develop a clone of the Geneva system for scientific evaluations or to implement protocol extensions that achieve additional security properties. In any case, the target audience of this document consists of system designers, software developers, and cryptographic experts.
Srikanth ch, Veni Madhavan C.E., Kumar Swamy H.V.
ePrint Report
We propose a theory of the following object: a collection of arithmetic progressions whose elements satisfy the property that the $j^{th}$ terms of the $i^{th}$ and $(i+1)^{th}$ progressions are multiplicative inverses of each other modulo the $(j+1)^{th}$ term of the $i^{th}$ progression. Under certain conditions, such a collection is uniquely generated from a pair of co-prime seed integers. In this work, we present one application of this object to the construction of a new family of pseudorandom number generators (PRGs). The amortized cost per bit of keystream generation is shown to be $O(1)$. This result is supported by empirical data.

One interesting aspect of the defined object is its connection to the standard Euclidean algorithm for finding the gcd of two numbers. The connection is useful in our proof of the amortized cost, which is based on results concerning the average behavior of the quotient sequence of the Euclidean algorithm.
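As a quick illustrative experiment (ours, not the paper's), the snippet below measures the quotient sequence of the Euclidean algorithm on random inputs: the step count grows only linearly in the bit length, with roughly 0.6 division steps per bit on average, which is the kind of average-case behaviour an amortized-cost argument of this type rests on.

```python
# Average length of the Euclidean quotient sequence on random inputs.
import random

def quotients(a, b):
    qs = []
    while b:
        qs.append(a // b)
        a, b = b, a % b
    return qs

random.seed(1)
for bits in (64, 128, 256):
    runs = [quotients(random.getrandbits(bits) | 1, random.getrandbits(bits) | 1)
            for _ in range(200)]
    avg = sum(len(q) for q in runs) / len(runs)
    print(f"{bits}-bit inputs: {avg:.1f} division steps on average "
          f"({avg / bits:.2f} per bit)")
```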

In the security analysis, we study the difficulty of the inverse problem of recovering the seed pair from the keystream of the proposed method(s). Based on this study, we deduce a lower bound on the sizes of the secret parameters that provide adequate security. The study of the inverse problem establishes a computational equivalence between a special case of it (defined as Problem A) and the problem of factoring integers.

We present an authenticated encryption scheme as another application of the defined object. The present work leaves some open issues, which we are addressing in ongoing work.

15 April 2017

19 January 2018
Event Calendar
Event date: 19 January 2018
Submission deadline: 1 September 2017
Notification: 24 November 2017

14 April 2017

Taipei, Taiwan, 25 September 2017
Event Calendar
Event date: 25 September 2017
Submission deadline: 9 June 2017
Notification: 6 July 2017
Atsushi Takayasu, Yohei Watanabe
ePrint Report
A revocable identity-based encryption (RIBE) scheme, proposed by Boldyreva et al. (CCS'08), provides a revocation functionality for managing a number of users dynamically and efficiently. To capture a realistic scenario, Seo and Emura (PKC'13) introduced an additional important security notion, called decryption key exposure resistance (DKER), where an adversary is allowed to query short-term decryption keys. Although several RIBE schemes that satisfy DKER have been proposed, all the lattice-based RIBE schemes, e.g., Chen et al.'s scheme (ACISP'12), do not achieve DKER, since they basically do not have the key re-randomization property, which is considered to be an essential requirement for achieving DKER. In particular, in every existing lattice-based RIBE scheme, an adversary can easily recover plaintexts if the adversary is allowed to issue even a single short-term decryption key query. In this paper, we propose a new lattice-based RIBE scheme secure against exposure of an a priori bounded number of decryption keys (for every identity). We believe that this bounded notion is still meaningful and useful from a practical perspective. Technically, to achieve the bounded security without the key re-randomization property, key updates in our scheme are short vectors whose corresponding syndrome vector changes in each time period. For this approach to work correctly and for the scheme to be secure, cover-free families play a crucial role in our construction.
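Since cover-free families may be unfamiliar, here is a toy version of a standard polynomial-evaluation construction (not necessarily the one used in the paper): subsets indexed by low-degree polynomials over GF(q), where no subset is contained in the union of any d others whenever d(k-1) < q.

```python
# Toy cover-free family from polynomial evaluation over GF(q).
from itertools import combinations, product

q, k, d = 5, 2, 3                # toy parameters satisfying d*(k-1) < q

def subset(coeffs):              # S_f = {(a, f(a))} for f(x) = sum(coeffs[i] * x^i)
    return {(a, sum(c * a**i for i, c in enumerate(coeffs)) % q) for a in range(q)}

family = [subset(c) for c in product(range(q), repeat=k)]

# Two distinct degree-<k polynomials agree on at most k-1 points, so any d
# other subsets can cover at most d*(k-1) < q of a subset's q elements.
for i, s in enumerate(family):
    others = family[:i] + family[i + 1:]
    assert all(not s <= set().union(*combo) for combo in combinations(others, d))
print(f"{len(family)} subsets over a {q * q}-point ground set form a {d}-cover-free family")
```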
Jun Xu, Santanu Sarkar, Lei Hu
ePrint Report
In this paper, we investigate the hardness of the approximate polynomial common divisor problem, which is regarded as a polynomial analogue of the approximate integer common divisor problem. To solve this problem, we present a simple method based on the polynomial lattice reduction algorithm, together with a complete theoretical analysis. Further, we propose an improved lattice attack that reduces both space and time costs. Moreover, both attacks can be directly applied to solving the noisy multipolynomial reconstruction problem in the field of error-correcting codes. Among these approaches, our improved lattice attack performs fastest.
Dingfeng Ye, Peng Liu, Jun Xu
ePrint Report
Known approaches for obfuscating a circuit are only feasible in theory: the complexity depends polynomially on the security parameter and circuit measures, but the polynomials are too large and/or the results hold only for sufficiently large security parameters, which leaves the methods unimplementable for almost all applications at a required security level, say 128 bits. In this work, we initiate the task of exploiting ideas from theoretical constructions towards practical obfuscation. The starting concern is: how much do empirical methods help to improve efficiency? We follow the approach of Zimmerman and Applebaum et al.: obfuscating randomized encodings (RE) with a Graded Encoding Scheme (GES) over composites. We give a new design of RE based on a new pseudorandom function and a new garbled circuit from a pseudorandom generator, whose obfuscation only needs a GES of degree linear in $n$, the number of input variables. We also develop various techniques that further reduce the degree by a significant constant factor. This results in a general obfuscator with code size $\left((28\lambda |C|+ \frac{2^c}{c})10n\lambda\right)\mathrm{GES}(\frac{5n}{2c}+6,\lambda)$, where $\mathrm{GES}(\mu,\lambda)$ denotes the size of a single ring element of the Graded Encoding Scheme with multilinearity $\mu$ and security level $\lambda$. Based on our implementation of the required GES with a simplified CLT multilinear map, we may assume $\mathrm{GES}(\mu,\lambda) \approx \mu^2\lambda$. When $n=128$, we may get $\mu=31$; for example, our obfuscated AES will have code size $<10^{14}$ bits, whereas no implementable solution was known prior to this work. Our construction achieves VBB security if our pseudorandom function, our pseudorandom generator, and our application of the CLT multilinear map are all secure.
Neriman Gamze Orhon, Huseyin Hisil
ePrint Report
This paper presents faster inversion-free point addition formulas for the curve $y(1+ax^2)=cx(1+dy^2)$. The proposed formulas improve the point doubling operation count record from 6M+5S to 8M and the mixed-addition operation count record from 10M to 8M. Both sets of formulas are shown to be 4-way parallel, leading to an effective cost of 2M per group operation.
Sikhar Patranabis, Debdeep Mukhopadhyay
ePrint Report
We solve the open problem of constructing computationally function private public-key predicate encryption schemes. Existing public-key constructions for predicate encryption satisfy a statistical notion of function privacy, which was introduced for equality predicates by Boneh, Raghunathan and Segev in CRYPTO'13, and was generalized to subspace-membership predicates in ASIACRYPT'13. The secret keys in these constructions are statistically indistinguishable from random as long as the underlying predicates are sampled from sufficiently unpredictable distributions. The alternative notion of computational function privacy, where the secret keys are computationally indistinguishable from random, has, to the best of our knowledge, only been concretely realized in the private-key setting.

In this paper, we present the first computationally function private constructions for public-key predicate encryption. Our framework for computational function privacy requires that a secret key corresponding to a predicate sampled from a distribution with min-entropy super-logarithmic in the security parameter $\lambda$ is computationally indistinguishable from another secret key corresponding to a uniformly and independently sampled predicate. Within this framework, we develop a novel approach, denoted as encrypt-augment-recover, that takes an existing predicate encryption scheme and transforms it into a computationally function private one while retaining its original data privacy guarantees. Our approach leads to public-key constructions for identity-based encryption and inner-product encryption that are fully data private and computationally function private under a family of weaker variants of the DLIN assumption. Our constructions, in fact, satisfy an enhanced notion of function privacy, requiring that an adversary learns nothing more than the minimum necessary from a secret key, even given corresponding ciphertexts with attributes that allow successful decryption.
Sikhar Patranabis, Debdeep Mukhopadhyay
ePrint Report
Fine-grained access control, especially in shared data environments such as the cloud, is a common requirement. Suppose a data owner encrypts and stores a collection of $N$ documents in the cloud, and wishes to grant access to a subset $\mathcal{S}$ of these to a data user. The data user must therefore be able to perform search operations only on the subset $\mathcal{S}$ of documents, and nothing else. For example, the owner may want to store the documents in separate folders pertaining to work, family, personal matters, etc., and selectively grant search access to one or more folders to different users. This is commonly encountered in document-sharing environments such as Dropbox and Google Drive. Most existing searchable encryption schemes today assume that a user has search permissions over the entire document collection, and are hence not designed explicitly to address such a controlled-search scenario. The only previous work in this direction, by Cui et al., possesses inherent security weaknesses that can be exploited to trivially break the privacy of their system.

In this paper, we provide a novel, efficient and secure solution to this problem by introducing a new public-key searchable encryption primitive referred to as key-aggregate searchable encryption (KASE). KASE is secure under well-defined cryptographic assumptions, and requires optimal network communication between the server and the data owners/data users. KASE is the first public-key searchable encryption scheme to achieve performance and security on par with the most efficient symmetric-key searchable encryption constructions. Additionally, while all existing schemes produce trapdoors that grow linearly with the size of the document collection, KASE produces constant-size trapdoors that are independent of the document collection size. KASE is also efficient and scalable with respect to data updates, making it ideal for use in dynamic cloud-based search applications.