International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

29 August 2017

Rahul Chatterjee, Joanne Woodage, Yuval Pnueli, Anusha Chowdhury, Thomas Ristenpart
ePrint Report
Password checking systems traditionally allow login only if the correct password is submitted. Recent work on typo-tolerant password checking suggests that usability can be improved, with negligible security loss, by allowing a small number of typographical errors. Existing systems, however, can only correct a handful of errors, such as accidentally leaving caps lock on or incorrect capitalization of the first letter in a password. This leaves out numerous kinds of typos made by users, such as transposition errors, substitutions, or capitalization errors elsewhere in a password. Some users, therefore, receive no benefit from existing typo-tolerance mechanisms.

We introduce personalized typo-tolerant password checking. In our approach, the authentication system learns over time the typos made by a specific user. In experiments using Mechanical Turk, we show that 45% of users would benefit from personalization. Therefore, we design a system, called TypTop, that securely implements personalized typo-tolerance. Underlying TypTop is a new stateful password-based encryption scheme that can be used to store recent failed login attempts. Our formal analysis shows that security in the face of an attacker that obtains the state of the system reduces to the difficulty of a brute-force dictionary attack against the real password. We implement TypTop for Linux and Mac OS login and report on a proof-of-concept deployment.
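As a concrete illustration, the two correctors that existing (non-personalized) typo-tolerant checkers apply can be sketched in a few lines. This is a minimal sketch, not TypTop itself: it uses PBKDF2 as a stand-in for a proper slow password hash and omits the personalized, encrypted typo cache that is the paper's contribution.

```python
import hashlib, os

def _hash(pw: str, salt: bytes) -> bytes:
    # In practice use a slow memory-hard hash (argon2/scrypt); PBKDF2 keeps the sketch short.
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 10_000)

# The two classic correctors handled by prior typo-tolerance work.
CORRECTORS = [
    lambda pw: pw.swapcase(),                # caps lock left on
    lambda pw: pw[:1].swapcase() + pw[1:],   # wrong case on the first letter
]

def register(password: str) -> dict:
    salt = os.urandom(16)
    return {"salt": salt, "hash": _hash(password, salt)}

def check(record: dict, submitted: str) -> bool:
    """Accept the exact password or any corrector-fixed variant of it."""
    for candidate in [submitted] + [fix(submitted) for fix in CORRECTORS]:
        if _hash(candidate, record["salt"]) == record["hash"]:
            return True
    return False

rec = register("Secret123")
assert check(rec, "Secret123")       # exact match
assert check(rec, "sECRET123")       # caps-lock typo corrected
assert check(rec, "secret123")       # first-letter case corrected
assert not check(rec, "Secret124")   # genuine mismatch rejected
```

Personalization, in contrast, would replace the fixed `CORRECTORS` list with typos learned per user from recent failed login attempts.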

28 August 2017

Hao Chen, Kim Laine, Rachel Player, Yuhou Xia
ePrint Report
In most RLWE-based homomorphic encryption schemes the native plaintext elements are polynomials in a ring $\mathbb{Z}_t[x]/(x^n+1)$, where $n$ is a power of $2$ and $t$ is an integer modulus. For integer or rational number arithmetic one typically uses an encoding scheme that converts the inputs to polynomials and allows the result of the homomorphic computation to be decoded back to an integer or rational number, respectively. The problem is that the modulus $t$ often needs to be extremely large to prevent the plaintext polynomial coefficients from being reduced modulo $t$ during the computation, which is a requirement for the decoding operation to work correctly. This results in larger noise growth and prevents the evaluation of deep circuits unless the encryption parameters are significantly increased.

We combine a trick of Hoffstein and Silverman, where the modulus $t$ is replaced by a polynomial $x-b$, with the Fan-Vercauteren homomorphic encryption scheme. This yields a new scheme with a very convenient plaintext space $\mathbb{Z}/(b^n+1)\mathbb{Z}$. We then show how rational numbers can be encoded as elements of this plaintext space, enabling homomorphic evaluation of deep circuits with high-precision rational number inputs. We perform a fair and detailed comparison to the Fan-Vercauteren scheme with the Non-Adjacent Form encoder, and find that the new scheme significantly outperforms this approach. For example, in a parameter setting where the new scheme allows us to evaluate circuits of depth $9$ with $32$-bit integer inputs, the Fan-Vercauteren scheme only allows us to go up to depth $2$. We conclude by discussing how known applications can benefit from the new scheme.
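The decoding step in such encoders amounts to evaluating the plaintext polynomial at a point; with the modulus $x-b$, that point is $x=b$, which is why the plaintext space becomes $\mathbb{Z}/(b^n+1)\mathbb{Z}$. A minimal sketch of the base-$b$ encode/decode pair (on plain integers, not on actual FV ciphertexts) looks as follows; the function names are illustrative only.

```python
def encode(m: int, b: int, n: int) -> list:
    """Write m in base b as a coefficient list of length <= n (little-endian)."""
    coeffs = []
    while m:
        coeffs.append(m % b)
        m //= b
    assert len(coeffs) <= n, "message too large for the ring dimension"
    return coeffs

def decode(coeffs: list, b: int) -> int:
    """Evaluate the polynomial at x = b to recover the integer."""
    return sum(c * b**i for i, c in enumerate(coeffs))

# Round trip: 12345 encoded with small coefficients, recovered exactly.
assert decode(encode(12345, b=2, n=64), b=2) == 12345
```

With an integer modulus $t$, coefficients that grow past $t$ during homomorphic computation get reduced and decoding breaks; with $x-b$, the encoded value simply lives modulo $b^n+1$.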
Shuichi Katsumata
ePrint Report
Predicates are used in cryptography as a fundamental tool to control the disclosure of secrets. However, how to embed a particular predicate into a cryptographic primitive is usually not given much attention. In this work, we formalize the idea of encoding predicates as arithmetic circuits and observe that choosing the right encoding of a predicate may lead to an improvement in many aspects such as the efficiency of a scheme or the required hardness assumption. In particular, we develop two predicate encoding schemes with different properties and construct cryptographic primitives that benefit from these: verifiable random functions (VRFs) and predicate encryption (PE) schemes.

- We propose two VRFs on bilinear maps. Both of our schemes are secure under a non-interactive $Q$-type assumption where $Q$ is only poly-logarithmic in the security parameter, and they achieve either a poly-logarithmic verification key size or proof size. This is a significant improvement over prior works, where all previous schemes either require a strong hardness assumption or a large verification key and proof size.

- We propose a lattice-based PE scheme for the class of \emph{multi-dimensional equality} (MultEq) predicates. This class of predicates is expressive enough to capture many of the appealing applications that motivate PE schemes. Our scheme achieves the best known results in terms of the required approximation factor for LWE (we only require $\mathrm{poly}(\lambda)$) and the decryption time. In particular, all existing PE schemes that support the class of MultEq predicates require either a subexponential LWE assumption or an exponential decryption time (in the dimension of the MultEq predicates).
Shashank Agrawal, Melissa Chase
ePrint Report
Time and again, attribute-based encryption has been shown to be the natural cryptographic tool for building various types of conditional access systems with far-reaching applications, but the deployment of such systems has been very slow. A central issue is the lack of an encryption scheme that can operate on sensitive data very efficiently and, at the same time, provides features that are important in practice.

This paper proposes the first fully secure ciphertext-policy and key-policy ABE schemes based on a standard assumption on Type-III pairing groups, which do not put any restriction on policy type or attributes. We implement our schemes along with several other prominent ones using the Charm library, and demonstrate that they perform better on almost all parameters of interest.
Daniel Genkin, Luke Valenta, Yuval Yarom
ePrint Report
In recent years, applications increasingly adopt security primitives designed with better countermeasures against side channel attacks. A concrete example is Libgcrypt's implementation of ECDH encryption with Curve25519. The implementation employs the Montgomery ladder scalar-by-point multiplication, uses the unified, branchless Montgomery double-and-add formula and implements a constant-time argument swap within the ladder. However, Libgcrypt's field arithmetic operations are not implemented in a constant-time side-channel-resistant fashion.

Based on the secure design of Curve25519, users of the curve are advised that there is no need to perform validation of input points. In this work we demonstrate that when this recommendation is followed, the mathematical structure of Curve25519 facilitates the exploitation of side-channel weaknesses.

We demonstrate the effect of this vulnerability on three software applications---encrypted git, email and messaging---that use Libgcrypt. In each case, we show how to craft malicious OpenPGP files that use the Curve25519 point of order 4 as a chosen ciphertext to the ECDH encryption scheme. We find that the resulting interactions of the point at infinity, order-2, and order-4 elements in the Montgomery ladder scalar-by-point multiplication routine create side channel leakage that allows us to recover the private key in as few as 11 attempts to access such malicious files.
Raphael Bost, Brice Minaud, Olga Ohrimenko
ePrint Report
Using dynamic Searchable Symmetric Encryption, a user with limited storage resources can securely outsource a database to an untrusted server, in such a way that the database can still be searched and updated efficiently. For these schemes, it would be desirable that updates do not reveal any information a priori about the modifications they carry out, and that deleted results remain inaccessible to the server a posteriori. While the first property, called forward privacy, has been the main motivation of recent works, the second one, backward privacy, has been overlooked. In this paper, we study for the first time the notion of backward privacy for searchable encryption. After giving formal definitions for different flavors of backward privacy, we present several schemes achieving both forward and backward privacy, with various efficiency trade-offs. Our constructions crucially rely on primitives such as constrained pseudo-random functions and puncturable encryption schemes. Using these advanced cryptographic primitives allows for a fine-grained control of the power of the adversary, preventing her from evaluating functions on selected inputs, or decrypting specific ciphertexts. In turn, this high degree of control allows our SSE constructions to achieve the stronger forms of privacy outlined above. As an example, we present a framework to construct forward-private schemes from range-constrained pseudo-random functions. Finally, we provide experimental results for implementations of our schemes, and study their practical efficiency.
Zheng Li, Wenquan Bi, Xiaoyang Dong, Xiaoyun Wang
ePrint Report
Conditional cube attack is an efficient key-recovery attack on Keccak keyed modes proposed by Huang et al. at EUROCRYPT 2017. By assigning bit conditions, the diffusion of a conditional cube variable is reduced. Then, using a greedy algorithm (Algorithm 4 in Huang et al.'s paper), Huang et al. find some ordinary cube variables that do not multiply together in the 1st round and do not multiply with the conditional cube variable in the 2nd round. Then the key-recovery attack is launched. The key part of a conditional cube attack is to find enough ordinary cube variables. Note that the greedy algorithm given by Huang et al. adds an ordinary cube variable without considering its bad effect: the new ordinary cube variable may prevent many other variables from being selected as ordinary cube variables (because they multiply with the new ordinary cube variable in the first round).

In this paper, we introduce a new MILP model to solve the above problem. We show how to model the CP-like-kernel, and how to model the requirement that the ordinary cube variables do not multiply together in the 1st round and do not multiply with the conditional cube variable in the 2nd round. Based on these modeling strategies, a series of linear inequalities are given to restrict the way an ordinary cube variable can be added. Then, by choosing an objective function that maximizes the number of ordinary cube variables, we convert Huang et al.'s greedy algorithm into an MILP problem whose solution yields the maximal set of ordinary cube variables.
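The selection problem being optimized can be pictured as choosing a maximum conflict-free subset of candidate variables. The toy sketch below, with a made-up conflict graph, shows how greedy insertion order can block strictly better solutions; `optimal` is a brute-force stand-in for the MILP optimum, and none of this models the actual Keccak state or CP-like-kernel constraints.

```python
from itertools import combinations

# Toy conflict graph: edge (i, j) means variables i and j multiply in round 1,
# so they cannot both be chosen as ordinary cube variables.
variables = range(6)
conflicts = {(0, 1), (0, 2), (0, 3)}

def compatible(subset) -> bool:
    return all(tuple(sorted(p)) not in conflicts for p in combinations(subset, 2))

def greedy() -> list:
    """Add variables in order, never reconsidering earlier choices."""
    chosen = []
    for v in variables:
        if compatible(chosen + [v]):
            chosen.append(v)
    return chosen

def optimal() -> list:
    """Brute-force maximum conflict-free subset (stand-in for the MILP)."""
    for r in range(len(variables), 0, -1):
        for subset in combinations(variables, r):
            if compatible(subset):
                return list(subset)
    return []

assert len(greedy()) == 3    # greedy picks 0 first, which blocks 1, 2 and 3
assert len(optimal()) == 5   # {1, 2, 3, 4, 5} is conflict-free
```

This is exactly the "bad effect" described above: one early greedy choice can exclude many later candidates, while a global optimization avoids the trap.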

Using this new MILP tool, we improve Huang et al.'s key-recovery attacks on reduced-round Keccak-MAC-384 and Keccak-MAC-512 by 1 round, get the first 7-round and 6-round key-recovery attacks, respectively. For Ketje Major, we conclude that when the nonce is no less than 11 lanes, a 7-round key-recovery attack could be achieved. In addition, for Ketje Minor, we use conditional cube variable with 6-6-6 pattern to launch 7-round key-recovery attack.
Andrei Lapets, Mayank Varia, Azer Bestavros, Frederick Jansen
ePrint Report
Individuals and organizations face a tension between (1) the explosion in the amount of valuable data that can be collected and processed and (2) the liability of possession and the threat of exposure of data (which may be sensitive) due to malicious actors, criminal enterprises, and software errors. These threats can lead entities to protect their data throughout its lifecycle, discouraging them from sharing it (or even assessing if sharing has value). Consequently, opportunities to benefit from collaborative data analysis are lost. Secure multi-party computation (MPC) can recover these opportunities and empower both individuals and organizations to benefit from collective data aggregation and analysis in contexts where data sharing is encumbered by confidentiality concerns, legal restrictions, or corporate policies. Theoretical constructs for MPC have been known for 35 years, with several existing software frameworks designed over the past 10 years. Successful examples of deploying MPC for social good include tax fraud detection, disease surveillance, and pay equity assessment.

Our own experiences deploying MPC indicate that the technology is beyond ready for transition and deployment in the real world for appropriate scenarios and at suitable scales. MPC's benefits are often subtle, and identifying compatible scenarios that would benefit from MPC poses a multi-disciplinary array of challenges. Many difficulties and opportunities remain in terms of both the accessibility and the scalability of the candidate solutions for a given scenario. How can the community ensure that further research and development efforts lead to building blocks that will have the flexibility necessary to fit idiosyncratic real-world use cases?

Based on our insights, we advocate for the construction of a collection of production-quality, modular, open-source components that can support a broad ecosystem in which organizations and developers can rapidly spin up applications that employ MPC (or related technologies) to protect security-sensitive data, perform privacy-preserving computations, and enable new opportunities for collective data analysis that are currently inhibited or disincentivized by legal, institutional, or economic constraints. Such an ecosystem can allow and incentivize data owners and a diverse assortment of service providers to leverage sensitive data in deriving new insights that serve participant goals and/or the public interest.

In addition to its security benefits, the ecosystem vision facilitates separation of responsibilities and areas of expertise by decoupling the work of software engineers, lawyers, data analysts, communications infrastructure providers, cloud providers, and others. Crucial ingredients for a realization of such an ecosystem include modular design of functionalities that enable delivery of MPC services, composable security analyses of such functionalities, policy-agnostic programming and static analysis techniques that enable modularity and scalability, and accessible and scalable production-quality software applications that utilize MPC functionalities.
Gottfried Herold, Max Hoffmann, Michael Klooß, Carla Ràfols, Andy Rupp
ePrint Report
Bilinear groups form the algebraic setting for a multitude of important cryptographic protocols including anonymous credentials, e-cash, e-voting, e-coupon, and loyalty systems. It is typical of such crypto protocols that participating parties need to repeatedly verify that certain equations over bilinear groups are satisfied, e.g., to check that computed signatures are valid, commitments can be opened, or non-interactive zero-knowledge proofs verify correctly. Depending on the form and number of equations this part can quickly become a performance bottleneck due to the costly evaluation of the bilinear map.

To ease this burden on the verifier, batch verification techniques have been proposed that allow one to combine and check multiple equations probabilistically, using fewer operations than checking each equation individually. In this work, we revisit the batch verification problem and existing standard techniques. We introduce a new technique which, in contrast to previous work, enables us to fully exploit the structure of certain systems of equations. Equations of the appropriate form naturally appear in many protocols, e.g., due to the use of Groth-Sahai proofs.

The beauty of our technique is that the underlying idea is pretty simple: we observe that many systems of equations can alternatively be viewed as a single equation of products of polynomials for which probabilistic polynomial identity testing following Schwartz-Zippel can be applied. Comparisons show that our approach can lead to significant improvements in terms of the number of pairing evaluations. Indeed, for the BeleniosRF voting system presented at CCS 2016, we can reduce the number of pairings (required for ballot verification) from $4k+140$, as originally reported by Chaidos et al., to $k+7$. As our implementation and benchmarks demonstrate, this may reduce the verification runtime to only $5\%$ to $13\%$ of the original runtime.
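The same random-linear-combination idea can be demonstrated with plain modular arithmetic standing in for pairing-product equations: instead of checking each equation individually, check one random combination of all of them. This is a generic Schwartz-Zippel sketch, not the paper's pairing-specific technique; the modulus is an arbitrary stand-in for a group order.

```python
import secrets

P = 2**61 - 1  # a prime; stand-in for the group order

def batch_verify(pairs) -> bool:
    """Check all a == b (mod P) at once via a random linear combination.

    Each true pair contributes 0; a set containing false pairs survives
    with probability at most 1/P (Schwartz-Zippel).
    """
    acc = 0
    for a, b in pairs:
        r = secrets.randbelow(P - 1) + 1   # nonzero random scalar
        acc = (acc + r * (a - b)) % P
    return acc == 0

good = [(x * x % P, pow(x, 2, P)) for x in range(1, 100)]
assert batch_verify(good)                 # 99 checks collapsed into one
assert not batch_verify(good + [(1, 2)])  # a single bad equation is caught
```

In the pairing setting the scalars become small exponents on the group elements, so the dominant cost, pairing evaluations, is shared across all equations.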
Pratish Datta, Ratna Dutta, Sourav Mukhopadhyay
ePrint Report
Attribute-based signatures (ABS) support restricted signing keys which allow signers to sign messages with respect to specific signing credentials or attributes that are accepted by the signing policies embedded within the signing keys. This paper presents the first ABS scheme supporting signing policies representable by Turing machines (TMs), based on indistinguishability obfuscation (IO) for circuits. Our work places no constraint on the types of TMs that can be associated with the signing keys: the TMs can accept signing attribute strings of unbounded polynomial length, and there is no limit on the description size or space complexity of the TMs. Prior ABS schemes could support at most circuit-realizable signing policies, and consequently could only deal with signing attribute strings whose lengths are fixed during setup.
Fukang Liu, Florian Mendel, Gaoli Wang
ePrint Report
In this paper, we propose an improved cryptanalysis of the double-branch hash function RIPEMD-160 standardized by ISO/IEC. Firstly, we show how to theoretically calculate the step differential probability of RIPEMD-160, which was stated as an open problem by Mendel et al. at ASIACRYPT 2013. Secondly, based on the method proposed by Mendel et al. to automatically find a differential path of RIPEMD-160, we construct a 30-step differential path where the left branch is sparse and the right branch is kept as sparse as possible. To ensure that the message modification techniques can be applied to RIPEMD-160, some extra bit conditions must be pre-deduced and well controlled. These extra bit conditions are used to ensure that the modular difference can be correctly propagated. This way, we can find a collision of 30-step RIPEMD-160 with complexity $2^{67}$. This is the first collision attack on round-reduced RIPEMD-160. Moreover, by a different choice of the message words used to merge the two branches and by adding some conditions to the starting point, the semi-free-start collision attack on the first 36 steps of RIPEMD-160 from ASIACRYPT 2013 can be improved. However, the previous way to pre-compute the equation $T^{\lll S_0}\boxplus C_0=(T\boxplus C_1)^{\lll S_1}$ costs too much. To overcome this obstacle, we are inspired by the work of Daum et al. on MD5 and describe a method to reduce the time and memory complexity of pre-computing that equation. Combining all these techniques, the time complexity of the semi-free-start collision attack on the first 36 steps of RIPEMD-160 can be reduced by a factor of $2^{15.3}$, to $2^{55.1}$.
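For intuition, the rotation equation $T^{\lll S_0}\boxplus C_0=(T\boxplus C_1)^{\lll S_1}$ can be solved by brute force over a toy word size; the paper's point is precisely that naive pre-computation of such solution tables over 32-bit words, for all constants, costs too much time and memory. The parameter values below are arbitrary.

```python
W = 8          # toy word size; RIPEMD-160 uses 32-bit words
MASK = (1 << W) - 1

def rotl(x: int, s: int) -> int:
    """Rotate a W-bit word left by s positions."""
    return ((x << s) | (x >> (W - s))) & MASK if s else x

def solutions(C0: int, C1: int, S0: int, S1: int) -> list:
    """All T with rotl(T, S0) + C0 == rotl(T + C1, S1)  (mod 2^W)."""
    return [T for T in range(1 << W)
            if (rotl(T, S0) + C0) & MASK == rotl((T + C1) & MASK, S1)]

# Sanity check: with C0 = C1 = 0 and equal rotations, every T is a solution.
assert len(solutions(0, 0, 3, 3)) == 1 << W

# Every returned T really satisfies the equation.
for T in solutions(C0=3, C1=5, S0=2, S1=2):
    assert (rotl(T, 2) + 3) & MASK == rotl((T + 5) & MASK, 2)
```

At $W=32$ this exhaustive scan per constant pair is what becomes prohibitive, motivating the structured pre-computation method described above.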

26 August 2017

Dresden, Germany, 19 March - 23 March 2018
Event Calendar
Event date: 19 March to 23 March 2018
Submission deadline: 10 September 2017
Sibenik, Croatia, 11 June - 15 June 2018
Event Calendar
Event date: 11 June to 15 June 2018
Singapore University of Technology and Design (SUTD)
Job Posting
Singapore University of Technology and Design (SUTD) is a young university which was established in collaboration with MIT. iTrust is a Cyber Security Research Center with about 15 multi-discipline faculty members from SUTD. It has the world’s best facilities in cyber-physical systems (CPS) including testbeds for Secure Water Treatment (SWaT), Water Distribution (WADI), Electric Power and Intelligent Control (EPIC), and IoT.

I am looking for PhD interns with an interest in cyber-physical system security (IoT, autonomous vehicles, power grids, etc.), especially on topics such as: 1) lightweight and low-latency crypto algorithms for CPS devices; 2) resilient authentication of devices and data in CPS; 3) advanced SCADA firewalls to filter more sophisticated attack packets in CPS; 4) big-data-based threat analytics for detection of both known and unknown threats; 5) attack mitigation to increase the resilience of CPS. The attachment will be at least 3 months. An allowance will be provided for local expenses.

Interested candidates should send a CV with a research statement to Prof. Jianying Zhou.

Closing date for applications: 30 September 2017

Contact: jianying_zhou (at) sutd.edu.sg

More information: http://jianying.space/


25 August 2017

Election
The 2017 election is being held to fill the three IACR Director positions currently held by Masayuki Abe, Josh Benaloh, and Moti Yung, whose 3-year terms are ending. Nominations are due by September 24, 2017. Information about nomination is available at https://www.iacr.org/elections/2017/.
Vladimir Kolesnikov, Naor Matania, Benny Pinkas, Mike Rosulek, Ni Trieu
ePrint Report
We present a new paradigm for multi-party private set intersection (PSI) that allows $n$ parties to compute the intersection of their datasets without revealing any additional information. We explore a variety of instantiations of this paradigm. Our protocols avoid computationally expensive public-key operations and are secure in the presence of any number of semi-honest participants (i.e., without an honest majority).

We demonstrate the practicality of our protocols with an implementation. To the best of our knowledge, this is the first implementation of a multi-party PSI protocol. For 5 parties with datasets of $2^{20}$ items each, our protocol requires only $72$ seconds. In an optimization achieving a slightly weaker variant of security (augmented semi-honest model), the same task requires only $22$ seconds.

The technical core of our protocol is oblivious evaluation of a \emph{programmable} pseudorandom function (OPPRF), which we instantiate in three different ways. We believe our new OPPRF abstraction and constructions may be of independent interest.
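To illustrate the "programmable" part of the abstraction: a PRF can be patched to output chosen values on chosen points by publishing an XOR hint. The sketch below is conceptual only; it uses SHA-256 as a toy PRF, and its dictionary hint openly reveals which points are programmed, whereas a real OPPRF hides the programmed points and is evaluated obliviously inside the protocol.

```python
import hashlib, os

def prf(k: bytes, x: bytes) -> bytes:
    # Toy PRF; a real instantiation would use HMAC or AES.
    return hashlib.sha256(k + x).digest()

def program(k: bytes, points: dict) -> dict:
    """Hint so the combined function outputs y on each programmed x."""
    return {x: bytes(a ^ b for a, b in zip(y, prf(k, x)))
            for x, y in points.items()}

def pprf_eval(k: bytes, hint: dict, x: bytes) -> bytes:
    # On unprogrammed inputs the pad is zero, so we get the plain PRF value.
    pad = hint.get(x, b"\x00" * 32)
    return bytes(a ^ b for a, b in zip(prf(k, x), pad))

k = os.urandom(16)
target = os.urandom(32)
hint = program(k, {b"alice@example.com": target})
assert pprf_eval(k, hint, b"alice@example.com") == target                     # programmed
assert pprf_eval(k, hint, b"bob@example.com") == prf(k, b"bob@example.com")  # pseudorandom
```

In the PSI protocol the programmed outputs encode set membership, and obliviousness ensures the evaluator learns neither the key nor which inputs were programmed.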
Daniel Günther, Ágnes Kiss, Thomas Schneider
ePrint Report
A universal circuit (UC) can be programmed to simulate any circuit up to a given size $n$ by specifying its program bits. UCs have several applications, including private function evaluation (PFE). The asymptotic lower bound for the size of a UC is proven to be $\Omega(n\log n)$. Valiant (STOC'76) provided two theoretical UC constructions, the so-called 2-way and 4-way constructions, with sizes $5n\log_2n$ and $4.75n\log_2n$, respectively. The 2-way UC has recently been brought into practice in concurrent and independent results by Kiss and Schneider (EUROCRYPT'16) and Lipmaa et al. (ePrint 2016/017). Moreover, the latter work generalized Valiant's construction to any $k$-way UC.

In this paper, we revisit Valiant's UC constructions and the recent results, and provide a modular and generic embedding algorithm for any $k$-way UC. Furthermore, we discuss the possibility for a more efficient UC based on a 3-way recursive strategy. We show with a counterexample that even though it is a promising approach, the 3-way UC does not yield an asymptotically better result than the 4-way UC. We propose a hybrid approach that combines the 2-way with the 4-way UC in order to minimize the size of the resulting UC. We elaborate on the concrete size and depth of all discussed UC constructions and show that our hybrid UC yields on average 3.65% improvement in size over the 2-way UC. We implement the 4-way UC in a modular manner based on our proposed embedding algorithm, and show that our methods for programming the UC can be generalized for any $k$-way construction.
Parvin Rastegari, Mehdi Berenjkoub
ePrint Report
In a designated verifier signature (DVS) scheme, the validity of the signature can only be verified by a designated entity chosen by the signer. Furthermore, the designated entity cannot convince a third party that the signature was generated by the signer. A multi-designated verifiers signature (MDVS) scheme is an extension of a DVS that involves multiple designated verifiers. To the best of our knowledge, there are two existing patterns for an MDVS. In the first pattern, the cooperation of all designated verifiers is necessary for checking the validity of the signature. In the second pattern, every verifier in the set of designated verifiers can check the validity of the signature independently. In this paper, we propose a generic new pattern for an MDVS in which a threshold number of the designated verifiers can check the validity of the signature. We present a concrete scheme and prove its security requirements in the standard model. Finally, we propose some applications of this pattern.
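One standard building block for such threshold patterns is Shamir secret sharing, under which any $t$ of $n$ parties can jointly reconstruct a value needed for verification while fewer than $t$ learn nothing. The sketch below illustrates only this generic threshold mechanism, not the scheme proposed in the paper.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret: int, t: int, n: int) -> list:
    """Split `secret` so that any t of the n shares recover it (Shamir)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def recover(shares: list) -> int:
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 shares suffice
assert recover(shares[1:4]) == 123456789  # a different triple works too
```

In a threshold MDVS, the analogous idea is that a verification capability is distributed so that only a quorum of designated verifiers can exercise it.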
Sikhar Patranabis, Debdeep Mukhopadhyay
ePrint Report
Hidden vector encryption (HVE), introduced by Boneh and Waters in TCC'07, is an expressive sub-class of predicate encryption, that allows conjunctive, subset, range and comparison queries over encrypted data. All existing HVE constructions in the cryptographic literature use bilinear pairings over either composite order or prime order groups. In this paper, we address the open problem of constructing a lightweight symmetric-key HVE scheme that does not use bilinear pairings, but only efficient cryptographic primitives such as pseudo-random functions (PRFs) and block ciphers. The relevance of this problem stems from the implementation and performance overheads for bilinear pairings over composite/prime order groups, which are significantly larger than that for PRFs and block ciphers, in both software and hardware. While lightweight symmetric-key constructions exist for keyword search on encrypted data, we aim to expand the scope of such constructions to support a richer set of query predicates. In this direction, we present the first lightweight symmetric-key HVE construction that does not use bilinear pairings. Our construction only uses a PRF and a PCPA-secure symmetric-key encryption algorithm, making it amenable to both hardware and software implementations in real-life resource-constrained environments. We prove the selective-simulation-security and adaptive-simulation-security of our construction in the standard model and ideal cipher model, respectively, against probabilistic polynomial-time adversaries that can make an unrestricted number of ciphertext generation and secret-key generation queries.
Zvika Brakerski, Rotem Tsabary, Vinod Vaikuntanathan, Hoeteck Wee
ePrint Report
In a constrained PRF, the owner of the PRF key K can generate constrained keys K_f that allow anyone to evaluate the PRF on inputs x that satisfy the predicate f (namely, where f(x) is "true") but reveal no information about the PRF evaluation on the other inputs. A private constrained PRF goes further by requiring that the constrained key K_f hides the predicate f.

Boneh, Kim and Montgomery (EUROCRYPT 2017) presented a construction of private constrained PRF for point function constraints, and Canetti and Chen (EUROCRYPT 2017) presented a completely different construction for NC1 constraints. In this work, we show two constructions of LWE-based constraint-hiding constrained PRFs for general predicates described by polynomial-size circuits.

The two constructions are based on two distinct techniques that we show have further applicability by constructing weak attribute-hiding predicate encryption schemes. In a nutshell, the first construction imports the technique of modulus switching from the FHE world into the domain of trapdoor extension and homomorphism. The second construction shows how to use the duality between FHE secret-key/randomness and ABE randomness/secret-key to construct a scheme with dual use of the same values for both FHE and ABE purposes.
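The notion of a constrained key is easy to see in the classic GGM construction, where the key at an internal tree node lets one evaluate the PRF on every input sharing that prefix. Note that this simple sketch is not constraint-hiding (the constrained key openly reveals the prefix), which is exactly the gap the constructions above close; SHA-256 serves here as a toy length-doubling PRG.

```python
import hashlib, os

def G(seed: bytes, bit: str) -> bytes:
    # Toy PRG step: derive the left ('0') or right ('1') child key.
    return hashlib.sha256(seed + bit.encode()).digest()

def ggm_eval(key: bytes, bits: str) -> bytes:
    """Descend the GGM tree along the input bits."""
    for b in bits:
        key = G(key, b)
    return key

def constrain(master: bytes, prefix: str) -> bytes:
    """A key that evaluates the PRF only on inputs starting with `prefix`."""
    return ggm_eval(master, prefix)

k = os.urandom(32)
k_00 = constrain(k, "00")
x = "0010"
# The constrained key reproduces the PRF on inputs matching the prefix...
assert ggm_eval(k_00, x[2:]) == ggm_eval(k, x)
# ...while inputs outside the prefix require other, unreleased subtree keys.
```

A private constrained PRF would additionally require that k_00 reveal nothing about which prefix (or, more generally, which circuit) it is constrained to.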