International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

21 April 2020

University of Auckland, New Zealand
Job Posting

Due to the potential threat of quantum computers, the research community is re-evaluating the security of a number of protocols and systems in widespread use. At the very least it is necessary to replace some common cryptographic building blocks with post-quantum alternatives. However, in some settings, the resulting systems may not be practical. It is therefore appropriate to reconsider, from the ground up, these protocols and systems. This PhD project will initiate a study of such protocols and systems. The project will leverage the NIST post-quantum standardization process to form a clear picture of the current state of post-quantum crypto. The project will develop new lightweight solutions for certain applications such as the internet of things (IoT).

The project will be supervised by Professor Steven Galbraith, together with other members of the Cyber Security Foundry at the University of Auckland.

Required skills and experience: a Bachelor's degree with honours, or a Master's degree, in Engineering, Computer Science, or Mathematics. Good mathematical knowledge and an understanding of rigorous mathematical thinking. Good knowledge of cryptography and information security. Programming skills. Good communication skills, both written and spoken.

  • Duration: 3 years
  • Value: International Student Fees + stipend of NZ$ 27,900 per year.
  • Application deadline: 20/5/2020
Application process:
  • Email your CV to Keshala De Silva, with the subject line "Application for PhD Studentship on Applications of post-quantum cryptography".
  • If you have written a Master's thesis or similar, please email a PDF of it.
For more information about the PhD application process at Auckland visit:
  • https://www.auckland.ac.nz/en/study/study-options/find-a-study-option/mathematics/doctoral.html
  • https://www.auckland.ac.nz/en/study/applications-and-admissions/apply-now.html

    Closing date for applications: 20 May 2020

    Contact: Steven Galbraith

Cryptanalysis Taskforce @ Nanyang Technological University, Singapore
Job Posting

(Yes! We are still hiring despite COVID-19.)

The Cryptanalysis Taskforce at Nanyang Technological University in Singapore, led by Prof. Jian Guo, is seeking candidates to fill 3 postdoctoral research fellow positions in symmetric-key cryptography, including but not limited to the following sub-areas:
  • tool-aided cryptanalysis, such as MILP, CP, STP, and SAT
  • machine-learning-aided cryptanalysis and designs
  • privacy-preserving friendly symmetric-key designs
  • quantum cryptanalysis
  • cryptanalysis against SHA-3 and AES
Established in 2014, the Cryptanalysis Taskforce is a group dedicated to research in symmetric-key cryptography. Since then, the team has been active in both publications at IACR venues and services for the IACR. It has carried out substantial cryptanalysis work on various important targets such as SHA-3, and is expanding its interests to the areas mentioned above, with strong funding support from the university and from government agencies in Singapore. We offer a competitive salary package with extremely low tax, as well as an excellent environment dedicated to research in Singapore. The contract will initially be for 2 years, with the possibility of extension. Candidates are expected to have a proven record of publications in IACR conferences. Interested candidates should send their CV and 2 reference letters to Jian Guo. Review of applications will begin immediately and continue until the positions are filled. More information about the Cryptanalysis Taskforce research group can be found via http://team.crypto.sg.

Closing date for applications:

Contact: Asst Prof. Jian Guo, guojian@ntu.edu.sg

More information: http://team.crypto.sg


20 April 2020

NIO; San Jose, California
Job Posting

Responsibilities

  • Design and build security products for connected and autonomous vehicles.
  • Research security problems and solutions related to vehicles and transportation
  • Design in-vehicle security mechanisms, such as secure vehicle network communication, on-car IDS/IPS, and firewall

Qualifications

  • Strong grasp of security fundamentals, such as network security, applied cryptography, server security, and end-point security
  • In-depth knowledge of the Linux kernel and OS, and of network protocols (TCP/IP, HTTP, MQTT, etc.)
  • Hands-on experience with Secure Boot on Arm or Aurix processors

Preferred Qualifications

  • Experience with Linux kernel hardening
  • Knowledge of CAN and vehicle system architecture
  • Knowledge of security of various wireless technologies (such as BLE and NFC)

Closing date for applications:

Contact:

Marisela Peifer: Sr Manager, People Ops & Talent

Marisela.Peifer@nio.io

More information: https://jobs.lever.co/nio/8f29bd44-663b-4de2-b6e2-9e596495d5b9

3 July 2020
Event Calendar
Event date: 3 July 2020
Submission deadline: 3 July 2020

Ittai Abraham, Kartik Nayak, Ling Ren, Nibesh Shrestha
ePrint Report
Synchronous consensus protocols, by definition, have a worst-case commit latency that depends on the bounded network delay. The notion of optimistic responsiveness was recently introduced to allow synchronous protocols to commit instantaneously when some optimistic conditions are met. In this work, we revisit this notion of optimistic responsiveness and present optimal latency results.

We present a lower bound for Byzantine Broadcast that relates the latencies of optimistic and synchronous commits when the designated sender is honest and the optimistic commit can tolerate some faults. We then present two matching upper bounds for tolerating f faults out of n = 2f + 1 parties. Our first upper bound achieves optimal optimistic and synchronous commit latencies when the designated sender is honest and the optimistic commit can tolerate some faults. Our second upper bound achieves optimal optimistic and synchronous commit latencies when the designated sender is honest but the optimistic commit does not tolerate any faults. The matching lower and upper bounds make both results tight for n = 2f + 1. Our upper bound results are presented in a state machine replication setting with a steady-state leader who is replaced via a view-change protocol when progress stalls. For this setting, we also present an optimistically responsive protocol in which the view-change protocol is optimistically responsive too.
Ahmad Almorabea
ePrint Report
Sharing documents with a business partner is not always easy: the sender often needs to transmit sensitive information and wants to ensure both the integrity and the secrecy of the document, while guaranteeing that only the intended recipients can view it. People therefore tend to use encryption software, or to protect the document with a password that is then shared with the recipient so that only they can open it. Unfortunately, in many situations this method does not work, for a particular reason: once the sender sends an email, it starts its journey through the company's network and passes through many appliances, such as firewalls, Exchange servers, and most likely sandboxes. Sandboxes have one feature of interest here: when a sandbox sees an encrypted or password-protected file, it immediately stops the email and quarantines it, because it cannot scan the file or determine whether it is malicious, so the message is held for further automated or manual analysis depending on local procedures. Such an action can block a valid business transaction and cause business interruption. In this paper we introduce a scheme that allows protected files to be shared and analysed by sandboxes while ensuring that no one except the authorized recipients can view them.
Zhichun Lu, Runchao Han, Jiangshan Yu
ePrint Report
Payment Channel Networks (PCNs) are a promising approach to scaling blockchains. However, PCNs lack liquidity, as large-amount or multi-hop payments may fail. Payment griefing is one of the identified attacks on PCNs' liquidity, in which the payee withholds the preimage of a Hash Time Locked Contract (HTLC); before the payment expires, the coins involved in it cannot be used in other payments. We introduce the Bankrun attack, which exploits payment griefing to bank-run PCNs. In finance, a bank run occurs when numerous clients withdraw their money from a bank, which can make the bank insolvent or even bankrupt. In our Bankrun attack, the attacker generates Sybil nodes, establishes channels with hubs in the network, makes payments between his own nodes, and griefs them simultaneously. If the adversary has sufficient coins, he can lock a high percentage of the coins in the PCN, so that the PCN can no longer handle normal payments. We introduce a framework for launching Bankrun attacks and develop three strategies focused on minimising the cost, draining important channels, and locking the largest amount of coins, respectively. We evaluate the effectiveness of Bankrun attacks on Bitcoin's Lightning Network, the first and most well-known PCN. Our evaluation shows that, using channels with the 1.5% richest nodes, the attacker can lock 83% of the capacity of the entire network. With connections to these nodes, an adversary holding 13% (∼77 BTC) of the coins in the network can lock up to 45% (∼267 BTC) of the coins in the entire network until timeout (e.g., for an entire day); reduce the success rate of payments by 23.8%∼62.7%; increase payment fees by 3.5%∼14.0%; and increase the average number of payment attempts by 26.4%∼113.7%, for payments ranging from 100,000 to 1,900,000 satoshi (7∼135 USD).
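
To make the griefing primitive concrete, here is a minimal Python sketch (our own illustration, not the authors' code or the Lightning protocol) of a hash time locked contract whose payee simply never reveals the preimage, so the locked coins stay unusable until the timeout; all class and variable names are hypothetical.

    import hashlib, time

    class HTLC:
        """Toy hash time locked contract: the locked amount is released either by
        revealing the preimage of `digest` or by a refund after the timeout."""
        def __init__(self, amount, digest, timeout):
            self.amount = amount      # coins locked along the payment route
            self.digest = digest      # SHA-256 hash of the payee's preimage
            self.timeout = timeout    # absolute expiry time (unix seconds)
            self.settled = False

        def claim(self, preimage):
            """Payee claims the coins by revealing the preimage."""
            if not self.settled and hashlib.sha256(preimage).digest() == self.digest:
                self.settled = True
                return self.amount    # coins move to the payee
            return 0

        def refund(self, now):
            """Payer reclaims the coins once the timeout has passed."""
            if not self.settled and now >= self.timeout:
                self.settled = True
                return self.amount    # coins return to the payer
            return 0

    # Griefing: an attacker-controlled payee knows the preimage but never calls
    # claim(), so `amount` stays locked in every channel on the route until
    # refund() becomes possible at the timeout.
    preimage = b"attacker-known-secret"
    htlc = HTLC(amount=100_000,
                digest=hashlib.sha256(preimage).digest(),
                timeout=time.time() + 24 * 3600)

The Bankrun strategies described in the paper amplify this effect by routing many such payments between attacker-controlled endpoints through well-connected hubs.
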
Daniel Apon, Ray Perlner, Angela Robinson, Paolo Santini
ePrint Report
We report on the concrete cryptanalysis of LEDAcrypt, a 2nd Round candidate in NIST's Post-Quantum Cryptography standardization process and one of 17 encryption schemes that remain as candidates for near-term standardization. LEDAcrypt consists of a public-key encryption scheme built from the McEliece paradigm and a key-encapsulation mechanism (KEM) built from the Niederreiter paradigm, both using a quasi-cyclic low-density parity-check (QC-LDPC) code.

In this work, we identify a large class of extremely weak keys and provide an algorithm to recover them. For example, we demonstrate how to recover 1 in $2^{47.72}$ of LEDAcrypt's keys using only $2^{18.72}$ guesses at the 256-bit security level. This is a major, practical break of LEDAcrypt.
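
One back-of-the-envelope way to read these numbers (our own calculation, for intuition only) is as a multi-target attack: if a fraction $2^{-47.72}$ of honestly generated keys is weak and each weak key falls to $2^{18.72}$ guesses, then running the weak-key attack against roughly $2^{47.72}$ independently generated public keys is expected to break at least one of them at a total cost of about

    $2^{47.72} \cdot 2^{18.72} = 2^{66.44} \ll 2^{256},$

far below the claimed 256-bit security level.
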

Further, we demonstrate a continuum of progressively less weak keys (from extremely weak keys up to all keys) that can be recovered in substantially less work than previously known. This demonstrates that the imperfection of LEDAcrypt is fundamental to the system's design.
Thomas Pornin
ePrint Report
We present an optimization of Lagrange's algorithm for lattice basis reduction in dimension 2. The optimized algorithm is proven to be correct and to always terminate with quadratic complexity; it uses more iterations on average than Lagrange's algorithm, but each iteration is much simpler to implement, and faster. The achieved speed makes the application of the speed-up on ECDSA and EC Schnorr signatures described by Antipa et al. worthwhile, even for very fast curves such as Ed25519. We applied this technique to signature verification in Curve9767 and reduced verification time by 30 to 33% on both small (ARM Cortex M0+ and M4) and large (Intel Coffee Lake with AVX2) architectures.
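
For reference, here is a minimal Python sketch of the classic Lagrange (Gauss) reduction algorithm for a two-dimensional integer lattice basis, i.e., the baseline that the paper optimizes; this is not Pornin's optimized variant.

    def lagrange_reduce(u, v):
        """Classic Lagrange (Gauss) reduction of a 2D integer lattice basis.
        u and v are (x, y) tuples of integers."""
        def dot(a, b):
            return a[0] * b[0] + a[1] * b[1]
        while True:
            # Keep u as the shorter of the two basis vectors.
            if dot(u, u) > dot(v, v):
                u, v = v, u
            # Subtract the nearest-integer multiple of u from v.
            # (A production version would use exact integer rounding.)
            q = round(dot(u, v) / dot(u, u))
            if q == 0:
                return u, v
            v = (v[0] - q * u[0], v[1] - q * u[1])

    # Example: reducing a skewed basis of Z^2.
    print(lagrange_reduce((1, 0), (10**6, 1)))   # -> ((1, 0), (0, 1))
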
F. Betül Durak, Loïs Huguenin-Dumittan, Serge Vaudenay
ePrint Report
We design a sequence of protocols that allows organizations to enforce secure, strong, biometry-based access control of their users to their desktop machines. It provides both strong secure authentication and privacy. Moreover, our mechanism allows system administrators to grant various levels of access to their end-users by fine-tuning the access control policy. Our system implements privacy-by-design: it separates biometric data from identity information. It is practical: we fully implemented our protocols as a proof of concept for a hospital. We use a 3D fingervein scanner to capture the biometric data of the user on a Raspberry Pi. For the biometry part, we developed an optimal way to aggregate scores using sequential distinguishers; it trades the desired FAR and FRR against the average number of biometric captures.
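
As a rough illustration of how error rates can be traded against the number of captures, here is a generic Wald-style sequential test in Python. This is only our sketch of the general idea; the paper's optimal score aggregation via sequential distinguishers may differ, and the log-likelihood-ratio function llr is a hypothetical placeholder for a model of genuine versus impostor score distributions.

    import math

    def sequential_verify(capture, llr, far, frr, max_captures=10):
        """Capture biometric samples until the accumulated log-likelihood ratio
        crosses one of Wald's thresholds for the target FAR and FRR."""
        accept_thr = math.log((1 - frr) / far)   # decide "genuine" above this
        reject_thr = math.log(frr / (1 - far))   # decide "impostor" below this
        total = 0.0
        for _ in range(max_captures):
            score = capture()      # one biometric capture, returning a match score
            total += llr(score)    # evidence in favour of "genuine" vs "impostor"
            if total >= accept_thr:
                return True
            if total <= reject_thr:
                return False
        return False               # still undecided after max_captures: fail closed

Tighter FAR/FRR targets push the two thresholds apart, which on average requires more captures before a decision is reached.
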
Amit Behera, Or Sattath
ePrint Report
In a quantum money scheme, a bank can issue money that users cannot counterfeit. Similar to bills of paper money, most quantum money schemes assign a unique serial number to each money state, thus potentially compromising the privacy of the users of quantum money. However, in a quantum coin scheme, just as with traditional coins, all the money states are exact copies of each other, providing a better level of privacy for the users. A quantum money scheme can be private, i.e., only the bank can verify the money states, or public, meaning anyone can verify. In this work, we propose a way to lift any private quantum coin scheme -- which is known to exist based on the existence of one-way functions, due to Ji, Liu, and Song (CRYPTO'18) -- to a scheme that closely resembles a public quantum coin scheme. Verification of a new coin is done by comparing it to the coins the user already possesses, using a projector onto the symmetric subspace. No public coin scheme was known prior to this work. It is also the first construction that is very close to a public quantum money scheme and is provably secure based on standard assumptions. The lifting technique, when instantiated with the private quantum coin scheme of Mosca and Stebila (2010), gives rise to the first construction that is very close to an inefficient, unconditionally secure public quantum money scheme.
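
For intuition, the projector onto the symmetric subspace of $k+1$ registers (written here in our own notation; the paper's exact verification procedure may differ in its details) is

    $\Pi_{\mathrm{sym}} = \frac{1}{(k+1)!} \sum_{\pi \in S_{k+1}} P_\pi,$

where $P_\pi$ permutes the registers according to $\pi$. If a user holding $k$ valid coins appends a candidate coin and all $k+1$ registers carry identical pure states, the joint state lies entirely in the symmetric subspace and the measurement accepts with probability 1, whereas a coin in a different state lowers the acceptance probability.
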
Hao Chen, Miran Kim, Ilya Razenshteyn, Dragos Rotaru, Yongsoo Song, Sameer Wagh
ePrint Report
Computing on data in a manner that preserves privacy is of growing importance. Secure Multi-Party Computation (MPC) and Homomorphic Encryption (HE) are two cryptographic techniques for privacy-preserving computation. In this work, we have developed efficient UC-secure multiparty protocols for matrix multiplications and two-dimensional convolutions. We built upon the SPDZ framework and integrated state-of-the-art HE algorithms for matrix multiplication. We also optimized the zero-knowledge proofs and the ``sacrifice'' step of SPDZ to further improve efficiency. As a result, our protocol achieves a communication cost that is linear only in the input and output dimensions and not in the number of multiplication operations. We implemented our protocols and benchmarked them against the SPDZ LowGear variant (Keller et al., Eurocrypt'18). For multiplying two square matrices of size 128, we reduced the communication cost from 1.54 GB to 12.46 MB, an improvement of over two orders of magnitude that only grows with larger matrix sizes. For evaluating all convolution layers of the ResNet-50 neural network, we reduced the communication cost from 5 TB to 41 GB.
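
A rough calculation (ours, for intuition only) shows why scaling communication with the input/output size rather than with the multiplication count matters: a schoolbook product of two $128 \times 128$ matrices performs $128^3 = 2{,}097{,}152$ scalar multiplications, each of which would consume one Beaver triple in a naive SPDZ-style evaluation, whereas the two inputs and the output together contain only $3 \cdot 128^2 = 49{,}152$ field elements, a gap of more than a factor of 40 that grows linearly with the matrix dimension.
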
Kristian L. McDonald
ePrint Report
Pointcheval-Sanders (PS) signatures are well-studied in the literature and have found use within e.g. threshold credential schemes and redactable anonymous credential schemes. The present work leverages a mapping between PS signatures and a related class of polynomial-based signatures to construct multiple new signature/credential schemes. Specifically, new protocols for multi-message signatures, sequential aggregate signatures, signatures for message commitments, redactable signatures, and unlinkable redactable signatures are presented. A redactable anonymous credential scheme is also constructed. All original protocols employ constant-sized secret keys rather than linear-sized (in the number of messages/attributes). Security properties of the new protocols are analysed and a general discussion of security properties for both PS signatures and the new schemes is provided.
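
For readers unfamiliar with the starting point, here is the single-message Pointcheval-Sanders scheme in standard notation (a recap of the well-known construction, not of the new protocols). Over a type-3 pairing $e : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$ with groups of prime order $p$, the secret key is $(x, y)$, the public key is $(\tilde{g}, \tilde{X} = \tilde{g}^x, \tilde{Y} = \tilde{g}^y)$, and a signature on $m \in \mathbb{Z}_p$ is $\sigma = (h, h^{x + y m})$ for a random $h \in \mathbb{G}_1 \setminus \{1\}$; verification checks $\sigma_1 \neq 1$ and

    $e(\sigma_1, \tilde{X} \cdot \tilde{Y}^m) = e(\sigma_2, \tilde{g}).$

The standard multi-message variant uses a secret key $(x, y_1, \dots, y_r)$ for $r$ messages, which is the linear-sized key that the new protocols aim to avoid.
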
Kristian L. McDonald
ePrint Report
Variant secret sharing schemes deriving from Shamir's threshold secret sharing protocol are presented. Results include multi-secret sharing protocols using shares with $O(1)$ elements, independent of the number of secrets. The new schemes achieve a weaker notion of security (they're secure rather than strongly secure) but maintain a property called $K$-privacy (inspired by $k$-anonymity). $K$-privacy ensures that all secrets remain private with respect to a subset of the secret space, though the particular subset providing privacy may vary among adversaries that acquire distinct sub-threshold sets of shares. Depending on the number of secrets and the protocol details, secure $K$-private multi-secret sharing schemes may be ``almost'' strongly secure or may remain merely secure and $K$-private - a difference captured by the notion of $K$-security. Novel applications of the multi-secret sharing schemes are presented, realising a primitive called a switched threshold signature. Switched threshold signatures have the quirky property that aggregating a threshold number of signatures of one type (e.g. Pointcheval-Sanders signatures) ``switches'' the signatures into a master signature of a different type. Collectively these results may permit efficiencies within, e.g., threshold credential issuance protocols.
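
For context, here is a minimal Python sketch of the underlying Shamir threshold scheme from which these variants derive (the baseline only; the multi-secret and $K$-private constructions of the paper are not shown).

    import random

    def shamir_share(secret, t, n, p):
        """Split `secret` into n shares with threshold t over GF(p), p prime."""
        coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
        def f(x):                      # evaluate the random degree-(t-1) polynomial
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % p
            return acc
        return [(i, f(i)) for i in range(1, n + 1)]

    def shamir_reconstruct(shares, p):
        """Recover the secret from at least t shares by Lagrange interpolation at 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * (-xj)) % p
                    den = (den * (xi - xj)) % p
            secret = (secret + yi * num * pow(den, -1, p)) % p
        return secret

    # Example: 3-of-5 sharing of the secret 42 over GF(2**31 - 1).
    p = 2**31 - 1
    shares = shamir_share(42, t=3, n=5, p=p)
    assert shamir_reconstruct(shares[:3], p) == 42
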
Amir Jafari, Shahram Khazaei
ePrint Report
The information ratio of an access structure is an important parameter for quantifying the efficiency of the best secret sharing scheme (SSS) realizing it. The most common security notion is perfect security. The following relaxations, in increasing order of security, have been presented in the literature: quasi-perfect, almost-perfect, and statistical. Understanding the effect of relaxing the correctness and privacy requirements on the efficiency of SSSs is a long-standing open problem.
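
For reference, the standard definition in our notation: for a scheme realizing an access structure on a participant set $P$, with secret $S$ and share $S_p$ given to participant $p$, the information ratio of the scheme is

    $\sigma = \max_{p \in P} \frac{H(S_p)}{H(S)},$

and the information ratio of the access structure (under a given security notion) is the infimum of $\sigma$ over all schemes realizing it.
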

In this article, we introduce and study an extremely relaxed security notion, called partial security, for which it is only required that any qualified set gains strictly more information about the secret than any unqualified one. To compensate for the extreme imperfection, we quantify the efficiency of such schemes using a parameter called the partial information ratio. Despite this compensation, partial security turns out to be weaker than the weakest non-perfect security notion mentioned above, i.e., quasi-perfect security.

We present three main results in this paper. First, we prove that partial and perfect information ratios coincide for the class of linear SSSs. Consequently, for this class, information ratio is invariant with respect to all security notions. Second, by viewing a partial SSS as a wiretap channel, we prove that for the general (i.e., non-linear) class of SSSs, partial and statistical information ratios are equal. Consequently, for this class, information ratio is invariant with respect to all non-perfect security notions. Third, we show that partial and almost-perfect information ratios do not coincide for the class of mixed-linear schemes (i.e., schemes constructed by combining linear schemes with different underlying finite fields).

Our first result strengthens the previous decomposition theorems for constructing perfect linear schemes. Our second result leads to a very strong decomposition theorem for constructing general (i.e., non-linear) statistical schemes. Our third result provides a rare example of the effect of imperfection on the efficiency of SSSs for a certain class of schemes.
Asma Aloufi, Peizhao Hu, Yongsoo Song, and Kristin Lauter
ePrint Report
New cryptographic techniques such as homomorphic encryption (HE) allow computations to be outsourced to, and evaluated blindfolded in, a resourceful cloud. These computations often require private data owned by multiple participants who engage in the joint evaluation of some function. For example, Genome-Wide Association Studies (GWAS) are becoming feasible because of the recent proliferation of genome sequencing technology. Due to the sensitivity of genomic data, these data should be encrypted under different keys. However, supporting computation on ciphertexts encrypted under multiple keys is a non-trivial task. In this paper, we present a comprehensive survey of the different state-of-the-art cryptographic techniques and schemes that are commonly used. We review techniques and schemes including Attribute-Based Encryption (ABE), Proxy Re-Encryption (PRE), Threshold Homomorphic Encryption (ThHE), and Multi-Key Homomorphic Encryption (MKHE). We analyze them based on different system and security models and examine their complexities. We share lessons learned and draw observations for designing better schemes with reduced overheads.

19 April 2020

Tim Fritzmann, Georg Sigl, Johanna Sepúlveda
ePrint Report
Empowering electronic devices to support Post-Quantum Cryptography (PQC) is a challenging task. Compared with traditional cryptography, PQC introduces new mathematical elements and operations which are usually not easy to implement on standard CPU architectures. Especially for low-cost and resource-constrained devices, hardware acceleration is absolutely required. In addition, as the standardization process of PQC is still ongoing, a focus on maintaining crypto-agility is mandatory. To cope with such requirements, Hardware/Software Co-Design techniques have recently been used for developing complex and highly customized PQC solutions. However, while most previous works have developed loosely coupled PQC accelerators, the design of tightly coupled accelerators and Instruction Set Architecture (ISA) extensions for PQC has barely been explored. To this end, we present RISQ-V, an enhanced RISC-V architecture that integrates a set of powerful tightly coupled accelerators to speed up lattice-based PQC. RISQ-V efficiently reuses processor resources and reduces the number of memory accesses. This significantly increases performance while keeping the silicon area overhead low. We present three contributions. First, we propose a set of powerful hardware accelerators deeply integrated into the RISC-V pipeline. Second, we extended the RISC-V ISA with 28 new instructions to efficiently perform operations for lattice-based cryptography. Third, we implemented RISQ-V in ASIC technology and on FPGA. We evaluated the performance of NewHope, Kyber, and Saber on RISQ-V. Compared to the pure software implementation on RISC-V, our Co-Design implementations show speed-up factors of up to 10.5 for NewHope, 9.6 for Kyber, and 2.7 for Saber. For the ASIC implementation, the energy consumption was reduced by factors of up to 8.8 for NewHope, 7.7 for Kyber, and 2.1 for Saber. The cell count of the CPU increased by a factor of 1.6 compared to the original RISC-V design, which can be considered a moderate increase for the achieved performance gain.
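
As an illustration of the kind of arithmetic kernel such tightly coupled accelerators target, here is a plain Python sketch of schoolbook multiplication in $\mathbb{Z}_q[X]/(X^n + 1)$, the polynomial ring underlying NewHope, Kyber, and Saber; this is reference code of our own for exposition, and real implementations (and the paper's hardware) use far faster methods such as NTT-based multiplication.

    def polymul_negacyclic(a, b, q):
        """Schoolbook product of a and b (coefficient lists of equal length n)
        in Z_q[X] / (X^n + 1)."""
        n = len(a)
        c = [0] * n
        for i in range(n):
            for j in range(n):
                if i + j < n:
                    c[i + j] = (c[i + j] + a[i] * b[j]) % q
                else:
                    # X^n = -1, so the wrapped term picks up a sign flip.
                    c[i + j - n] = (c[i + j - n] - a[i] * b[j]) % q
        return c

    # Example with Kyber-like parameters: n = 256 coefficients modulo q = 3329.
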
Thomas Agrikola, Geoffroy Couteau, Yuval Ishai, Stanislaw Jarecki, Amit Sahai
ePrint Report
We initiate a systematic study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, "honey encryption" and steganography.
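
Informally, and in our own notation: a pseudorandom encoding for an efficiently samplable distribution $\mathcal{D}$ is a pair of efficient algorithms $(\mathsf{E}, \mathsf{D})$ such that $\mathsf{D}(\mathsf{E}(m)) = m$ holds (with overwhelming probability) for $m \leftarrow \mathcal{D}$, and the distribution $\{\mathsf{E}(m) : m \leftarrow \mathcal{D}\}$ is indistinguishable from uniformly random strings of the appropriate length, where the precise flavor of the indistinguishability requirement depends on the variant considered.
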

The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multi-party computation and questions in the domain of steganography.
Satō Shinichi
ePrint Report
This paper revisits the Abe-Okamoto signature scheme and presents a version augmented with modern best practices, with major influences taken from EdDSA. Implementation guidance is given on how to reuse existing Ed25519 code.
Okan Seker, Thomas Eisenbarth, Maciej Liskiewicz
ePrint Report
White-box cryptography attempts to protect cryptographic secrets in pure software implementations. Due to their high utility, white-box cryptosystems (WBC) are deployed even though their secure construction is not well understood. A major breakthrough in the generic cryptanalysis of WBC was Differential Computation Analysis (DCA), which requires minimal knowledge of the underlying white-box protection and also thwarts many obfuscation methods. To avert DCA, classic masking countermeasures, originally intended to protect against the closely related side-channel attacks, have been proposed for use in WBC. However, due to the controlled environment of WBCs, new algebraic attacks able to break all classic masking schemes have quickly been found. These algebraic DCA attacks break classic masking countermeasures efficiently, as they are independent of the masking order.

In this work, we propose a novel generic masking scheme that can resist both DCA and algebraic attacks. The proposed scheme extends the seminal work by Ishai et al., which is probing secure and thus resists DCA, to also resist algebraic attacks. To prove the security of our scheme, we demonstrate the connection between two main security notions in white-box cryptography: Side Channel Analysis (SCA) security and prediction security. Resistance of our masking scheme to DCA is proven for an arbitrary order of protection. Our masking scheme also resists algebraic attacks, which we show concretely for first- and second-order algebraic protection, and we show how it can be generalized to any order. Moreover, we present an extensive performance analysis and quantify the overhead of our scheme for a proof-of-concept protection of an AES implementation.
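
For background, here is a first-order instance of the Ishai et al. multiplication (AND) gadget that the scheme builds upon, sketched in Python; this is only the classic probing-secure gadget, not the paper's extension with protection against algebraic attacks, and the ordering of intermediate operations matters for real probing security even though it does not affect this functional sketch.

    import secrets

    def mask_bit(x):
        """Split bit x into two Boolean shares (x0, x1) with x0 ^ x1 == x."""
        r = secrets.randbits(1)
        return r, x ^ r

    def isw_and(a_shares, b_shares):
        """First-order ISW AND gadget: returns Boolean shares of (a AND b)."""
        a0, a1 = a_shares
        b0, b1 = b_shares
        r = secrets.randbits(1)                       # fresh randomness
        c0 = (a0 & b0) ^ r
        c1 = (a1 & b1) ^ r ^ (a0 & b1) ^ (a1 & b0)    # cross terms masked by r
        return c0, c1

    # Sanity check: unmasking the output yields a & b for every input pair.
    for a in (0, 1):
        for b in (0, 1):
            c0, c1 = isw_and(mask_bit(a), mask_bit(b))
            assert c0 ^ c1 == a & b
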