International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

29 November 2021

Sujoy Sinha Roy, Ahmet Can Mert, Aikata, Sunmin Kwon, Youngsam Shin, Donghoon Yoo
ePrint Report
Fully homomorphic encryption enables computation on encrypted data, and hence it has great potential for privacy-preserving outsourcing of computation. In this paper, we present a complete instruction-set processor architecture ‘Medha’ for accelerating the cloud-side operations of an RNS variant of the HEAAN homomorphic encryption scheme. Medha has been designed following a modular hardware design approach to attain a fast computation time for computationally expensive homomorphic operations on encrypted data. At every level of the implementation hierarchy, we explore possibilities for parallel processing. Starting from hardware-friendly parallel algorithms for the basic building blocks, we gradually build heavily parallel RNS polynomial arithmetic units. Next, many of these parallel units are interconnected so that their interconnections require a minimum number of nets, making the overall architecture placement-friendly on the implementation platform. As homomorphic encryption is computation- as well as data-centric, the speed of homomorphic evaluations depends greatly on the way the data variables are handled. For Medha, we take a memory-conservative design approach and eliminate all off-chip memory accesses during homomorphic evaluations.

Our instruction-set accelerator Medha is programmable and supports all homomorphic evaluation routines of the leveled fully-RNS HEAAN scheme. For a reasonably large parameter set with polynomial ring dimension $2^{14}$ and a 438-bit ciphertext coefficient modulus (corresponding to 128-bit security), we implemented Medha on a Xilinx Alveo U250 card. Medha achieves the fastest computation latency to date: it is almost 2.4× faster in latency, and also somewhat smaller in area, than a state-of-the-art reconfigurable hardware accelerator for the same parameters.
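To make the RNS idea mentioned above concrete, here is a small Python sketch of residue-number-system arithmetic: a large ciphertext modulus is split into co-prime word-sized moduli, multiplication is done channel-wise, and the result is recovered by the CRT. The moduli are toy values chosen for illustration; they are not Medha's parameters, and this says nothing about its hardware datapath.

```python
# Toy illustration of RNS (residue number system) arithmetic as used in RNS
# variants of HEAAN: a big coefficient modulus Q is the product of smaller
# pairwise co-prime moduli, and arithmetic mod Q is carried out independently
# in each residue channel.  Toy moduli, not the paper's parameters.
from math import prod

moduli = [1009, 1013, 1019, 1021]           # small co-prime "machine-word" moduli
Q = prod(moduli)                            # the composite ciphertext modulus

def to_rns(x):
    """Map an integer mod Q to its residue vector."""
    return [x % m for m in moduli]

def from_rns(residues):
    """CRT reconstruction of x mod Q from its residues."""
    x = 0
    for r, m in zip(residues, moduli):
        Mi = Q // m
        x = (x + r * Mi * pow(Mi, -1, m)) % Q
    return x

a, b = 123456789 % Q, 987654321 % Q
# Channel-wise multiplication: no big-integer arithmetic is ever needed.
prod_rns = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), moduli)]
assert from_rns(prod_rns) == (a * b) % Q
print("RNS product matches:", from_rns(prod_rns))
```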
Clémence Chevignard, Rémi Géraud-Stewart, Antoine Houssais, David Naccache, Edmond de Roffignac
ePrint Report
Consider some user buying software or hardware from a provider. The provider claims to have subjected this product to a number of tests, ensuring that the system operates nominally. How can the user check this claim without running all the tests anew?

The problem is similar to checking a mathematical conjecture. Many authors report having checked a conjecture $C(x)=\mbox{True}$ for all $x$ in some large set or interval $U$. How can mathematicians challenge this claim without performing all the expensive computations again?

This article describes a non-interactive protocol in which the prover provides (a digest of) the computational trace resulting from processing $x$, for randomly chosen $x \in U$. With appropriate care, this information can be used by the verifier to determine how likely it is that the prover actually checked $C(x)$ over $U$.

Unlike ``traditional'' interactive proof and probabilistically-checkable proof systems, the protocol is not limited to restricted complexity classes, nor does it require an expensive transformation of programs being executed into circuits or ad-hoc languages. The flip side is that it is restricted to checking assertions that we dub ``\emph{refutation-precious}'': expected to always hold true, and such that the benefit resulting from reporting a counterexample far outweighs the cost of computing $C(x)$ over all of $U$.
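As a rough illustration of the general idea only (the abstract does not fix the sampling or digesting details, so the SHA-256 digests, uniform sampling, and toy computation below are assumptions), a verifier can spot-check a prover's published trace digests as follows:

```python
# Toy sketch: the prover claims to have checked C(x) for all x in U and
# publishes digests of the computation traces for a random sample of x; the
# verifier recomputes a few of the sampled traces and compares digests.
# All concrete choices here (SHA-256, uniform sampling, trace format) are
# illustrative assumptions, not the protocol of the paper.
import hashlib, random

U = range(1, 10_000)

def C_with_trace(x):
    """Example conjecture check that also records a trace of its computation."""
    trace = []
    n = x
    while n != 1:                       # Collatz-style toy computation
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trace.append(n)
    return True, trace

def digest(trace):
    return hashlib.sha256(repr(trace).encode()).hexdigest()

# Prover: sample points and publish (x, digest-of-trace) pairs.
sampled = random.sample(list(U), k=32)
published = {x: digest(C_with_trace(x)[1]) for x in sampled}

# Verifier: recompute a small subset and compare digests.
for x in random.sample(sampled, k=4):
    ok, trace = C_with_trace(x)
    assert ok and digest(trace) == published[x], f"mismatch at x={x}"
print("spot checks passed")
```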
Tasopoulos George, Jinhui Li, Apostolos P. Fournaris, Raymond K. Zhao, Amin Sakzad, Ron Steinfeld
ePrint Report
Transport Layer Security (TLS) constitutes one of the most widely used protocols for securing Internet communication and has found broad acceptance in the Internet of Things (IoT) domain as well. As we progress towards a security environment resistant to quantum computer attacks, TLS needs to be transformed in order to support post-quantum cryptography schemes. However, post-quantum TLS is still not standardized, and its overall performance, especially on resource-constrained, IoT-capable embedded devices, is not well understood. In this paper, we evaluate the time, memory and energy requirements of a post-quantum variant of TLS version 1.3 (PQ TLS 1.3) by integrating the pqm4 library implementations of the NIST round 3 post-quantum algorithms Kyber, Saber, Dilithium and Falcon into the popular wolfSSL TLS 1.3 library. In particular, our experiments focus on low-end, resource-constrained embedded devices, represented by the ARM Cortex-M4 based NUCLEO-F439ZI (with hardware cryptographic accelerator) and NUCLEO-F429ZI (without hardware cryptographic accelerator) boards. These boards provide only a $180$ MHz clock rate, $2$ MB of Flash memory and $256$ KB of SRAM. To the authors' knowledge, this is the first thorough evaluation of the time delay, memory usage and energy consumption of PQ TLS 1.3 using the NIST round 3 finalist algorithms on resource-constrained embedded systems with and without hardware cryptographic acceleration. The results show that the post-quantum signatures Dilithium and Falcon and the post-quantum KEMs Kyber and Saber generally perform well in TLS 1.3 on embedded devices in terms of both TLS handshake time and energy consumption. There is no significant difference between the TLS handshake times of Kyber and Saber; however, the handshake time with Falcon is much lower than that with Dilithium. In addition, the hardware cryptographic accelerator for symmetric-key primitives improves TLS handshake time by about 6% on the client side and by as much as 19% on the server side at high security levels.
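For readers unfamiliar with KEM-based key exchange in TLS 1.3, the Python sketch below shows only the message-flow pattern (client key share, server encapsulation, shared-secret derivation). The "KEM" is an insecure hash-based stand-in invented purely for illustration; it is not Kyber or Saber and does not use the pqm4 or wolfSSL APIs.

```python
# Rough sketch of the KEM-based key-exchange pattern in post-quantum TLS 1.3:
# the client sends a KEM public key in its key share, the server encapsulates
# to it, and both sides derive the handshake secret from the shared secret.
# The toy "KEM" below is deliberately trivial and INSECURE; it only shows the
# message flow.
import os, hashlib

def kem_keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(b"pk" + sk).digest()
    return pk, sk

def kem_encaps(pk):
    ct = os.urandom(32)                        # "ciphertext" sent back to the client
    ss = hashlib.sha256(pk + ct).digest()      # shared secret (insecure toy!)
    return ct, ss

def kem_decaps(sk, ct):
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ct).digest()

# Handshake-style flow
client_pk, client_sk = kem_keygen()            # ClientHello: key_share = client_pk
ct, server_ss = kem_encaps(client_pk)          # ServerHello: key_share = ct
client_ss = kem_decaps(client_sk, ct)
assert client_ss == server_ss
handshake_secret = hashlib.sha256(b"hs" + client_ss).hexdigest()
print("both sides derived", handshake_secret[:16], "...")
```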
Jipeng Zhang, Junhao Huang, Zhe Liu, Sujoy Sinha Roy
ePrint Report
Saber is a module-lattice-based key encapsulation scheme that has been selected as a finalist in the NIST Post-Quantum Cryptography Standardization Project. As Saber computes on considerably large matrices and vectors of polynomials, its efficient implementation on memory-constrained IoT devices is very challenging. In this paper, we present an implementation of Saber with a minor tweak to the original Saber protocol for achieving reduced memory consumption and better performance. We call this tweaked implementation `Saber+'; the difference compared to Saber is that we use different generation methods for the public matrix \(\boldsymbol{A}\) and the secret vector \(\boldsymbol{s}\) for memory optimization. Our highly optimized software implementation of Saber+ on a memory-constrained RISC-V platform achieves a 48\% performance improvement compared with the best state-of-the-art memory-optimized implementation of the original Saber. Specifically, we present various memory and performance optimizations for Saber+ on a memory-constrained RISC-V microcontroller with merely 16KB of memory available. We utilize the Number Theoretic Transform (NTT) to speed up the polynomial multiplication in Saber+. To optimize cycle counts and memory consumption during the NTT, we carefully compare the efficiency of the complete and incomplete NTTs, with platform-specific optimizations. We implement 4-layer merging in the complete NTT and 3-layer merging in the 6-layer incomplete NTT. An improved on-the-fly generation strategy for the public matrix and secret vector in Saber+ results in a low memory footprint. Furthermore, by combining different optimization strategies, various time-memory trade-offs are explored. Our software implementation of Saber+ on the selected RISC-V core takes just 3,809K, 3,594K, and 3,193K clock cycles for key generation, encapsulation, and decapsulation, respectively, while consuming only 4.8KB of stack at most.
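As background for the NTT-based multiplication discussed above, here is a toy Python sketch of polynomial multiplication via a number-theoretic transform over a small NTT-friendly prime. The parameters are illustrative only; Saber's moduli are powers of two, so the paper's implementation works over a larger NTT-friendly prime and relies on layer merging and incomplete NTTs, none of which this naive transform shows.

```python
# Toy NTT-based polynomial multiplication over Z_p[x]/(x^n - 1), illustrating
# why NTTs speed up the polynomial products in Saber-like schemes.  Toy p and
# n; the transform below is the naive O(n^2) version for clarity, not a fast
# layer-merged butterfly implementation.
p = 257                        # prime with p - 1 divisible by n = 8
n = 8
g = 3                          # a generator of Z_p^*
w = pow(g, (p - 1) // n, p)    # primitive n-th root of unity mod p

def ntt(a, root):
    """Naive number-theoretic transform (clear, not fast)."""
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(A):
    n_inv = pow(n, -1, p)
    a = ntt(A, pow(w, -1, p))
    return [(x * n_inv) % p for x in a]

def poly_mul(a, b):
    """Cyclic convolution mod (x^n - 1, p) via pointwise products in the NTT domain."""
    A, B = ntt(a, w), ntt(b, w)
    return intt([(x * y) % p for x, y in zip(A, B)])

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 8, 0, 0, 0, 0]
print(poly_mul(a, b))   # [5, 16, 34, 60, 61, 52, 32, 0], the schoolbook product
```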
Ziaur Rahman, Xun Yi, Ibrahim Khalil, Andrei Kelarev
ePrint Report
The world has been experiencing a remarkable expansion of blockchain technology since it was first introduced as the basis of the emerging cryptocurrency Bitcoin. Blockchain is now regarded as a pervasive frame of reference across almost all research domains, ranging from virtual cash to agriculture, and from supply chains to the Internet of Things. The ability to maintain a self-administering ledger with genuine immutability makes blockchain appealing for the Internet of Things (IoT). As billions of IoT devices are now online in a distributed fashion, huge challenges and questions need to be addressed in pursuit of urgently needed solutions. The present paper is motivated by the aim of facilitating such efforts. Its contribution is to identify the trade-offs that the IoT ecosystem typically encounters when an unsuitable blockchain technology is chosen. Unlike a survey or review, the critical findings of this paper target specific security challenges of blockchain-IoT infrastructure, and they aim to direct developers and researchers in this domain towards sound combinations of blockchain-enabled IoT applications. In addition, the paper provides a deep insight into Ethereum, Hyperledger and IOTA, showing their limitations and prospects in terms of performance and scalability.
Ziaur Rahman, Ibrahim Khalil, Mousumi Sumi
ePrint Report
Several efforts have claimed lightweight block ciphers to be a suitable means of securing the Internet of Things, and they are now envisaged as a pervasive frame of reference for privacy preservation across smart and sensor-oriented appliances. Many approaches, however, trade away efficiency or the desired degree of security in favor of simplicity. Strengthening well-known symmetric-key block ciphers with either a chaos-motivated logistic map or elliptic curves has shown far-reaching potential for secure real-time communication. Popular features of logistic maps, such as unpredictability and randomness, can be exploited for dynamic key propagation, in combination with chaos-based scheduling techniques, to support data integrity. Because a single-bit alteration of the key produces a large deviation in the output, this approach also strengthens data confidentiality. However, it may increase time consumption, which makes instant data exchange between participating nodes challenging. Considering the latency required for both secure encryption and decryption, the proposed approach modifies the key-origination matrix along with the S-box. Our analysis indicates that the time required is proportional to the plaintext sent, while the plaintext is disproportionate to the probability of a given letter occurring in the message. In line with this, the work also examines how the apparent chaos accelerates the desired key initiation before message transmission.
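As a minimal sketch of the chaos-based ingredient (the logistic map), the following Python snippet derives key bytes from the map's orbit. The parameters and the byte-extraction rule are illustrative assumptions; this does not reproduce the paper's key-origination matrix or S-box modification and is not a secure key derivation function.

```python
# Minimal sketch of chaos-based key material from a logistic map
# x_{k+1} = r * x_k * (1 - x_k).  Illustrative parameters only; NOT a secure
# KDF and not the construction proposed in the paper.
def logistic_key_bytes(seed, r=3.99, n_bytes=16, burn_in=100):
    x = seed
    for _ in range(burn_in):              # discard transient iterations
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)    # quantize the chaotic orbit to a byte
    return bytes(out)

k1 = logistic_key_bytes(0.613435)
k2 = logistic_key_bytes(0.613436)         # tiny seed change -> very different key
print(k1.hex())
print(k2.hex())
```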
Mariana Botelho da Gama, John Cartlidge, Antigoni Polychroniadou, Nigel P. Smart, Younes Talibi Alaoui
ePrint Report
We examine bucket-based and volume-based algorithms for privacy-preserving asset trading in a financial dark pool. Our bucket-based algorithm places orders in quantised buckets, whereas the volume-based algorithm allows any volume size but requires more complex validation mechanisms. In all cases, we conclude that these algorithms are highly efficient and offer a practical solution to the commercial problem of preserving privacy of order information in a dark pool trading venue.
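A cleartext toy of the bucket-based idea is sketched below: order volumes are quantised into fixed-size buckets before matching. The paper evaluates such logic under privacy-preserving computation; this sketch ignores all privacy machinery and uses made-up volumes and an arbitrary bucket size.

```python
# Cleartext toy of the bucket-based idea: orders are quantised into fixed-size
# buckets and buckets are matched one-for-one, so only bucket counts (not
# exact volumes) are ever handled.  Illustration only; the paper runs the
# matching under secure computation.
BUCKET = 100                                       # shares per bucket (illustrative)

def to_buckets(volume):
    return volume // BUCKET                        # residual volume is left unmatched

buy_orders = [250, 480, 120]                       # volumes submitted by buyers
sell_orders = [300, 150, 700]                      # volumes submitted by sellers

buy_buckets = sum(to_buckets(v) for v in buy_orders)
sell_buckets = sum(to_buckets(v) for v in sell_orders)
matched = min(buy_buckets, sell_buckets)
print(f"matched {matched} buckets = {matched * BUCKET} shares")
```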
Huck Bennett, Atul Ganju, Pura Peetathawatchai, Noah Stephens-Davidowitz
ePrint Report
We study the computational problem of finding a shortest non-zero vector in a rotation of $\mathbb{Z}^n$, which we call $\mathbb{Z}$SVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for $\mathbb{Z}$SVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain special cases. However, despite all of this work, the fastest known algorithm that is proven to solve $\mathbb{Z}$SVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in $2^{n + o(n)}$ time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for $\mathbb{Z}$SVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, what consequences would follow if $\mathbb{Z}$SVP actually is hard? Our results are as follows.

1) We show that $\mathbb{Z}$SVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce $\mathbb{Z}$SVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of $\mathbb{Z}$SVP from SVP. As a consequence of this reduction, we obtain a $2^{0.802n}$-time algorithm for $\mathbb{Z}$SVP, i.e., a non-trivial speedup over the best known algorithm for SVP on general lattices.

2) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) $\mathbb{Z}$SVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of $\mathbb{Z}^n$ from either a lattice with all non-zero vectors longer than $\sqrt{n/\log n}$ or a lattice with smoothing parameter significantly smaller than the smoothing parameter of $\mathbb{Z}^n$. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that ``$\mathbb{Z}^n$ has the largest smoothing parameter.''

3) We show a distribution of bases $B$ for rotations of $\mathbb{Z}^n$ such that, if $\mathbb{Z}$SVP is hard for any input basis, then $\mathbb{Z}$SVP is hard on input $B$. This gives a satisfying theoretical resolution to the problem of sampling hard bases for $\mathbb{Z}^n$, which was studied by Blanks and Miller (PQCrypto, 2021). This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices (ia.cr/2021/1332), and they also used this to analyze the security of a public-key encryption scheme.)

4) We perform experiments to determine how practical basis reduction performs on different bases of $\mathbb{Z}^n$. These experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of reduction algorithms (i.e., larger block sizes) and study the ``provably hard'' distribution of bases described above. We also observe a threshold phenomenon in which ``basis reduction algorithms on $\mathbb{Z}^n$ nearly always find a shortest non-zero vector once they have found a vector with length less than $\sqrt{n}/2$,'' and we explore this further.
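As a small illustration of the basis-sampling setting in items 3 and 4 (not the provably hard distribution constructed in the paper), the Python sketch below scrambles the standard basis of $\mathbb{Z}^n$ with random elementary unimodular column operations, producing a basis whose vectors are much longer than the shortest lattice vectors of length 1 that basis reduction should recover.

```python
# Toy illustration of the basis-sampling question for Z^n: the columns of any
# unimodular matrix form a basis of the integer lattice Z^n, yet such a basis
# can look very "scrambled".  This naive randomisation is NOT the provably
# hard distribution from the paper; it only illustrates the experimental setting.
import random

def random_unimodular(n, steps=200):
    """Start from the identity and apply random elementary column operations."""
    B = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        c = random.choice([-1, 1])
        for row in B:
            row[j] += c * row[i]          # column_j += c * column_i (det stays +-1)
    return B

n = 8
B = random_unimodular(n)
lengths = sorted(sum(x * x for x in col) ** 0.5
                 for col in zip(*B))       # Euclidean lengths of the basis vectors
print("shortest basis vector length:", round(lengths[0], 2))
print("longest  basis vector length:", round(lengths[-1], 2))
# A basis-reduction algorithm run on B should recover vectors of length 1,
# since the lattice generated by B is Z^n itself.
```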
Chen Chen, Xiao Liang, Bogdan Carbunar, Radu Sion
ePrint Report
Data privacy is critical in instilling trust and empowering the societal pacts of modern technology-driven democracies. Unfortunately, it is under continuous attack by overreaching or outright oppressive governments, including some of the world's oldest democracies. Increasingly-intrusive anti-encryption laws severely limit the ability of standard encryption to protect privacy. New defense mechanisms are needed.

Plausible deniability (PD) is a powerful property, enabling users to hide the existence of sensitive information in a system under direct inspection by adversaries. Popular encrypted storage systems such as TrueCrypt and other research efforts have attempted to also provide plausible deniability. Unfortunately, these efforts have often operated under less well-defined assumptions and adversarial models. Careful analyses often uncover not only high overheads but also outright security compromise. Further, our understanding of adversaries, the underlying storage technologies, as well as the available plausible deniable solutions have evolved dramatically in the past two decades.

The main goal of this work is to systematize this knowledge. It aims to:
- identify key PD properties, requirements, and approaches;
- present a direly-needed unified framework for evaluating security and performance;
- explore the challenges arising from the critical interplay between PD and modern system layered stacks;
- propose a new "trace-oriented" PD paradigm, able to decouple security guarantees from the underlying systems and thus ensure a higher level of flexibility and security independent of the technology stack.

This work is meant also as a trusted guide for system and security practitioners around the major challenges in understanding, designing, and implementing plausible deniability into new or existing systems.
Damien Robissout, Lilian Bossuet, Amaury Habrard, Vincent Grosso
ePrint Report
The use of deep learning techniques to perform side-channel analysis has attracted the attention of many researchers, as they obtained good performances with them. Unfortunately, the understanding of the neural networks used to perform side-channel attacks is not very advanced yet. In this paper, we contribute to this direction by studying the impact of some particular deep learning techniques on side-channel attack problems. More precisely, we focus on three existing techniques not yet used in the side-channel context: batch normalization, dropout and weight decay. By adequately combining these techniques for our problem, we show that it is possible to improve the attack performance, i.e. the number of traces needed to recover the secret, by more than 55%. Additionally, they allow us to gain more than 34% in terms of training time. We also show that an architecture trained with such techniques is able to perform attacks efficiently even in the context of desynchronized traces.
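To show where the three techniques plug into a profiling model, here is a generic PyTorch sketch with batch normalization and dropout as layers and weight decay set on the optimizer. The layer sizes, trace length and hyperparameters are placeholders, not the architectures evaluated in the paper.

```python
# Generic sketch of where the three regularization techniques plug into a
# profiling model for side-channel analysis.  Layer sizes, trace length
# (700 samples) and hyperparameters are placeholders.
import torch
import torch.nn as nn

trace_len, n_classes = 700, 256            # e.g. one AES S-box output byte

model = nn.Sequential(
    nn.Linear(trace_len, 512),
    nn.BatchNorm1d(512),                   # batch normalization
    nn.ReLU(),
    nn.Dropout(p=0.3),                     # dropout
    nn.Linear(512, 256),
    nn.BatchNorm1d(256),
    nn.ReLU(),
    nn.Linear(256, n_classes),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             weight_decay=1e-4)   # L2 weight decay

# One dummy training step on random "traces" and labels, just to show the loop.
traces = torch.randn(64, trace_len)
labels = torch.randint(0, n_classes, (64,))
optimizer.zero_grad()
loss = criterion(model(traces), labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```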
Joachim Neu, Srivatsan Sridhar, Lei Yang, David Tse, Mohammad Alizadeh
ePrint Report
Satoshi Nakamoto's Proof-of-Work (PoW) longest chain (LC) protocol was a breakthrough for Internet-scale open-participation consensus. Many Proof-of-Stake (PoS) variants of Nakamoto's protocol such as Ouroboros or Snow White aim to preserve the advantages of LC by mimicking PoW LC closely, while mitigating downsides of PoW by using PoS for Sybil resistance. Previous works have proven these PoS LC protocols secure assuming all network messages are delivered within a bounded delay. However, this assumption is not compatible with PoS when considering bandwidth constraints in the underlying communication network. This is because PoS enables the adversary to reuse block production opportunities and spam the network with equivocating blocks, which is impossible in PoW. The bandwidth constraint necessitates that nodes choose carefully which blocks to spend their limited download budget on. We show that 'download along the longest header chain', a natural download rule for PoW LC, emulated by PoS variants, is insecure for PoS LC. Instead, we propose 'download towards the freshest block' and prove that PoS LC with this download rule is secure in bandwidth constrained networks. Our result can be viewed as a first step towards the co-design of consensus and network layer protocols.
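The difference between the two download rules can be illustrated with a toy Python comparison over a handful of candidate blocks; this is a simplification for intuition only, not the model or proof from the paper.

```python
# Toy contrast of the two download rules, given a set of candidate blocks a
# bandwidth-limited node could fetch next.  Illustration only; block heights
# and slots are made up.
blocks = [
    # (block id, height of its header chain, production slot / "freshness")
    ("A", 120, 500),
    ("B", 118, 905),   # freshest block, on a slightly shorter header chain
    ("C", 120, 480),   # equivocation spam reusing an old production opportunity
]

def longest_header_chain_rule(candidates):
    return max(candidates, key=lambda b: b[1])          # shown insecure for PoS LC

def freshest_block_rule(candidates):
    return max(candidates, key=lambda b: b[2])          # rule proposed in the paper

print("longest-header-chain rule downloads:", longest_header_chain_rule(blocks)[0])
print("freshest-block rule downloads:      ", freshest_block_rule(blocks)[0])
```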
Kamilla Nazirkhanova, Joachim Neu, David Tse
ePrint Report
The ability to verifiably retrieve transaction or state data stored off-chain is crucial to blockchain scaling techniques such as rollups or sharding. We formalize the problem and design a storage- and communication-efficient protocol using linear erasure-correcting codes and homomorphic vector commitments. Motivated by application requirements for rollups, our solution departs from earlier Verifiable Information Dispersal schemes in that we do not require comprehensive termination properties or retrievability from any but only from some known sufficiently large set of storage nodes. Compared to Data Availability Oracles, under no circumstance do we fall back to returning empty blocks. Distributing a file of 28.8 MB among 900 storage nodes (up to 300 of which may be adversarial) requires in total approx. 95 MB of communication and storage and approx. 30 seconds of cryptographic computation on a single-threaded consumer-grade laptop computer. Our solution requires no modification to on-chain contracts of Validium rollups such as StarkWare's StarkEx. Additionally, it provides privacy of the dispersed data against honest-but-curious storage nodes.
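As a toy illustration of erasure-coded dispersal (not the paper's protocol, which additionally uses homomorphic vector commitments and different parameters), the Python sketch below encodes k field elements as evaluations of a degree-(k-1) polynomial handed to n storage nodes, and reconstructs from any k chunks by Lagrange interpolation.

```python
# Toy Reed-Solomon-style dispersal: k data symbols determine a degree-(k-1)
# polynomial over a prime field; each of n storage nodes gets one evaluation,
# and any k of the n chunks suffice to reconstruct.  Commitments and the
# paper's concrete parameters are omitted.
p = 2**31 - 1                 # prime field (toy choice)
k, n = 3, 9                   # k data symbols dispersed to n storage nodes

def lagrange_eval(points, x):
    """Evaluate at x the unique degree-(k-1) polynomial through the given points (mod p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

data = [42, 7, 1000003]                       # k field elements of the file
src = list(zip(range(1, k + 1), data))        # systematic chunks at x = 1..k
chunks = src + [(x, lagrange_eval(src, x)) for x in range(k + 1, n + 1)]

# Any k chunks (here: three arbitrary survivors) recover the data.
survivors = [chunks[4], chunks[8], chunks[1]]
recovered = [lagrange_eval(survivors, x) for x in range(1, k + 1)]
assert recovered == data
print("recovered:", recovered)
```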

28 November 2021

Beersheba, Israel, 30 June - 1 July 2022
Event Calendar
Event date: 30 June to 1 July 2022
Submission deadline: 7 February 2022
Notification: 14 March 2022
Bristol, United Kingdom, 31 January - 4 February 2022
School
Event date: 31 January to 4 February 2022

27 November 2021

31 January 2023
Event Calendar
Event date: 31 January 2023
Submission deadline: 30 April 2022
Notification: 31 July 2022
Virtual event, Anywhere on Earth, 10 December - 11 December 2021
Event Calendar
Event date: 10 December to 11 December 2021
Nagasaki, Japan, 30 May -
Event Calendar
Event date: 30 May to
Submission deadline: 8 January 2022
Notification: 22 February 2022
Santa Barbara, USA, 13 August - 18 August 2022
CRYPTO
Event date: 13 August to 18 August 2022
Indian Statistical Institute, Kolkata
Job Posting
The Indian Statistical Institute invites applications from duly qualified candidates for full-time faculty positions at the level of Assistant Professor and Associate Professor, to be placed at the R. C. Bose Centre for Cryptology and Security of the Institute in Kolkata. Candidates should have a strong research background in Cryptology and Security (preferably in Cybersecurity or IoT). For details please visit https://www.isical.ac.in/sites/default/files/jobs/rcbccs_advt_2022.pdf

Closing date for applications:

Contact: rcbose@isical.ac.in

More information: https://www.isical.ac.in/sites/default/files/jobs/rcbccs_advt_2022.pdf

Ruhr-Universität Bochum
Job Posting
The newly established Chair of Information Security at RUB has multiple open positions for PhD students and postdoctoral researchers in the area of system security, with a focus on (but not limited to) the following topics:

- Blockchain security and privacy: we explore how to improve the security and privacy of cryptocurrencies and modern blockchain platforms while enhancing their performance and scalability.
- Platform security: we explore how to make use of hardware support to improve the security and privacy of platforms.
- ML security and privacy: we investigate how we can improve the security of machine learning algorithms and how to securely use machine learning to secure existing platforms.

Are you excited by opportunities to work on any of these topics? Do you have a solid background in blockchain technologies, machine learning techniques, or security/privacy concepts? Are you excited about building highly performant secure systems? If so, we'd like to hear from you. If you are interested in applying, please send an email to Prof. Dr. Karame (ghassan.karame@rub.de) with your current CV and a description of why you think you are a good fit.

Closing date for applications:

Contact: Prof. Dr. Ghassan Karame
