International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

30 November 2021

École polytechnique fédérale de Lausanne (EPFL)
Job Posting
EPFL invites applications for postdoctoral positions in the Theory Group. Applications will be reviewed by theory faculty (as listed on EPFL's theory website at https://theory.epfl.ch/). Faculty may reach out to applicants on a rolling basis, though we encourage inquiries to be made as soon as possible.

The application must include the following materials:
  • Cover letter (identify one or more faculty of interest and provide availability).
  • CV (including a ranked list of at least three writers for letters of reference).
  • Research statement (covering research interests, past research focus, and future research directions).
Applications must be submitted through the application system at the link below; your letter writers may send recommendation letters directly to tcs-postdoc@groupes.epfl.ch.

EPFL is located in Lausanne (Switzerland) and ranks among the world’s top scientific universities. In French-speaking Lausanne, English is generally well spoken and is the main language at EPFL. Postdoctoral positions come with a competitive salary for 1 year (83'600 CHF with yearly increments), renewable up to a maximum of 4 years.

Closing date for applications:

Contact: tcs-postdoc@groupes.epfl.ch

More information: https://recruiting.epfl.ch/Vacancies/2123/Description/2

University of Stuttgart, Institute of Information Security
Job Posting
The Institute of Information Security at the University of Stuttgart offers

fully-funded Postdoc and PhD positions in formal verification.

Successful candidates are expected to carry out research on tool-supported formal verification methods for security-critical systems and security protocols in our new REPROSEC initiative (https://reprosec.org/). See, e.g., our work at ACM CCS 2021 and EuroS&P 2021 on DY*.

The positions are available immediately with an internationally competitive salary (German public salary scale TV-L E13 or TV-L E14, depending on the candidate's qualifications, ranging from about 4,000 to 6,200 Euro gross monthly salary). The employment periods are between one and six years, following the German Wissenschaftszeitvertragsgesetz (WissZeitVG).

The Institute of Information Security offers a creative international environment for top-level international research in Germany's high-tech region.

You should have a Master's degree or a Ph.D. (or should be very close to completion thereof) in Computer Science, Mathematics, Cyber Security, or a related field. We value excellent analytical skills and

  • solid knowledge of logic, proofs and/or formal verification techniques (Theorem Proving, Type Checking, etc.), and
  • solid programming experience.
Knowledge of cryptography/security is not required but is a plus. Knowledge of German is not required.

The deadline for applications is December 12th, 2021. Late applications will be considered until the positions are filled.

See https://www.sec.uni-stuttgart.de/institute/job-openings/ for the official job announcement and details of how to apply.

Closing date for applications:

Contact: Prof. Dr. Ralf Küsters, Institute of Information Security, University of Stuttgart, Germany

More information: https://www.sec.uni-stuttgart.de/institute/job-openings/


29 November 2021

Sebastian Paul, Patrik Scheible, Friedrich Wiemer
ePrint Report
The threat of a cryptographically relevant quantum computer contributes to an increasing interest in the field of post-quantum cryptography (PQC). While existing research efforts have addressed the integration of PQC into the Transport Layer Security (TLS) protocol, industrial communication protocols have so far been neglected. Since industrial cyber-physical systems (CPS) are typically deployed for decades, protection against such long-term threats is needed.

In this work, we propose two novel solutions for the integration of post-quantum (PQ) primitives (digital signatures and key establishment) into the industrial protocol Open Platform Communications Unified Architecture (OPC UA): a hybrid solution combining conventional cryptography with PQC, and a solution based solely on PQC. Both approaches provide mutual authentication between client and server and are realized with certificates fully compliant with the X.509 standard. We implement the two solutions and measure and evaluate their performance across three security levels. All selected algorithms (Kyber, Dilithium, and Falcon) are candidates for standardization by the National Institute of Standards and Technology (NIST). We show that Falcon is a suitable option, especially when using the floating-point hardware provided by our ARM-based evaluation platform. Our proposed hybrid solution provides PQ security for early adopters but comes with additional performance and communication requirements. Our PQC-only solution shows superior handshake performance across all evaluated security levels compared to conventional OPC UA, but at the cost of increased handshake sizes.
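The hybrid construction's key idea, deriving one session key that stays secure as long as either component does, can be sketched with a generic concatenate-then-KDF combiner. A minimal sketch, assuming both shared secrets are already available as byte strings; this is a common generic combiner, not necessarily the paper's exact OPC UA integration:

```python
import hashlib

def hybrid_combine(ecdhe_secret: bytes, kyber_secret: bytes,
                   context: bytes = b"hybrid-handshake") -> bytes:
    """Derive one session key from a classical and a PQ shared secret.

    An attacker must break both ECDHE and Kyber to learn the output."""
    # Length-prefix each input so the concatenation cannot be ambiguous.
    def lp(b: bytes) -> bytes:
        return len(b).to_bytes(2, "big") + b

    return hashlib.sha3_256(lp(ecdhe_secret) + lp(kyber_secret)
                            + lp(context)).digest()

# Toy usage with placeholder secrets (real ones come from the handshakes):
session_key = hybrid_combine(b"\x01" * 32, b"\x02" * 32)
```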

In addition to our performance evaluation, we provide a proof of security in the symbolic model for our two PQC-based variants of OPC UA. For this proof, we use the cryptographic protocol verifier ProVerif and formally verify the confidentiality and authentication properties of our quantum-resistant variants.
Andrew Morgan, Rafael Pass
ePrint Report
We consider the feasibility of non-interactive secure two-party computation (NISC) in the plain model satisfying the notion of superpolynomial-time simulation (SPS). While stand-alone secure SPS-NISC protocols are known from standard assumptions (Badrinarayanan et al., Asiacrypt 2017), it has remained an open problem to construct a concurrently composable SPS-NISC. Prior to our work, the best protocols require 5 rounds (Garg et al., Eurocrypt 2017), or 3 simultaneous-message rounds (Badrinarayanan et al., TCC 2017).

In this work, we demonstrate the first concurrently composable SPS-NISC. Our construction assumes the existence of:
  • a non-interactive (weakly) CCA-secure commitment, and
  • a stand-alone secure SPS-NISC with subexponential security,
and satisfies the notion of "angel-based" UC security (i.e., UC with a superpolynomial-time helper) with perfect correctness.

We additionally demonstrate that both of the primitives we use (albeit only with polynomial security) are necessary for such concurrently composable SPS-NISC with perfect correctness. As such, our work identifies essentially necessary and sufficient primitives for concurrently composable SPS-NISC with perfect correctness in the plain model.
Orr Dunkelman, Nathan Keller, Eyal Ronen, Adi Shamir
ePrint Report
One of the most celebrated and useful cryptanalytic algorithms is Hellman's time/memory tradeoff (and its Rainbow Table variant), which can be used to invert random-looking functions on $N$ possible values with time and space complexities satisfying $TM^2=N^2$. In this paper we develop new upper bounds on their performance in the quantum setting. As inversion is a search problem, one can always apply the standard Grover's algorithm to it, but that algorithm does not benefit from the possible availability of a large memory in which one can store auxiliary advice obtained during a free preprocessing stage. In fact, at FOCS'20 it was rigorously shown that for memory size bounded by $M \leq O(\sqrt{N})$, even quantum advice cannot yield an attack that is better than Grover's algorithm.

Our main result complements this lower bound by showing that in the standard Quantum Accessible Classical Memory (QACM) model of computation, we can improve Hellman's tradeoff curve to $T^{4/3}M^2=N^2$. When we generalize the cryptanalytic problem to a time/memory/data tradeoff attack (in which one has to invert $f$ for at least one of $D$ given values), we get the generalized curve $T^{4/3}M^2D^2=N^2$. A typical point on this curve is $D=N^{0.2}$, $M=N^{0.6}$, and $T=N^{0.3}$, whose time is strictly lower than both Grover's algorithm (which requires $T=N^{0.4}$ in this generalized search variant) and the classical Hellman algorithm (which requires $T=N^{0.4}$ for these $D$ and $M$).
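The exponent arithmetic behind the quoted sample point is easy to verify. A small sanity check (exponents are taken base $N$, following the abstract):

```python
# Exponents (base N) for the sample point on the quantum tradeoff curve.
D, M, T = 0.2, 0.6, 0.3

# Generalized quantum curve: T^(4/3) * M^2 * D^2 = N^2
assert abs((4 / 3) * T + 2 * M + 2 * D - 2) < 1e-9

# Grover on the generalized search variant needs T = sqrt(N/D) = N^0.4.
grover_T = (1 - D) / 2
# Classical Hellman curve T*M^2*D^2 = N^2 gives T = N^0.4 for these D, M.
hellman_T = 2 - 2 * M - 2 * D

assert abs(grover_T - 0.4) < 1e-9 and abs(hellman_T - 0.4) < 1e-9
assert T < grover_T   # the quantum curve beats both at this point
```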
Shiyao Chen, Yanhong Fan, Ling Sun, Yong Fu, Haibo Zhou, Yongqing Li, Meiqin Wang, Weijia Wang, Chun Guo
ePrint Report
We revisit the design of AND-RX block ciphers, that is, designs assembled from the most fundamental binary operations (AND, rotation, and XOR) that do not rely on existing units. Arguably the most popular representative is the NSA cipher \texttt{SIMON}, which remains one of the most efficient designs but suffers from difficulty in security evaluation.

As our main contribution, we propose \texttt{SAND}, a new family of lightweight AND-RX block ciphers. To overcome the difficulty regarding security evaluation, \texttt{SAND} follows a novel design approach, the core idea of which is to restrain the AND-RX operations to be within nibbles. By this, \texttt{SAND} admits an equivalent representation based on a $4\times8$ \textit{synthetic S-box} ($SSb$), which enables the use of classical S-box-based security evaluation approaches. Consequently, for all versions of \texttt{SAND}, (a) we evaluate security bounds with respect to differential and linear attacks, in both single-key and related-key scenarios, and (b) we evaluate security against impossible-differential and zero-correlation linear attacks.
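The nibble-confinement idea is easy to illustrate: because every AND-RX operation stays inside a fixed nibble group, the whole map can be tabulated once and then analyzed as an ordinary S-box. A toy sketch (the nibble function below is invented for illustration and is not SAND's actual round operation):

```python
def rotl4(x: int, r: int) -> int:
    """Rotate a 4-bit nibble left by r positions."""
    return ((x << r) | (x >> (4 - r))) & 0xF

def toy_and_rx(x: int, y: int) -> int:
    """A made-up AND-RX map confined to one nibble pair (not SAND's)."""
    return (x & rotl4(y, 1)) ^ rotl4(x, 2)

# Because all operations stay inside the nibble pair, the whole map can
# be precomputed as a lookup table -- the "synthetic S-box" viewpoint,
# which makes classical S-box-based security evaluation applicable.
ssb = [toy_and_rx(x, y) for x in range(16) for y in range(16)]
assert all(toy_and_rx(i >> 4, i & 0xF) == ssb[i] for i in range(256))
```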

This better understanding of the security enables the use of a relatively simple key schedule, which makes the round-based ASIC hardware implementation of \texttt{SAND} one of the state-of-the-art lightweight Feistel ciphers. As for software performance, thanks to its natural bitslice structure, \texttt{SAND} reaches the same performance level as \texttt{SIMON} and is among the most software-efficient block ciphers.
Kaiyi Zhang, Hongrui Cui, Yu Yu
ePrint Report
With the growing adoption of facial recognition worldwide as a popular authentication method, there is increasing concern about the invasion of personal privacy due to the lifetime irrevocability of facial features. In principle, {\it fuzzy extractors} enable biometric-based authentication while preserving the privacy of biometric templates. Nevertheless, to the best of our knowledge, most existing fuzzy extractors handle binary vectors with Hamming distance, and no explicit construction is known for facial recognition applications where the $\ell_2$-distance of real vectors is considered. In this paper, we utilize the dense packing feature of certain lattices (e.g., $\rm E_8$ and Leech) to design a family of {\it lattice-based} fuzzy extractors that integrates well with existing neural-network-based biometric identification schemes. We instantiate and implement the generic construction and conduct experiments on publicly available datasets. Our results confirm the feasibility of facial template protection via fuzzy extractors.
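To illustrate the underlying code-offset mechanism, here is a toy sketch with the integer lattice Z^n standing in for the dense lattices (E8, Leech) used in the paper; the scaling factor and the use of SHA-256 are illustrative choices, not the paper's construction:

```python
import hashlib

SCALE = 4.0  # illustrative quantization scale; tune to the noise level

def gen(w):
    """Enroll: return (key, helper). Helper hides w's offset to Z^n."""
    lattice_pt = [round(SCALE * x) for x in w]               # decode to Z^n
    helper = [SCALE * x - c for x, c in zip(w, lattice_pt)]  # offset
    key = hashlib.sha256(bytes(c % 256 for c in lattice_pt)).digest()
    return key, helper

def rep(w_noisy, helper):
    """Reproduce: succeeds if the noise keeps w' in the same Voronoi cell."""
    lattice_pt = [round(SCALE * x - h) for x, h in zip(w_noisy, helper)]
    return hashlib.sha256(bytes(c % 256 for c in lattice_pt)).digest()

w = [0.12, -0.51, 0.33, 0.89]            # toy L2-normalized feature vector
key, helper = gen(w)
w_noisy = [x + 0.05 for x in w]          # small L2 perturbation
assert rep(w_noisy, helper) == key       # same key despite the noise
```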
Chitchanok Chuengsatiansup, Andrew Feutrill, Rui Qi Sim, Yuval Yarom
ePrint Report
The seminal work of Heninger and Shacham (Crypto 2009) demonstrated a method for reconstructing secret RSA keys from partial information about the key components. In this paper we further investigate this approach, applying it to a different context that appears in some side-channel attacks. We assume a fixed-window exponentiation algorithm that leaks the equivalence between digits without leaking the values of the digits themselves.

We explain how to exploit the side-channel information with the Heninger-Shacham algorithm. To analyse the complexity of the approach, we model the attack as a Markov process and experimentally validate the accuracy of the model. Our model shows that the attack is feasible in the commonly used case where the window size is 5.
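The leakage model can be made concrete with plain fixed-window digit extraction: the attacker learns which digit positions hold equal values, but not the values themselves. A brief illustrative sketch (the partition below is what a cache-style side channel on table lookups would reveal):

```python
def window_digits(d: int, w: int = 5):
    """Split exponent d into w-bit digits, most significant first."""
    digits = []
    while d:
        digits.append(d & ((1 << w) - 1))
        d >>= w
    return digits[::-1]

def equivalence_leak(digits):
    """What the side channel reveals: which positions share a digit value.

    Positions are grouped by digit, but the digit values stay hidden."""
    classes = {}
    for pos, dig in enumerate(digits):
        classes.setdefault(dig, []).append(pos)
    return sorted(classes.values())

d = 0b10110_00011_10110_00001  # toy 20-bit "private exponent"
digs = window_digits(d, 5)     # [0b10110, 0b00011, 0b10110, 0b00001]
print(equivalence_leak(digs))  # [[0, 2], [1], [3]]: positions 0, 2 collide
```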
Marco Baldi, Alessandro Barenghi, Franco Chiaraluce, Gerardo Pelosi, Paolo Santini
ePrint Report
Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) codes are receiving increasing attention for their advantages in the context of post-quantum asymmetric cryptography based on codes. However, a fundamentally open question concerns modeling the performance of their decoders in the region of a low decoding failure rate (DFR). We provide two approaches for bounding the performance of these decoders, and study their asymptotic behavior. We first consider the well-known Maximum Likelihood (ML) decoder, which achieves optimal performance and thus provides a lower bound on the performance of any sub-optimal decoder. We provide lower and upper bounds on the performance of ML decoding of QC-MDPC codes and show that the DFR of the ML decoder decays polynomially in the QC-MDPC code length when all other parameters are fixed. Secondly, we analyze some hard to decode error patterns for Bit-Flipping (BF) decoding algorithms, from which we derive some lower bounds on the DFR of BF decoders applied to QC-MDPC codes.
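For intuition, the BF iteration analyzed here can be sketched in a few lines: repeatedly flip the bits involved in the largest number of unsatisfied parity checks. A toy dense-matrix sketch (real QC-MDPC decoders exploit the quasi-cyclic structure and use more refined thresholds):

```python
def bit_flip_decode(H, syndrome, max_iters=20):
    """Toy bit-flipping decoder; H is a list of parity-check rows (0/1).

    Flips every bit whose number of unsatisfied checks reaches the
    current maximum; returns the error estimate or None on failure."""
    n = len(H[0])
    e = [0] * n
    s = list(syndrome)
    for _ in range(max_iters):
        if not any(s):
            return e
        # upc[j] = number of unsatisfied parity checks touching bit j
        upc = [sum(H[i][j] for i in range(len(H)) if s[i]) for j in range(n)]
        thr = max(upc)
        for j in range(n):
            if upc[j] == thr:
                e[j] ^= 1
                for i in range(len(H)):   # update syndrome incrementally
                    s[i] ^= H[i][j]
    return None  # decoding failure -- the event whose rate (DFR) is bounded

# Toy parity-check matrix and a weight-1 error:
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
err = [0, 1, 0, 0, 0, 0]
syn = [sum(H[i][j] * err[j] for j in range(6)) % 2 for i in range(3)]
assert bit_flip_decode(H, syn) == err
```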
Raghvendra Rohit, Santanu Sarkar
ePrint Report
At ToSC 2021, Rohit \textit{et al.} presented the first distinguishing and key recovery attacks on 7-round Ascon without violating the designers' security claims of the nonce-respecting setting and the data limit of $2^{64}$ blocks per key. So far, these are the best attacks on 7-round Ascon. However, the distinguishers require an impractical $2^{60}$ data complexity, while the data complexity of the key recovery attacks exactly equals $2^{64}$. Whether there exist practical distinguishers and key recovery attacks (with data complexity below $2^{64}$) on 7-round Ascon has remained an open problem.

In this work, we give positive answers to these questions by providing a comprehensive security analysis of Ascon in the weak-key setting. Our first major result is a pair of 7-round cube distinguishers with complexities $2^{46}$ and $2^{33}$, which work for $2^{82}$ and $2^{63}$ keys, respectively. Notably, we show that such weak keys exist for any choice (out of 64) of 46 and 33 specifically chosen nonce variables. In addition, we improve the data complexities of existing distinguishers for 5, 6 and 7 rounds by factors of $2^{8}$, $2^{16}$ and $2^{27}$, respectively. Our second contribution is a new theoretical framework for weak keys of Ascon based solely on the algebraic degree. Based on our construction, we identify $2^{127.99}$, $2^{127.97}$ and $2^{116.34}$ weak keys (out of $2^{128}$) for 5, 6 and 7 rounds, respectively. Next, we present two key recovery attacks on 7 rounds with different attack complexities. The best attack can recover the secret key with $2^{63}$ data, $2^{69}$ bits of memory and $2^{115.2}$ time. Our attacks are far from threatening the security of the full 12-round Ascon, but we expect that they provide new insights into Ascon's security.
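The distinguisher itself is simple to state in code: XOR an output bit over all assignments of the chosen cube variables and test whether the sum vanishes. A generic sketch with a made-up low-degree stand-in for round-reduced Ascon (a real test would use 33 or 46 nonce variables, per the abstract):

```python
from itertools import product

def cube_sum(cipher, key, cube_positions):
    """XOR of an output bit over all 2^|cube| assignments of the cube
    nonce variables (non-cube nonce bits are fixed to zero)."""
    acc = 0
    for assignment in product([0, 1], repeat=len(cube_positions)):
        nonce = 0
        for pos, bit in zip(cube_positions, assignment):
            nonce |= bit << pos
        acc ^= cipher(key, nonce) & 1
    return acc

# Made-up stand-in of algebraic degree 2 in the nonce, so every
# 3-dimensional cube sums to 0 with certainty, whereas a random
# function would give 0 only with probability 1/2:
def toy_cipher(key, nonce):
    x = [(nonce >> i) & 1 for i in range(8)]
    return (x[0] & x[1]) ^ (x[2] & x[3]) ^ (key & 1)

assert cube_sum(toy_cipher, key=1, cube_positions=[0, 1, 2]) == 0
```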
Sujoy Sinha Roy, Ahmet Can Mert, Aikata, Sunmin Kwon, Youngsam Shin, Donghoon Yoo
ePrint Report
Fully homomorphic encryption enables computation on encrypted data and hence has great potential for privacy-preserving outsourcing of computations. In this paper, we present ‘Medha’, a complete instruction-set processor architecture for accelerating the cloud-side operations of an RNS variant of the HEAAN homomorphic encryption scheme. Medha has been designed following a modular hardware design approach to attain fast computation times for the computationally expensive homomorphic operations on encrypted data. At every level of the implementation hierarchy, we explore possibilities for parallel processing. Starting from hardware-friendly parallel algorithms for the basic building blocks, we gradually build heavily parallel RNS polynomial arithmetic units. Next, many of these parallel units are interconnected so that their interconnections require a minimal number of nets, making the overall architecture placement-friendly on the implementation platform. As homomorphic encryption is computation- as well as data-centric, the speed of homomorphic evaluations depends greatly on the way data variables are handled. For Medha, we take a memory-conservative design approach and eliminate all off-chip memory accesses during homomorphic evaluations.

Our instruction-set accelerator Medha is programmable and supports all homomorphic evaluation routines of the leveled full-RNS HEAAN scheme. For a reasonably large parameter set with polynomial ring dimension $2^{14}$ and a 438-bit ciphertext coefficient modulus (corresponding to 128-bit security), we implemented Medha on a Xilinx Alveo U250 card. Medha achieves the fastest computation latency to date: it is almost 2.4× faster in latency, and somewhat smaller in area, than a state-of-the-art reconfigurable hardware accelerator for the same parameters.
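The memory pressure that motivates the memory-conservative design is visible from the quoted parameters alone. A back-of-the-envelope sketch (uncompressed sizes; the actual RNS representation and on-chip layout differ):

```python
# Back-of-the-envelope memory footprint for the quoted parameter set:
# ring dimension n = 2^14, ciphertext coefficient modulus of 438 bits.
n = 2 ** 14
log_q = 438

poly_bytes = n * log_q / 8    # one polynomial, uncompressed
ct_bytes = 2 * poly_bytes     # a ciphertext holds two polynomials

print(f"one polynomial : {poly_bytes / 2**20:.2f} MiB")  # ~0.86 MiB
print(f"one ciphertext : {ct_bytes / 2**20:.2f} MiB")    # ~1.71 MiB
```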
Clémence Chevignard, Rémi Géraud-Stewart, Antoine Houssais, David Naccache, Edmond de Roffignac
ePrint Report
Consider some user buying software or hardware from a provider. The provider claims to have subjected this product to a number of tests, ensuring that the system operates nominally. How can the user check this claim without running all the tests anew?

The problem is similar to checking a mathematical conjecture. Many authors report having checked a conjecture $C(x)=\mbox{True}$ for all $x$ in some large set or interval $U$. How can mathematicians challenge this claim without performing all the expensive computations again?

This article describes a non-interactive protocol in which the prover provides (a digest of) the computational trace resulting from processing $x$, for randomly chosen $x \in U$. With appropriate care, this information can be used by the verifier to determine how likely it is that the prover actually checked $C(x)$ over $U$.
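The prover's side of such a protocol can be sketched directly: derive the sample points from a public seed and fold each computational trace into a digest that the verifier can spot-check. A minimal sketch under illustrative choices (a toy Collatz-style conjecture, seed-derived sampling, SHA-256 folding), not the paper's exact construction:

```python
import hashlib, random

def check_with_trace(x):
    """Toy conjecture C(x): the Collatz orbit of x reaches 1.

    Returns (verdict, trace of intermediate values)."""
    trace = [x]
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        trace.append(x)
    return True, trace

def prover_digest(seed: bytes, universe_size: int, samples: int) -> bytes:
    """Digest of the traces at seed-derived sample points of U."""
    rng = random.Random(seed)            # verifier can re-derive the points
    h = hashlib.sha256()
    for _ in range(samples):
        x = rng.randrange(2, universe_size)
        verdict, trace = check_with_trace(x)
        assert verdict                   # a counterexample would be reported
        h.update(repr((x, trace)).encode())
    return h.digest()

# The verifier re-runs a few of the same points and compares digests.
print(prover_digest(b"public-seed", universe_size=10**6, samples=16).hex())
```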

Unlike ``traditional'' interactive proof and probabilistically-checkable proof systems, the protocol is not limited to restricted complexity classes, nor does it require an expensive transformation of programs being executed into circuits or ad-hoc languages. The flip side is that it is restricted to checking assertions that we dub ``\emph{refutation-precious}'': expected to always hold true, and such that the benefit resulting from reporting a counterexample far outweighs the cost of computing $C(x)$ over all of $U$.
George Tasopoulos, Jinhui Li, Apostolos P. Fournaris, Raymond K. Zhao, Amin Sakzad, Ron Steinfeld
ePrint Report
Transport Layer Security (TLS) constitutes one of the most widely used protocols for securing Internet communication and has found broad acceptance in the Internet of Things (IoT) domain as well. As we progress towards a security environment resistant to quantum computer attacks, TLS needs to be transformed to support post-quantum cryptography schemes. However, post-quantum TLS is not yet standardized, and its overall performance, especially on resource-constrained, IoT-capable embedded devices, is not well understood. In this paper, we evaluate the time, memory and energy requirements of a post-quantum variant of TLS version 1.3 (PQ TLS 1.3) by integrating the pqm4 library implementations of the NIST round 3 post-quantum algorithms Kyber, Saber, Dilithium and Falcon into the popular wolfSSL TLS 1.3 library. In particular, our experiments focus on low-end, resource-constrained embedded devices, namely the ARM Cortex-M4-based NUCLEO-F439ZI (with a hardware cryptographic accelerator) and NUCLEO-F429ZI (without one) boards. These boards provide only a 180 MHz clock rate, 2 MB of flash memory and 256 KB of SRAM. To the authors' knowledge, this is the first thorough evaluation of the time delay, memory usage and energy consumption of PQ TLS 1.3 with the NIST round 3 finalist algorithms on resource-constrained embedded systems with and without cryptographic hardware acceleration. The results show that the post-quantum signatures Dilithium and Falcon and the post-quantum KEMs Kyber and Saber generally perform well in TLS 1.3 on embedded devices, in terms of both TLS handshake time and energy consumption. There is no significant difference between the TLS handshake times of Kyber and Saber, but the handshake time with Falcon is much lower than that with Dilithium. In addition, a hardware cryptographic accelerator for symmetric-key primitives improves the TLS handshake time by about 6% on the client side, and by as much as 19% on the server side, at high security levels.
Jipeng Zhang, Junhao Huang, Zhe Liu, Sujoy Sinha Roy
ePrint Report
Saber is a module-lattice-based key encapsulation scheme that has been selected as a finalist in the NIST Post-Quantum Cryptography Standardization Project. As Saber computes over considerably large matrices and vectors of polynomials, its efficient implementation on memory-constrained IoT devices is very challenging. In this paper, we present an implementation of Saber with a minor tweak to the original protocol that achieves reduced memory consumption and better performance. We call this tweaked implementation `Saber+'; the difference from Saber is that we use different generation methods for the public matrix \(\boldsymbol{A}\) and the secret vector \(\boldsymbol{s}\) for memory optimization. Our highly optimized software implementation of Saber+ on a memory-constrained RISC-V platform achieves a 48% performance improvement over the best state-of-the-art memory-optimized implementation of the original Saber. Specifically, we present various memory and performance optimizations for Saber+ on a memory-constrained RISC-V microcontroller with merely 16 KB of memory available. We utilize the Number Theoretic Transform (NTT) to speed up the polynomial multiplication in Saber+. To optimize cycle counts and memory consumption during the NTT, we carefully compare the efficiency of complete and incomplete NTTs with platform-specific optimizations; we implement 4-layer merging in the complete NTT and 3-layer merging in the 6-layer incomplete NTT. An improved on-the-fly generation strategy for the public matrix and secret vector results in a low memory footprint. Furthermore, by combining different optimization strategies, various time-memory trade-offs are explored. Our software implementation of Saber+ on the selected RISC-V core takes just 3,809K, 3,594K, and 3,193K clock cycles for key generation, encapsulation, and decapsulation, respectively, while consuming at most 4.8 KB of stack.
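To make the layer-merging discussion concrete, below is a minimal iterative NTT; each pass of the outer loop is one butterfly layer, and "merging" layers means fusing consecutive passes to save loads and stores. A generic textbook sketch with toy parameters, not Saber+'s actual 16-bit arithmetic or RISC-V register scheduling:

```python
def ntt(a, q, g):
    """Iterative cyclic NTT of `a` mod q, where g is a primitive
    len(a)-th root of unity. Each value of `length` in the main loop
    is one butterfly layer -- the unit that implementations merge."""
    a = list(a)
    n = len(a)
    j = 0
    for i in range(1, n):               # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                  # one pass = one NTT layer
        w_len = pow(g, n // length, q)  # primitive `length`-th root
        for start in range(0, n, length):
            w = 1
            for k in range(length // 2):
                u = a[start + k]
                v = a[start + k + length // 2] * w % q
                a[start + k] = (u + v) % q
                a[start + k + length // 2] = (u - v) % q
                w = w * w_len % q
        length <<= 1
    return a

q, n = 257, 8
g = pow(3, (q - 1) // n, q)             # 3 generates (Z/257Z)*
a = [1, 2, 3, 4, 5, 6, 7, 8]
naive = [sum(a[j] * pow(g, i * j, q) for j in range(n)) % q
         for i in range(n)]
assert ntt(a, q, g) == naive            # matches the O(n^2) definition
```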
Ziaur Rahman, Xun Yi, Ibrahim Khalil, Andrei Kelarev
ePrint Report
The world has experienced a remarkable expansion of blockchain technology since it was first introduced as the basis of the cryptocurrency Bitcoin. It is now regarded as a pervasive frame of reference across almost all research domains, from virtual cash to agriculture, and from supply chains to the Internet of Things. The ability to maintain a self-administering ledger with genuine immutability makes blockchain appealing for the Internet of Things (IoT). As billions of IoT devices are now online in a distributed fashion, huge challenges and questions need to be addressed in pursuit of urgently needed solutions. The present paper is motivated by the aim of facilitating such efforts. Its contribution is to identify the trade-offs the IoT ecosystem typically encounters as a result of a wrong choice of blockchain technology. Unlike a survey or review, the critical findings of this paper target specific security challenges of blockchain-IoT infrastructures, and aim to direct developers and researchers in this domain toward sound combinations of blockchain-enabled IoT applications. In addition, the paper provides deep insight into Ethereum, Hyperledger and IOTA technologies, showing their limitations and prospects in terms of performance and scalability.
Ziaur Rahman, Ibrahim Khalil, Mousumi Sumi
ePrint Report
Several efforts have claimed that lightweight block ciphers are a suitable substitute for securing the Internet of Things, and they are now envisaged as a pervasive frame of reference for the privacy preservation of almost all smart and sensor-oriented appliances. Many approaches, however, trade the desired degree of security for simplicity and ease of implementation. Strengthening a well-known symmetric-key block algorithm using either a chaos-motivated logistic map or an elliptic curve has shown far-reaching potential for secure real-time communication. The popular features of logistic maps, such as unpredictability and randomness, can be used for dynamic key propagation, in sync with chaos-based scheduling techniques, towards data integrity. Since a single-bit alteration in the key can produce an outsized deviation in the output, this also helps data confidentiality. However, this comes at a cost in computation time, which makes instant data exchange between participating node entities a challenge. Considering the latency required for both secure encryption and decryption, the proposed approach suggests a modification of the key-origination matrix along with the S-box. Our observations plausibly indicate that the time required is proportional to the length of the plaintext sent, while being disproportionate to the probability of a given letter occurring in the message. In line with this, we investigate how the apparent chaos escalates the desired key initiation before message transmission.
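The logistic-map mechanism the abstract alludes to can be sketched in a few lines: iterate x_{n+1} = r·x_n·(1 − x_n) in the chaotic regime and extract key material from the orbit. A toy sketch (parameters are illustrative; chaos-based generators like this should not be treated as vetted cryptographic PRNGs):

```python
def logistic_key(seed: float, r: float = 3.99, nbytes: int = 16) -> bytes:
    """Derive key bytes from a logistic-map orbit (toy illustration).

    Tiny changes to `seed` diverge exponentially (sensitivity to initial
    conditions) -- the 'large deviation from a bit alteration' property
    the abstract describes."""
    x = seed
    out = bytearray()
    for _ in range(100):            # burn-in past transient behavior
        x = r * x * (1 - x)
    for _ in range(nbytes):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

k1 = logistic_key(0.123456789)
k2 = logistic_key(0.123456790)      # ~1e-9 change in the seed
assert k1 != k2                     # orbits diverge after the burn-in
```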
Mariana Botelho da Gama, John Cartlidge, Antigoni Polychroniadou, Nigel P. Smart, Younes Talibi Alaoui
ePrint Report
We examine bucket-based and volume-based algorithms for privacy-preserving asset trading in a financial dark pool. Our bucket-based algorithm places orders in quantised buckets, whereas the volume-based algorithm allows any volume size but requires more complex validation mechanisms. In all cases, we conclude that these algorithms are highly efficient and offer a practical solution to the commercial problem of preserving the privacy of order information in a dark pool trading venue.
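The bucket-based matching logic, in the clear, is straightforward; the protocols in the paper evaluate essentially this computation on secret-shared order books. A plaintext sketch with illustrative bucket size and order format:

```python
from collections import defaultdict

BUCKET = 100  # quantisation unit: orders are placed in multiples of this

def match_bucketed(buys, sells):
    """Match buy and sell volumes per instrument, bucket by bucket.

    Quantising to buckets leaks less about exact volumes and keeps the
    matching logic cheap -- which is what makes it MPC-friendly."""
    fills = []
    book = defaultdict(lambda: [0, 0])  # instrument -> [buy, sell] buckets
    for inst, vol in buys:
        book[inst][0] += vol // BUCKET
    for inst, vol in sells:
        book[inst][1] += vol // BUCKET
    for inst, (b, s) in book.items():
        fills.append((inst, min(b, s) * BUCKET))  # crossed volume
    return fills

print(match_bucketed(buys=[("XYZ", 250), ("ABC", 1000)],
                     sells=[("XYZ", 400)]))
# [('XYZ', 200), ('ABC', 0)] -- 2 buckets cross on XYZ, none on ABC
```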
Huck Bennett, Atul Ganju, Pura Peetathawatchai, Noah Stephens-Davidowitz
ePrint Report
We study the computational problem of finding a shortest non-zero vector in a rotation of $\mathbb{Z}^n$, which we call $\mathbb{Z}$SVP. It has been a long-standing open problem to determine whether a polynomial-time algorithm for $\mathbb{Z}$SVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain special cases. However, despite all of this work, the fastest known algorithm that is proven to solve $\mathbb{Z}$SVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in $2^{n + o(n)}$ time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for $\mathbb{Z}$SVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And what consequences would follow if $\mathbb{Z}$SVP actually is hard? Our results are as follows.

1) We show that $\mathbb{Z}$SVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce $\mathbb{Z}$SVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of $\mathbb{Z}$SVP from SVP. As a consequence of this reduction, we obtain a $2^{0.802n}$-time algorithm for $\mathbb{Z}$SVP, i.e., a non-trivial speedup over the best known algorithm for SVP on general lattices.

2) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) $\mathbb{Z}$SVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of $\mathbb{Z}^n$ from either a lattice with all non-zero vectors longer than $\sqrt{n/\log n}$ or a lattice with smoothing parameter significantly smaller than the smoothing parameter of $\mathbb{Z}^n$. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that ``$\mathbb{Z}^n$ has the largest smoothing parameter.''

3) We show a distribution of bases $B$ for rotations of $\mathbb{Z}^n$ such that, if $\mathbb{Z}$SVP is hard for any input basis, then $\mathbb{Z}$SVP is hard on input $B$. This gives a satisfying theoretical resolution to the problem of sampling hard bases for $\mathbb{Z}^n$, which was studied by Blanks and Miller (PQCrypto, 2021). This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices (ia.cr/2021/1332), and they also used this to analyze the security of a public-key encryption scheme.)

4) We perform experiments to determine how practical basis reduction performs on different bases of $\mathbb{Z}^n$. These experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of reduction algorithms (i.e., larger block sizes) and study the ``provably hard'' distribution of bases described above. We also observe a threshold phenomenon in which ``basis reduction algorithms on $\mathbb{Z}^n$ nearly always find a shortest non-zero vector once they have found a vector with length less than $\sqrt{n}/2$,'' and we explore this further.
Chen Chen, Xiao Liang, Bogdan Carbunar, Radu Sion
ePrint Report
Data privacy is critical in instilling trust and empowering the societal pacts of modern technology-driven democracies. Unfortunately, it is under continuous attack by overreaching or outright oppressive governments, including some of the world's oldest democracies. Increasingly-intrusive anti-encryption laws severely limit the ability of standard encryption to protect privacy. New defense mechanisms are needed.

Plausible deniability (PD) is a powerful property, enabling users to hide the existence of sensitive information in a system under direct inspection by adversaries. Popular encrypted storage systems such as TrueCrypt and other research efforts have attempted to also provide plausible deniability. Unfortunately, these efforts have often operated under less well-defined assumptions and adversarial models. Careful analyses often uncover not only high overheads but also outright security compromises. Further, our understanding of adversaries, the underlying storage technologies, and the available plausibly deniable solutions has evolved dramatically in the past two decades.

The main goal of this work is to systematize this knowledge. It aims to:
  • identify key PD properties, requirements, and approaches;
  • present a direly-needed unified framework for evaluating security and performance;
  • explore the challenges arising from the critical interplay between PD and modern layered system stacks;
  • propose a new "trace-oriented" PD paradigm, able to decouple security guarantees from the underlying systems and thus ensure a higher level of flexibility and security independent of the technology stack.

This work is also meant as a trusted guide for system and security practitioners around the major challenges in understanding, designing, and implementing plausible deniability in new or existing systems.
Damien Robissout, Lilian Bossuet, Amaury Habrard, Vincent Grosso
ePrint Report
The use of deep learning techniques to perform side-channel analysis has attracted the attention of many researchers, as good performances have been obtained with them. Unfortunately, the understanding of the neural networks used to perform side-channel attacks is not very advanced yet. In this paper, we propose to contribute to this direction by studying the impact of some particular deep learning techniques on side-channel attack problems. More precisely, we focus on three existing techniques not yet used in the side-channel context: batch normalization, dropout and weight decay. By combining these techniques adequately for our problem, we show that it is possible to improve the attack performance, i.e., the number of traces needed to recover the secret, by more than 55%. Additionally, they yield a gain of more than 34% in training time. We also show that an architecture trained with such techniques is able to perform attacks efficiently even in the context of desynchronized traces.
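All three techniques are one-liners in a modern framework. A minimal PyTorch sketch of a trace classifier combining batch normalization, dropout, and weight decay (layer sizes and rates are illustrative, not the paper's tuned architecture):

```python
import torch
import torch.nn as nn

TRACE_LEN, N_CLASSES = 700, 256       # e.g. recovering one key byte

model = nn.Sequential(
    nn.Linear(TRACE_LEN, 200),
    nn.BatchNorm1d(200),              # batch normalization
    nn.ReLU(),
    nn.Dropout(p=0.5),                # dropout
    nn.Linear(200, N_CLASSES),
)

# Weight decay is applied through the optimizer's L2 penalty.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random stand-in data:
traces = torch.randn(64, TRACE_LEN)
labels = torch.randint(0, N_CLASSES, (64,))
opt.zero_grad()
loss = loss_fn(model(traces), labels)
loss.backward()
opt.step()
```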