## IACR News

Here you can see all recent updates to the IACR webpage.


#### 18 August 2019

###### Handan Kılınç Alper

ePrint Report
Ouroboros Praos is a proof-of-stake-based blockchain protocol. One of its security assumptions is that parties are synchronized, i.e., all of them know when the protocol passes to a new state. However, such synchrony is not easy to achieve in real life, especially in a decentralized network. Therefore, we construct a new version of Ouroboros Praos by composing it with a new protocol called the relative time protocol. We call the new version Ouroboros Clepsydra. At the end of the relative time protocol, a party learns the approximate state of the protocol based on the median of the arrival times of messages sent by the other parties and adjusts its local clock accordingly. The relative time protocol adds no new computation for the other parties; they do not even realize that they are part of it. In order to prove Ouroboros Clepsydra secure in the Universally Composable (UC) model, we define a general UC model that captures the notion of relative time. We remove the synchronization assumption in Ouroboros Clepsydra and show that Ouroboros Clepsydra is a secure proof-of-stake blockchain protocol in the UC model.
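The median-based clock adjustment can be sketched in a few lines — a minimal toy, not the paper's UC-modelled protocol; the function name and the flat list of reported times are invented for illustration:

```python
from statistics import median

def adjust_local_clock(local_time, reported_times):
    """Toy sketch of median-based clock adjustment: shift the local
    clock by the median offset between other parties' reported
    protocol times and our own. The median is robust against a
    minority of wildly wrong (or malicious) reports."""
    offsets = [t - local_time for t in reported_times]
    return local_time + median(offsets)

# An honest majority reports ~100; two outliers cannot drag the median far.
print(adjust_local_clock(90, [100, 101, 99, 100, 5000, -3]))  # 100.0
```

The outlier resilience of the median is what lets honest reports dominate as long as a majority of messages arrive from honest parties.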

###### Eman Salem Alashwali, Pawel Szalachowski, Andrew Martin

ePrint Report
Experience shows that most researchers and developers tend to treat plain-domains (those that are not prefixed with the “www” sub-domain, e.g. “example.com”) as synonyms for their equivalent www-domains (those that are prefixed with the “www” sub-domain, e.g. “www.example.com”). In this paper, we analyse datasets of nearly two million plain-domains against their equivalent www-domains to answer the following question: Do plain-domains and their equivalent www-domains differ in TLS security configurations and certificates? If so, to what extent? Our results provide evidence of an interesting phenomenon: plain-domains and their equivalent www-domains differ in TLS security configurations and certificates in a non-trivial number of cases. Furthermore, www-domains tend to have stronger security configurations than their equivalent plain-domains. Interestingly, this phenomenon is more prevalent in the most-visited domains than in randomly-chosen domains. Further analysis of the top domains dataset shows that 53.35% of the plain-domains that show one or more weakness indicators (e.g. an expired certificate) not shown by their equivalent www-domains perform HTTPS redirection from the HTTPS plain-domain to the equivalent HTTPS www-domain. Additionally, 24.71% of these redirections contain plain-text HTTP intermediate URLs. In these cases, users see the final www-domains with strong TLS configurations and certificates, but in fact, the HTTPS request has passed through plain-domains with less secure TLS configurations and certificates. Clearly, such a set-up introduces a weak link in the security of the overall interaction.
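The plain-vs-www comparison at the heart of the measurement can be sketched with toy scan results; the weakness indicators and config fields below are hypothetical stand-ins for the paper's actual checks:

```python
# Hypothetical weakness indicators; the paper's actual indicator set differs.
WEAKNESS_CHECKS = {
    "expired_cert": lambda c: c["cert_expired"],
    "legacy_tls": lambda c: c["min_tls_version"] < 1.2,
    "rsa_lt_2048": lambda c: c["rsa_key_bits"] < 2048,
}

def extra_weaknesses(plain_cfg, www_cfg):
    """Indicators shown by the plain-domain but not by its www-domain."""
    return sorted(
        name for name, check in WEAKNESS_CHECKS.items()
        if check(plain_cfg) and not check(www_cfg)
    )

plain = {"cert_expired": True, "min_tls_version": 1.0, "rsa_key_bits": 2048}
www = {"cert_expired": False, "min_tls_version": 1.2, "rsa_key_bits": 2048}
print(extra_weaknesses(plain, www))  # ['expired_cert', 'legacy_tls']
```

A non-empty result for a domain pair is exactly the situation the paper counts: the plain-domain is a weak link hidden behind its stronger www twin.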

###### Nasrollah Pakniat

ePrint Report
Certificateless cryptography can be considered an intermediate solution to overcome the issues in traditional public key infrastructure (PKI) and identity-based public key cryptography (ID-PKC). There exists a vast number of certificateless signature (CLS) schemes in the literature; however, most of them are not efficient enough to be utilized in resource-limited environments such as the Internet of Things (IoT) or Healthcare Wireless Sensor Networks (HWSNs). Recently, two lightweight CLS schemes have been proposed by Karati et al. and Kumar et al. to be employed in IoT and HWSNs, respectively. While both schemes are claimed to be existentially unforgeable, in this paper, we show that both these signatures can easily be forged. More specifically, it is shown that 1) in Karati et al.'s scheme, a type 1 adversary, as considered in certificateless cryptography, can generate a valid partial private key corresponding to any user of its choice and, as a consequence, can forge any user's signature on any message of its choice, and 2) in Kumar et al.'s scheme, both types of adversaries considered in certificateless cryptography are able to forge any signer's signature on an arbitrary message.

###### Martin Albrecht, Melissa Chase, Hao Chen, Jintai Ding, Shafi Goldwasser, Sergey Gorbunov, Shai Halevi, Jeffrey Hoffstein, Kim Laine, Kristin Lauter, Satya Lokam, Daniele Micciancio, Dustin Moody, Trav

ePrint Report
Homomorphic Encryption is a breakthrough technology which can enable private cloud storage and computation solutions, and many applications have been described in the literature in the last few years. But before Homomorphic Encryption can be adopted in medical, health, and financial sectors to protect data and patient and consumer privacy, it will have to be standardized, most likely by multiple standardization bodies and government agencies. An important part of standardization is broad agreement on security levels for varying parameter sets. Although extensive research and benchmarking has been done in the research community to establish the foundations for this effort, it is hard to find all the information in one place, along with concrete parameter recommendations for applications and deployment.
This document is the first Homomorphic Encryption Standard (HES) approved by the Homomorphicencryption.org community in 2018. It captures the collective knowledge on the state of security of these schemes, specifies the schemes, and recommends a wide selection of parameters to be used for homomorphic encryption at various security levels. We describe known attacks and their estimated running times in order to make these security parameter recommendations.

###### Gaëtan Leurent, Ferdinand Sibleyras

ePrint Report
The iterated Even-Mansour construction is an elegant construction that idealizes block cipher designs such as the AES. In this work we focus on the simplest variant, the 2-round Even-Mansour construction with a single key. This is the most minimal construction that offers security beyond the birthday bound: there is a security proof up to $2^{2n/3}$ evaluations of the underlying permutations and encryption, and the best known attacks have a complexity of roughly $2^n/n$ operations.
We show that attacking this scheme with block size $n$ is related to the 3-XOR problem with element size $w = 2n$, an important algorithmic problem that has been studied since the nineties. In particular the 3-XOR problem is known to require at least $2^{w/3}$ queries, and the best known algorithms require around $2^{w/2} / w$ operations: this roughly matches the known bounds for the 2-round Even-Mansour scheme. Using this link we describe new attacks against the 2-round Even-Mansour scheme. In particular, we obtain the first algorithms where both the data and the memory complexity are significantly lower than $2^n$.
From a practical standpoint, previous works with a data and/or memory complexity close to $2^n$ are unlikely to be more efficient than a simple brute-force search over the key. Our best algorithm requires just $\lambda n$ known plaintext/ciphertext pairs, for some constant $0 < \lambda < 1$, $2^n/\lambda n$ time, and $2^{\lambda n}$ memory. For instance, with $n = 64$ and $\lambda = 1/2$, the memory requirement is practical, and we gain a factor 32 over brute-force search. We also describe an algorithm with asymptotic complexity $O(2^n \ln^2{n}/n^2)$, improving the previous asymptotic complexity of $O(2^n/n)$, using a variant of the 3-SUM algorithm of Baran, Demaine, and Patrascu.
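The 3-XOR problem that the attacks reduce to can be stated very concretely. The sketch below is the naive baseline (index all pairwise XORs, then look up each element of the third list), not the authors' improved algorithms:

```python
def three_xor(A, B, C):
    """Find (a, b, c) in A x B x C with a ^ b ^ c == 0 by storing
    all XORs a ^ b in a dictionary and looking each c up. This is
    the quadratic-time baseline the 3-XOR literature improves on."""
    pairs = {}
    for a in A:
        for b in B:
            pairs.setdefault(a ^ b, (a, b))
    for c in C:
        if c in pairs:
            a, b = pairs[c]
            return a, b, c
    return None

# Toy instance: 0x12 ^ 0x56 == 0x44, so a solution exists.
A, B, C = [0x12, 0x34], [0x56, 0x78], [0x44, 0x99]
sol = three_xor(A, B, C)
print(sol)  # (18, 86, 68)
```

In the Even-Mansour connection, the three lists come from queries to the permutation and the encryption oracle, and a 3-XOR solution reveals key material.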

###### Sayandeep Saha, Debapriya Basu Roy, Arnab Bag, Sikhar Patranabis, Debdeep Mukhopadhyay

ePrint Report
Fault attacks (FA) are one of the potent practical threats to modern cryptographic implementations. Over the years the FA techniques have evolved, gradually moving towards the exploitation of device-centric properties of the faults. In this paper, we exploit the fact that activation and propagation of a fault through a given combinational circuit (i.e. observability of a fault) is data dependent. Next, we show that this property of combinational circuits leads to powerful fault attacks even for implementations having dedicated and provably secure protections against both power and fault-assisted vulnerabilities. The attacks found in this work are applicable even if the fault injection is made at the middle rounds of a block cipher, which are out of reach for most of the other existing fault analysis strategies. Quite evidently, they also work for a known plaintext scenario. Moreover, the middle round attacks are entirely blind in the sense that no access to the ciphertexts (correct/faulty) or plaintexts is required. The adversary is only assumed to have the power of repeating an unknown plaintext several times. Experimental validation over software implementations of PRESENT and AES proves the efficacy of the proposed attacks.

###### Markku-Juhani O. Saarinen

ePrint Report
SNEIK is a family of lightweight cryptographic algorithms derived from a single 512-bit permutation. The SNEIKEN authenticated encryption and SNEIKHA hash algorithms are candidates in the ongoing NIST Lightweight Cryptography standardization effort. The SNEIGEN "entropy distribution function" was designed to speed up certain functions in post-quantum and lattice-based public key algorithms.

We implement and evaluate SNEIK algorithms on popular 8-bit AVR and 32-bit ARMv7-M (Cortex M3/M4) microcontrollers, and also describe an implementation for the open-source RISC-V (RV32I) Instruction Set Architecture (ISA). Our results demonstrate that SNEIK algorithms usually outperform AES and SHA-2/3 on these lightweight targets while having a naturally constant-time design and a significantly smaller implementation footprint. The RISC-V architecture is becoming increasingly popular for custom embedded designs that integrate a CPU core with application-specific hardware components.

We show that the inclusion of two simple custom instructions into the RV32I ISA yields a radical (more than five-fold) speedup of the SNEIK permutation and derived algorithms on that target, allowing us to reach 12.4 cycles/byte SNEIKEN-128 authenticated encryption performance on PQShield's "Crimson Puppy" RV32I-based SoC. Our performance measurements are for realistic message sizes and have been made using real hardware. We also offer implementation size metrics in terms of RAM, firmware size, and additional FPGA logic for the custom instruction set extensions.

###### Kai Chen, Zhongrui Lin, Jian Wan, Chungen Xu

ePrint Report
In cloud security, traditional searchable encryption (SE) requires high computation and communication overhead for dynamic search and update. The clever combination of machine learning (ML) and SE may be a new way to solve this problem. This paper proposes interpretable encrypted searchable neural networks (IESNN) to explore probabilistic queries, balanced index tree construction and automatic weight updates in an encrypted cloud environment. In IESNN, probabilistic learning is used to obtain a search ranking for the searchable index, and probabilistic queries are performed over the ciphertext index, which reduces the computational complexity of queries significantly. Compared to traditional SE, IESNN employs adversarial learning and automatic weight updates to serve users' timely queries over the latest data set without expensive communication overhead. The proposed IESNN performs better than previous works, bringing the query complexity closer to $O(\log N)$ and introducing low overhead in computation and communication.

###### Mohsin Khan, Kaisa Nyberg

ePrint Report
The goal of this paper is to investigate the behavior of the ideal cipher under linear cryptanalysis. The motivation of this work is twofold. First, before a practical cipher can be distinguished from an ideal one, the cryptanalyst must have an accurate understanding of the statistical behavior of the ideal cipher. Secondly, this issue has been neglected both in old and in more recent studies, particularly when multiple linear approximations are being used simultaneously. Traditionally, the models have been based on the average behavior and simplified using other artificial assumptions such as independence of linear approximations. The new models given in this paper are realistic, accurate and easy to use. They are backed up by standard statistical tools such as Pearson's chi-squared test and finite population correction and shown to work well in small practical examples.
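Pearson's chi-squared test, one of the standard statistical tools the authors back their models with, reduces to a one-line statistic; the cell counts below are invented toy data, not from the paper:

```python
def chi_squared(observed, expected):
    """Pearson's chi-squared statistic: sum of (O - E)^2 / E over cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Toy: 100 samples over 4 cells; under an ideal (uniform) model E = 25 each.
observed = [30, 20, 27, 23]
expected = [25.0] * 4
stat = chi_squared(observed, expected)
print(round(stat, 2))  # 2.32
```

Here 2.32 is well below the 5% critical value of about 7.81 for 3 degrees of freedom, so these toy counts are consistent with the uniform (ideal-cipher-like) model — exactly the kind of judgment the paper's models formalize.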

###### Daniele Antonioli, Nils Ole Tippenhauer, Kasper Rasmussen

ePrint Report
The specification of Bluetooth and Bluetooth Low Energy includes dedicated encryption key negotiation protocols used by two parties to agree on the entropy of encryption keys. In this work, we show that an attacker can manipulate the entropy negotiation of Bluetooth and Bluetooth Low Energy to drastically reduce the encryption key space. We call our attack the Key Negotiation Of Bluetooth (KNOB) attack.

In the case of Bluetooth, we demonstrate that the entropy can be reduced from 16 to 1 Byte. Such low entropy enables the attacker to easily brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages in real time. For Bluetooth Low Energy, we show that the entropy can still be downgraded from 16 to 7 Bytes, which reduces the attacker's effort to brute force the keys.

We implement and evaluate the KNOB attack on 17 Bluetooth chips (e.g., Intel, Broadcom, Apple, and Qualcomm) and 15 Bluetooth Low Energy devices (e.g., Lenovo, Garmin, Samsung, Xiaomi, and Fitbit). Our results demonstrate that all tested devices are vulnerable to the KNOB attack. We discuss legacy and non-legacy compliant countermeasures to neutralize or mitigate the KNOB attack.
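Why one byte of entropy is fatal can be seen with a toy cipher. The stand-in below is not Bluetooth's E0 or AES-CCM — just an illustration of how a 256-key exhaustive search recovers a key whose entropy was negotiated down to a single byte:

```python
import hashlib

def toy_cipher(key_byte, data):
    """Stand-in stream cipher (NOT Bluetooth's actual cipher): keystream
    from SHA-256 of the expanded key byte, XORed into the data."""
    stream = hashlib.sha256(bytes([key_byte]) * 16).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

def knob_style_bruteforce(plaintext_guess, ciphertext):
    """With key entropy forced down to 1 byte, 256 trial decryptions
    suffice to recover the key (XOR decryption = re-encryption)."""
    for k in range(256):
        if toy_cipher(k, ciphertext) == plaintext_guess:
            return k
    return None

pt = b"sixteen byte msg"
ct = toy_cipher(0x42, pt)
print(knob_style_bruteforce(pt, ct) == 0x42)  # True
```

The same loop, run against real traffic with a known-plaintext fragment, is what makes real-time decryption and injection feasible after a KNOB downgrade.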


###### Muhammad ElSheikh, Amr M. Youssef

ePrint Report
$\texttt{CRAFT}$ is a lightweight tweakable block cipher introduced at FSE 2019. One of the main design criteria of $\texttt{CRAFT}$ is the efficient protection of its implementations against differential fault analysis. While the authors of $\texttt{CRAFT}$ provide several cryptanalysis results in various attack models, they do not claim any security of $\texttt{CRAFT}$ against related-key differential attacks.
In this paper, we utilize the simple key schedule of $\texttt{CRAFT}$ to propose a systematic method for constructing several repeatable 2-round related-key differential characteristics with probability $2^{-2}$. We then employ one of these characteristics to mount a key recovery attack on full-round $\texttt{CRAFT}$ using $2^{30}$ queries to the encryption oracle and $2^{69}$ encryptions. Additionally, we manage to use 8 related-key differential distinguishers, with 8 related-key differences, in order to mount a key recovery attack on the full-round cipher with $2^{35.17}$ queries to the encryption oracle, $2^{32}$ encryptions and about $2^6$ 64-bit blocks of memory. Furthermore, we present another attack that recovers the whole master key with $2^{36.09}$ queries to the encryption oracle and only $11$ encryptions with $2^7$ blocks of memory using 16 related-key differential distinguishers.

###### Andre Esser, Alexander May

ePrint Report
We propose two polynomial memory collision finding algorithms for the low Hamming weight discrete logarithm problem in any abelian group $G$. The first one is a direct adaptation of the Becker-Coron-Joux (BCJ) algorithm for subset sum to the discrete logarithm setting. The second one significantly improves on this adaptation for all possible weights using a more involved application of the representation technique together with some new Markov chain analysis. In contrast to other low weight discrete logarithm algorithms, our second algorithm's time complexity interpolates to Pollard's $|G|^{\frac 1 2}$ bound for general discrete logarithm instances.

We also introduce a new subset sum algorithm with polynomial memory that improves on BCJ's $2^{0.72n}$ time bound for random subset sum instances $a_1, \ldots, a_n, t \in Z_{2^n}$. Technically, we introduce a novel nested collision finding for subset sum -- inspired by the NestedRho algorithm from Crypto '16 -- that recursively produces collisions. We first show how to instantiate our algorithm with run time $2^{0.649n}$. Using further tricks, we are then able to improve its complexity down to $2^{0.645n}$.
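The memoryless collision-finding primitive underlying rho-style algorithms can be sketched with plain Floyd cycle finding — this is the generic $O(1)$-memory building block, not the authors' nested variant:

```python
def rho_collision(f, x0):
    # Phase 1 (Floyd): find a meeting point inside the cycle of the
    # sequence x0, f(x0), f(f(x0)), ... using O(1) memory.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: walk to the cycle entry; the two predecessors of the
    # entry point (one on the tail, one on the cycle) collide under f.
    tortoise = x0
    while tortoise != hare:
        prev_t, prev_h = tortoise, hare
        tortoise, hare = f(tortoise), f(hare)
    return prev_t, prev_h

# 1019 is a prime congruent to 3 mod 4, so x^2 + 1 is never 0 mod 1019;
# hence 0 is not in the image of f and the start 0 lies on the tail.
f = lambda x: (x * x + 1) % 1019
x, y = rho_collision(f, 0)
print(x != y, f(x) == f(y))  # True True
```

Nested variants, like the NestedRho-inspired algorithm in the paper, run such collision searches recursively, with inner collisions defining the step function of the outer search.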


###### Yang Wang, Mingqiang Wang

ePrint Report
Until now, the only reduction from the module learning with errors problem (MLWE) to the ring learning with errors problem (RLWE) is the one given by Albrecht et al. at ASIACRYPT 2017. Their reduction from search MLWE to search RLWE is satisfactory over power-of-$2$ cyclotomic fields, with a relatively small increase of errors. However, a direct reduction from decision MLWE to decision RLWE leads to a super-polynomial increase of errors and does not work even in the most special case, power-of-$2$ cyclotomic fields. Whether we can reduce decision MLWE to decision RLWE, and whether similar reductions also work for general fields, are still open questions. In this paper, we give a reduction from decision MLWE with module rank $d$ and computation modulus $q$ in the worst case to decision RLWE with modulus $q^d$ in the average case over any cyclotomic field. Our reduction increases the LWE error rate by a small polynomial factor. As a consequence, we obtain an efficient reduction from decision MLWE with modulus $q\approx\tilde{O}(n^{5.75})$ and error rate $\alpha\approx\tilde{O}(n^{-4.25})$ in the worst case to decision RLWE with error rate $\Gamma\approx\tilde{O}(n^{-\frac{1}{2}})$ in the average case; hence, we get a reduction from the worst-case module approximate shortest independent vectors problem (SIVP$_\gamma$) with approximation parameter $\gamma\approx\tilde{O}(n^{5})$ to corresponding average-case decision RLWE problems. Meanwhile, our result shows that the search-variant reductions of Albrecht et al. work in arbitrary cyclotomic fields as well. We also give an efficient self-reduction of RLWE problems and a converse reduction from decision MLWE to module SIVP$_{\gamma}$ over any cyclotomic field, improving related results of Rosca et al. in EUROCRYPT 2018 and Langlois et al. [DCC 15]. Our methods can also be applied to more general algebraic fields $K$, as long as we can find a good enough basis of the dual $R^{\vee}$ of the ring of integers of $K$.

###### Samuel Dobson, Steven D. Galbraith

ePrint Report
Fujioka, Takashima, Terada and Yoneyama, in their 2018 work on an authenticated key exchange protocol using supersingular isogenies, use new assumptions in their security proof of the scheme. In particular, they define the degree-sensitive and degree-insensitive SI-GDH assumptions and problems. These assumptions include a decision oracle that is used in the security proofs. We give evidence that those assumptions are not well defined. Hence, the security proofs in their paper do not seem to be correct.

###### Fatemeh Ganji, Shahin Tajik, Domenic Forte, Jean-Pierre Seifert

ePrint Report
Electronic chips in consumer, industrial, and military applications are targeted by untrusted entities in the design and fabrication process, aiming at reverse-engineering and Intellectual Property (IP) piracy. Hardware obfuscation techniques, namely logic locking and IC camouflaging, have been developed to mislead an adversary hoping to reverse engineer the netlist and extract the functionality. However, virtually all existing hardware obfuscation schemes developed over the last decade have been shown to be vulnerable to oracle-guided attacks, e.g., SAT and machine learning attacks. Moreover, most of these schemes rely on an ideal tamper- and read-proof memory to store the key used to unlock the circuit after manufacturing. In this work, we propose two novel cryptographically secure hardware obfuscation schemes using garbled circuits, which are compatible with current circuit synthesis and fabrication tools. Our first construction does not require any secure hardware with tamper- and read-proof memory. In this case, the security of the obfuscation is guaranteed by Proof-of-Stake blockchain protocols and witness encryption schemes. However, for our second construction, we assume the existence of secure memory in the hardware to achieve higher performance and less overhead. Both constructions are inspired by program obfuscation and one-time program techniques, enabling us to selectively encrypt and garble some IP cores during integration as well as manufacturing to prevent IP piracy. Furthermore, with the help of our constructions, we can realize one-time and pay-per-use hardware, where a user can use the electronic circuit for a limited amount of time.
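The garbling idea the constructions build on can be illustrated with a single toy AND gate. The XOR-pad "encryption" and the absence of point-and-permute make this a teaching sketch of the classical technique, not a secure scheme and not the paper's construction:

```python
import hashlib, os

def enc(k1, k2, msg):
    """Toy double-key one-time pad derived by hashing the two keys."""
    pad = hashlib.sha256(k1 + k2).digest()[: len(msg)]
    return bytes(m ^ p for m, p in zip(msg, pad))

def garble_and_gate():
    """Garbled AND gate: one random 16-byte label per wire value
    (wires a, b in; c out), four double-encrypted table rows."""
    labels = {w: {b: os.urandom(16) for b in (0, 1)} for w in "abc"}
    table = [
        enc(labels["a"][x], labels["b"][y], labels["c"][x & y])
        for x in (0, 1) for y in (0, 1)
    ]
    return labels, table

def evaluate(table, ka, kb, out_labels):
    """Trial-decrypt every row; keep the one yielding a valid output
    label (a real scheme marks the correct row instead)."""
    for row in table:
        cand = enc(ka, kb, row)  # XOR pad: decryption = encryption
        if cand in out_labels.values():
            return cand
    return None

labels, table = garble_and_gate()
ok = all(
    evaluate(table, labels["a"][x], labels["b"][y], labels["c"]) == labels["c"][x & y]
    for x in (0, 1) for y in (0, 1)
)
print(ok)  # True
```

The evaluator who holds exactly one label per input wire learns the output label but nothing about the other rows — the property the obfuscation schemes lift from software circuits to IP cores.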

###### Lorenz Panny

ePrint Report
The Charles-Goren-Lauter hash function on isogeny graphs of supersingular elliptic curves was shown to be insecure under collision attacks when the endomorphism ring of the starting curve is known. Since there is no known way to generate a supersingular elliptic curve with verifiably unknown endomorphisms, the hash function can currently only be used after a trusted-setup phase. This note presents a simple modification to the construction of the hash function which, under a few heuristics, prevents said collision attack and permits the use of arbitrary starting curves, albeit with a performance impact of a factor of two.
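The start-dependence at issue can be illustrated with a toy walk hash, where each input bit picks one of two steps from the current vertex. The step functions below are arbitrary arithmetic stand-ins for isogeny steps, invented for illustration:

```python
def walk_hash(bits, start, p=1000003):
    """Toy walk hash (stand-in for a CGL-style isogeny walk): each
    input bit chooses one of two deterministic steps from the current
    vertex; the final vertex is the digest."""
    v = start % p
    for b in bits:
        v = (2 * v + 1) % p if b else (3 * v + 2) % p
    return v

print(walk_hash([1, 0, 1], 7))  # 95
```

Because the digest depends on the starting vertex, a collision found for one start need not transfer to another — the intuition behind letting arbitrary starting curves replace a trusted setup.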

###### Gilles Barthe, Sandrine Blazy, Benjamin Grégoire, Rémi Hutin, Vincent Laporte, David Pichardie, Alix Trieu

ePrint Report
Timing side-channels are arguably one of the main sources of vulnerabilities in cryptographic implementations. One effective mitigation against timing side-channels is to write programs that do not perform secret-dependent branches and memory accesses. This mitigation, known as "cryptographic constant-time", is adopted by several popular cryptographic libraries.

This paper focuses on compilation of cryptographic constant-time programs, and more specifically on the following question: is the code generated by a realistic compiler for a constant-time source program itself provably constant-time? Surprisingly, we answer the question positively for a mildly modified version of the CompCert compiler, a formally verified and moderately optimizing compiler for C. Concretely, we modify the CompCert compiler to eliminate sources of potential leakage. Then, we instrument the operational semantics of CompCert intermediate languages so as to be able to capture cryptographic constant-time. Finally, we prove that the modified CompCert compiler preserves constant-time. Our mechanization maximizes reuse of the CompCert correctness proof, through the use of new proof techniques for proving preservation of constant-time. These techniques achieve complementary trade-offs between generality and tractability of proof effort, and are of independent interest.
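The constant-time discipline can be illustrated in a few lines. Python itself gives no real timing guarantees, so this only shows the control-flow pattern (no secret-dependent branches) that a preserving compiler must carry through to machine code:

```python
import hmac

def leaky_equal(a, b):
    """Early-exit comparison: running time depends on the first
    differing byte — a classic timing side channel."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a, b):
    """Branch-free comparison: accumulate differences with OR and
    inspect the result only once, as hmac.compare_digest does."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

tag = b"secret-mac-value"
print(constant_time_equal(tag, b"secret-mac-value"),
      constant_time_equal(tag, b"secret-mac-valuX"),
      hmac.compare_digest(tag, b"secret-mac-value"))  # True False True
```

The paper's question is whether an optimizing compiler can silently turn the second pattern back into the first; the modified CompCert is proved not to.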


###### Wulu Li, Lei Chen, Xin Lai, Xiao Zhang, Jiajun Xin

ePrint Report
Privacy protection has been extensively studied in current blockchain research. As representative examples, Monero and Zerocash have realized completely anonymous and amount-hiding transactions. However, a lack of regulation can lead to abuse of privacy, which brings serious risks of breaking laws and committing crimes. Therefore, it is crucial to study privacy-preserving blockchain systems with regulatory functions. In this paper, we discuss the regulatory model (regulator behavior, user behavior) for privacy-preserving blockchains from application scenarios and finally select unconditional regulation, static regulation, and self-participation of users as the core principles, which is currently the closest approach to the "decentralization" of regulatable privacy-preserving blockchains. On the basis of the established regulatory model, we propose the first traceable and linkable ring signature scheme (TLRS), built from classic ring signatures, one-time signatures and zero-knowledge proofs, which realizes unforgeability, anonymity, linkability, nonslanderability and traceability of users' identities. Moreover, we propose the first traceable Borromean range proof (TBoRP) and traceable Bulletproofs range proof (TBuRP), using the Fiat-Shamir transform and DH assumptions, which realize completeness, soundness, zero-knowledge and traceability of the transaction amounts. We also apply the newly designed schemes and other zero-knowledge proof protocols to achieve richer regulatory functions.

###### Dirk Achenbach, Roland Gröll, Timon Hackenjos, Alexander Koch, Bernhard Löwe, Jeremias Mechler, Jörn Müller-Quade, Jochen Rill

ePrint Report
EMV, also known as Chip and PIN, is the world-wide standard for card-based electronic payment. Its security wavers: over the past years, researchers have demonstrated various practical attacks, ranging from using stolen cards by disabling PIN verification to cloning cards by pre-computing transaction data. Most of these attacks rely on violating certain unjustified and not explicitly stated core assumptions upon which EMV is built, namely that the input device (e.g. the ATM) is trusted and all communication channels are non-interceptable. In addition, EMV lacks a comprehensive formal description of its security.
In this work we give a formal model for the security of electronic payment protocols in the Universal Composability (UC) framework. A particular challenge for electronic payment is that one participant of a transaction is a human who cannot perform cryptographic operations. Our goal is twofold. First, we want to enable a transition from the iterative engineering of such protocols to using cryptographic security models to argue about a protocol’s security. Second, we establish a more realistic adversarial model for payment protocols in the presence of insecure devices and channels.
We prove a set of necessary requirements for secure electronic payment with regard to our model. We then discuss the security of current payment protocols based on these results and find that most are insecure or require unrealistically strong assumptions. Finally, we give a simple payment protocol inspired by chipTAN and photoTAN and prove its security. Our model captures the security properties of electronic payment protocols with human interaction. We show how to use this to reason about necessary requirements for secure electronic payment and how to develop a protocol based on the resulting guidelines. We hope that this will facilitate the development of new protocols with well-understood security properties.

#### 13 August 2019

###### Bastian Richter, Alexander Wild, Amir Moradi

ePrint Report
In side-channel analysis attacks, on-die localized EM monitoring enables high-bandwidth measurements of only a relevant part of the Integrated Circuit (IC). This can lead to improved attacks compared to cases where only power consumption is measured. Combined with profiled attacks, which utilize a training phase to create precise models of the information leakage, the attacks can become even more powerful. However, localized EM measurements make it difficult to apply the learned models, as the probe must be identically positioned for both the training and the attack, even when the setup has been used for other measurements in between. Even small differences in the probe position can lead to significant differences in the recorded signals.
In this paper we present an automated system to precisely and efficiently reposition the probe when performing repeated measurements. Based on the training IC, we train a machine learning system to return the position of the probe for a given measurement. By taking a small number of measurements on the IC under attack, we can then obtain the coordinates of these measurements and map them into the correct coordinate system. As the target for our practical analyses, we use an STM32L0 ARM Cortex-M0+ microcontroller with integrated hardware AES.
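The coordinate-correction step can be sketched as a least-squares fit. The calibration points and the per-axis scale/offset drift model below are invented for illustration; the paper's actual mapping is learned from EM measurements:

```python
from statistics import mean

def fit_axis(pred, true):
    """1-D least squares: fit true ≈ scale * pred + offset, recovering
    the drift between model-predicted and actual probe coordinates."""
    mp, mt = mean(pred), mean(true)
    scale = sum((p - mp) * (t - mt) for p, t in zip(pred, true)) \
            / sum((p - mp) ** 2 for p in pred)
    return scale, mt - scale * mp

# Hypothetical calibration: positions predicted by the trained model vs.
# true stage positions on the chip under attack (drifted by 1.1x + 0.3).
pred = [0.0, 1.0, 2.0, 3.0]
true = [1.1 * p + 0.3 for p in pred]
scale, offset = fit_axis(pred, true)
print(round(scale, 6), round(offset, 6))  # 1.1 0.3
```

Once the drift is recovered from a handful of measurements on the victim chip, every model prediction can be remapped into the victim's coordinate system before repositioning the probe.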