International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

27 October 2021

Yiping Ma, Ke Zhong, Tal Rabin, Sebastian Angel
ePrint Report
Recent private information retrieval (PIR) schemes preprocess the database with a query-independent offline phase in order to achieve sublinear computation during a query-specific online phase. These offline/online protocols expand the set of applications that can profitably use PIR, but they make a critical assumption: that the database is immutable. In the presence of changes such as additions, deletions, or updates, existing schemes must preprocess the database from scratch, wasting prior effort. To address this, we introduce incremental preprocessing for offline/online PIR schemes, allowing the original preprocessing to continue to be used after database changes, while incurring an update cost proportional to the number of changes rather than the size of the database. We adapt two offline/online PIR schemes to use incremental preprocessing and show how it significantly improves the throughput and reduces the latency of applications where the database changes over time.
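To illustrate the incremental-preprocessing idea in the abstract above, here is a toy, non-private sketch in which client hints are XORs of random database subsets, so an update only touches the hints covering the changed index. It is an illustration in the general style of offline/online PIR hints, not the paper's protocol, and all names and parameters are hypothetical.

import random

# Toy, non-private illustration (not the paper's protocol): the client stores
# "hints", each the XOR of a random subset of database entries. When an entry
# changes, only the hints whose subset contains that index are patched, so the
# update cost is proportional to the number of changes, not the database size.

DB_SIZE, NUM_HINTS, SUBSET_SIZE = 16, 8, 4
rng = random.Random(0)

def xor_subset(db, subset):
    acc = 0
    for i in subset:
        acc ^= db[i]
    return acc

def offline_preprocess(db):
    hints = []
    for _ in range(NUM_HINTS):
        subset = frozenset(rng.sample(range(DB_SIZE), SUBSET_SIZE))
        hints.append([subset, xor_subset(db, subset)])
    return hints

def incremental_update(hints, index, old_value, new_value):
    delta = old_value ^ new_value
    for hint in hints:
        if index in hint[0]:
            hint[1] ^= delta              # patch only the affected hints

db = [rng.randrange(256) for _ in range(DB_SIZE)]
hints = offline_preprocess(db)
old, db[3] = db[3], rng.randrange(256)    # the database changes at index 3
incremental_update(hints, 3, old, db[3])
assert all(parity == xor_subset(db, subset) for subset, parity in hints)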
Anuj Dubey, Afzal Ahmad, Muhammad Adeel Pasha, Rosario Cammarota, Aydin Aysu
ePrint Report
Intellectual Property (IP) thefts of trained machine learning (ML) models through side-channel attacks on inference engines are becoming a major threat. Indeed, several recent works have shown reverse engineering of the model internals using such attacks, but the research on building defenses is largely unexplored. There is a critical need to efficiently and securely transform those defenses from cryptography such as masking to ML frameworks. Existing works, however, revealed that a straightforward adaptation of such defenses either provides partial security or leads to high area overheads. To address those limitations, this work proposes a fundamentally new direction to construct neural networks that are inherently more compatible with masking. The key idea is to use modular arithmetic in neural networks and then efficiently realize masking, in either Boolean or arithmetic fashion, depending on the type of neural network layers. We demonstrate our approach on the edge-computing friendly binarized neural networks (BNN) and show how to modify the training and inference of such a network to work with modular arithmetic without sacrificing accuracy. We then design novel masking gadgets using Domain-Oriented Masking (DOM) to efficiently mask the unique operations of ML such as the activation function and the output layer classification, and we prove their security in the glitch-extended probing model. Finally, we implement fully masked neural networks on an FPGA, quantify that they can achieve a similar latency while reducing the FF and LUT costs over the state-of-the-art protected implementations by 34.2% and 42.6%, respectively, and demonstrate their first-order side-channel security with up to 1M traces.
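As background for why modular arithmetic composes well with masking, the minimal first-order arithmetic-masking sketch below computes a dot product share-wise and recombines only at the end. This is generic arithmetic masking with hypothetical parameters, not the paper's DOM gadgets or its BNN layers.

import random

# First-order arithmetic masking over a small modulus q: linear layers can act
# on each share independently, and the shares are recombined only at the end.
# Generic illustration only; q and all names are hypothetical.

q = 257
rng = random.Random(1)

def share(x):
    r = rng.randrange(q)
    return (r, (x - r) % q)                  # x = s0 + s1 (mod q)

def masked_dot(weights, shared_inputs):
    acc0 = acc1 = 0
    for w, (s0, s1) in zip(weights, shared_inputs):
        acc0 = (acc0 + w * s0) % q           # each share processed separately
        acc1 = (acc1 + w * s1) % q
    return acc0, acc1

weights = [rng.randrange(q) for _ in range(8)]
inputs = [rng.randrange(q) for _ in range(8)]
a0, a1 = masked_dot(weights, [share(x) for x in inputs])
assert (a0 + a1) % q == sum(w * x for w, x in zip(weights, inputs)) % q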
Sebastian Angel, Andrew J. Blumberg, Eleftherios Ioannidis, Jess Woods
ePrint Report
This paper introduces Otti, a general-purpose compiler for SNARKs that provides language-level support for numerical optimization problems. Otti produces efficient arithmetizations of programs that contain optimization problems including linear programming (LP), semi-definite programming (SDP), and a broad class of stochastic gradient descent (SGD) instances. Numerical optimization is a fundamental algorithmic building block: applications include scheduling and resource allocation tasks, approximations to NP-hard problems, and training of neural networks. Otti takes as input arbitrary programs written in a subset of C that contain optimization problems specified via an easy-to-use API. Otti then automatically produces rank-1 constraint satisfiability (R1CS) instances that express a succinct transformation of those programs whose correct execution implies the optimality of the solution to the original optimization problem. Our experimental evaluation on real numerical solver benchmarks used by commercial LP, SDP, and SGD solvers shows that Otti, instantiated with the Spartan proof system, can prove the optimality of solutions in as little as 300 ms, over 4 orders of magnitude faster than existing approaches.
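One standard way to certify LP optimality inside a constraint system, rather than re-running a solver, is to check primal feasibility, dual feasibility, and matching objectives (strong duality). The sketch below illustrates that check over exact rationals; it is an assumption about the general approach, not Otti's actual R1CS encoding.

from fractions import Fraction as F

# Hedged sketch: certify LP optimality by checking primal feasibility, dual
# feasibility, and equal objective values, instead of re-solving the LP.

def certify_lp_optimality(A, b, c, x, y):
    # Primal: max c^T x s.t. A x <= b, x >= 0.   Dual: min b^T y s.t. A^T y >= c, y >= 0.
    m, n = len(A), len(A[0])
    primal_ok = all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m)) \
        and all(xj >= 0 for xj in x)
    dual_ok = all(sum(A[i][j] * y[i] for i in range(m)) >= c[j] for j in range(n)) \
        and all(yi >= 0 for yi in y)
    same_value = sum(c[j] * x[j] for j in range(n)) == sum(b[i] * y[i] for i in range(m))
    return primal_ok and dual_ok and same_value

# max x0 + x1 subject to x0 <= 1, x1 <= 1: optimum (1, 1) with dual certificate (1, 1).
A = [[F(1), F(0)], [F(0), F(1)]]
assert certify_lp_optimality(A, [F(1), F(1)], [F(1), F(1)], [F(1), F(1)], [F(1), F(1)])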
ZhaoCun Zhou, DengGuo Feng, Bin Zhang
ePrint Report
Fast correlation attacks, pioneered by Meier and Staffelbach, are an important cryptanalysis tool for LFSR-based stream ciphers. They exploit the correlation between the LFSR state and the key stream and aim to recover the initial state of the LFSR via a decoding algorithm. In this paper, we develop a vectorial decoding algorithm for fast correlation attacks, which is a natural generalization of the original binary approach. Our approach benefits from the contributions of all correlations in a subspace. We propose two novel criteria to improve the iterative decoding algorithm. We also give some cryptographic properties of the new FCA that allow us to estimate efficiency and complexity bounds. Furthermore, we apply this technique to the well-analyzed stream cipher Grain-128a. Based on a hypothesis, an interesting result for its security bound is deduced from the perspective of iterative decoding. Our analysis reveals a potential vulnerability for LFSRs over generic linear groups, and also for nonlinear functions with high-SEI multidimensional linear approximations such as Grain-128a.
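For readers unfamiliar with the setting, the toy script below shows the kind of correlation a fast correlation attack exploits: the keystream agrees with the underlying LFSR output with probability 1/2 + eps, so the keystream can be treated as a noisy codeword of the LFSR code and decoded. The taps, lengths and bias are hypothetical, and the paper's vectorial decoding itself is not reproduced here.

import random

# Toy illustration of the correlation exploited by fast correlation attacks:
# keystream bit = LFSR output bit with probability 1/2 + EPS, so decoding the
# keystream as a noisy codeword recovers the LFSR initial state.

rng = random.Random(2)
TAPS = (0, 2, 3, 5)          # feedback taps of a toy 6-bit LFSR (hypothetical)
N, EPS = 20000, 0.15

def lfsr_stream(state, n):
    out = []
    for _ in range(n):
        out.append(state[0])
        fb = 0
        for t in TAPS:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

lfsr_bits = lfsr_stream([1, 0, 1, 1, 0, 1], N)
keystream = [b if rng.random() < 0.5 + EPS else b ^ 1 for b in lfsr_bits]
agreement = sum(a == b for a, b in zip(lfsr_bits, keystream)) / N
print("empirical correlation:", agreement - 0.5)   # close to EPS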
Daniel Matyas Perendi , Prosanta Gope
ePrint Report
The infamous Enigma machine was believed to be unbreakable before 1932 simply because of its variable settings and incredible complexity. However, people realised that there was a known pattern in the German messages, which significantly reduced the number of possible settings and made the code breaker's job easier. Modern cryptanalysis techniques provide a far more powerful way to break the Enigma cipher using letter frequencies and a concept called the index of coincidence. However, this technique only works well for the English language (using the characters of the English alphabet). What if we encountered an Enigma machine designed for the Hungarian language, where the alphabet consists of more than 26 characters? Experiments on the Enigma cipher with different languages have not been done to date, hence in this article we show the language's impact on both the machine and the cipher. Not only Hungarian but, in fact, any language using more characters than English could have a significant effect on the Enigma machine and its complexity, if such a machine existed. A broad comparative analysis proves that the size of the alphabet has a significant impact on the complexity and therefore on the cryptanalysis.
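The index of coincidence mentioned above is easy to compute; the snippet below gives the standard statistic and notes how its random-text baseline of roughly 1/k shrinks as the alphabet size k grows (for instance, for a hypothetical Hungarian Enigma), which is the effect the comparative analysis concerns.

from collections import Counter

# Index of coincidence for a text of length N over an alphabet of size k:
# IC = sum_i f_i (f_i - 1) / (N (N - 1)). Uniformly random text gives IC close
# to 1/k (about 0.0385 for 26 letters), so a larger alphabet lowers the
# baseline the cryptanalyst compares against.

def index_of_coincidence(text):
    counts = Counter(text)
    n = len(text)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

print(index_of_coincidence("ATTACKATDAWN"))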
Arka Rai Choudhuri, Michele Ciampi, Vipul Goyal, Abhishek Jain, Rafail Ostrovsky
ePrint Report
Oblivious transfer (OT) is a foundational primitive within cryptography owing to its connection with secure computation. One of the oldest constructions of oblivious transfer was from certified trapdoor permutations (TDPs). However, several decades later, we do not know if a similar construction can be obtained from TDPs in general.

In this work, we study the problem of constructing round optimal oblivious transfer from trapdoor permutations. In particular, we obtain the following new results (in the plain model) relying on TDPs in a black-box manner:

1) A three-round oblivious transfer protocol that guarantees indistinguishability-security against malicious senders (and semi-honest receivers).

2) A four-round oblivious transfer protocol secure against malicious adversaries with black-box simulation-based security.

By combining our second result with an already known compiler, we obtain the first round-optimal 2-party computation protocol that relies in a black-box way on TDPs. A key technical tool underlying our results is a new primitive we call dual witness encryption (DWE) that may be of independent interest.
Gustavo Banegas, Thomas Debris-Alazard, Milena Nedeljković, Benjamin Smith
ePrint Report
This work presents the first full implementation of Wave, a postquantum code-based signature scheme. We define Wavelet, a concrete Wave scheme at the 128-bit classical security level (or NIST postquantum security Level 1) equipped with a fast verification algorithm targeting embedded devices. Wavelet offers 930-byte signatures, with a public key of 3161 kB. We include implementation details using AVX instructions and on the ARM Cortex-M4, including a solution to deal with Wavelet’s large public keys, which do not fit in the SRAM of a typical embedded device. Our verification algorithm is ≈ 4.65× faster than the original, and verifies in 1 087 538 cycles using AVX instructions, or 13 172 ticks on an ARM Cortex-M4.
Chinmoy Biswas, Ratna Dutta
ePrint Report
We consider multi-key fully homomorphic encryption (multi-key FHE), the richest variant of fully homomorphic encryption (FHE), which allows complex computation on encrypted data under different keys. Since its introduction by Lopez-Alt, Tromer and Vaikuntanathan in 2012, numerous proposals have been presented, yielding various improvements in security and efficiency. However, most of these multi-key FHE schemes encrypt a single-bit message. Constructing a multi-key FHE scheme that encrypts multi-bit messages has been notoriously difficult without losing efficiency for homomorphic evaluation and ciphertext extension under additional keys. In this work, we study multi-key FHE that can encrypt multi-bit messages. Motivated by the goal of improving efficiency, we propose a new construction with non-interactive decryption and security against chosen-plaintext attack (IND-CPA) from the standard learning with errors (LWE) assumption. We consider a binary matrix as plaintext instead of a single bit. Our approach supports efficient homomorphic matrix addition and multiplication. Another interesting feature is that our technique of extending a ciphertext under additional keys yields a significant reduction in computational overhead. More interestingly, when contrasted with previous multi-key FHE schemes for multi-bit messages, our candidates exhibit favorable results in the length of the secret key, public key and ciphertext while preserving non-interactive decryption.

Keywords: lattice based cryptosystem, multi-key fully homomorphic encryption, learning with errors, multi-bit messages
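For background on how matrix plaintexts can support homomorphic addition and multiplication, GSW-style matrix FHE (in the spirit of Hiromasa-Abe-Okamoto) maintains an invariant of the following shape; this is a sketch of that general approach and an assumption on our part, not a statement of this paper's exact construction.

% Background sketch: a GSW-style matrix-FHE invariant (assumed typical approach).
% A ciphertext $C$ encrypting a binary matrix $M$ under secret key $S$ satisfies
\[ S C = M S G + E \pmod{q}, \]
% with $E$ small and $G$ the gadget matrix. Adding ciphertexts adds plaintexts:
\[ S (C_1 + C_2) = (M_1 + M_2) S G + (E_1 + E_2) \pmod{q}, \]
% and $C_1 \cdot G^{-1}(C_2)$ encrypts the product $M_1 M_2$ with small noise growth:
\[ S C_1 G^{-1}(C_2) = M_1 S C_2 + E_1 G^{-1}(C_2) = M_1 M_2 S G + M_1 E_2 + E_1 G^{-1}(C_2) \pmod{q}. \]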
Yi Liu, Qi Wang, Siu-Ming Yiu
ePrint Report
Extended permutation (EP) is a generalized notion of the standard permutation. Unlike the one-to-one correspondence mapping of the standard permutation, an EP allows elements to be replicated or omitted as many times as needed during the mapping. EP is useful in the area of secure multi-party computation (MPC), especially for the problem of private function evaluation (PFE). As a special class of MPC problems, PFE focuses on the scenario where a party holds a private circuit $C$ while all other parties hold their private inputs $x_1, \ldots, x_n$, respectively. The goal of PFE protocols is to securely compute the evaluation result $C(x_1, \ldots, x_n)$ while hiding any information beyond $C(x_1, \ldots, x_n)$. EP is introduced here to describe the topological structure of the circuit $C$, and it is further used to support evaluating $C$ privately.

For an actively secure PFE protocol, it is crucial to guarantee that the private circuit provider cannot deviate from the protocol to learn more information. Hence, we need to ensure that the private circuit provider correctly performs an EP. This calls for a so-called zero-knowledge argument of encrypted extended permutation protocol. In this paper, we provide an improvement of this protocol. Our new protocol can be instantiated to be non-interactive, whereas the previous protocol is inherently interactive. Meanwhile, compared with the previous protocol, our protocol is significantly (e.g., more than $3.4\times$) faster, and its communication cost is only around $24\%$ of that of the previous one.
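To make the extended-permutation notion from the first paragraph concrete, the small sketch below represents an EP as a list of source indices, one per output position, so elements can be replicated or omitted. The names are illustrative only, and nothing here touches the encrypted or zero-knowledge part of the protocol.

# An extended permutation (EP) as a list of source indices, one per output
# position, so elements may be replicated or omitted during the mapping.

def apply_extended_permutation(ep, elements):
    return [elements[i] for i in ep]

inputs = ['a', 'b', 'c', 'd']
ep = [0, 2, 2, 3, 0]                                  # 'b' omitted, 'a' and 'c' replicated
print(apply_extended_permutation(ep, inputs))         # ['a', 'c', 'c', 'd', 'a']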
Long Meng, Liqun Chen
ePrint Report
Time-stamping services are used to prove that a data item existed at a given point in time. This proof is represented by a time-stamp token that is created by a time-stamping authority. ISO/IEC 18014 specifies time-stamping services and requires them to hold the following two properties: (1) The data being time-stamped is not disclosed to the time-stamping authority; hash values of the data are provided to the authority instead. (2) A time-stamp token can be renewed, so that the validity duration of a time-stamp token is not restricted by the lifetimes of the underlying algorithms or policies. In this paper, we review this standard and discover several issues: due to inconsistent wording or missing information, a time-stamping service following the standard specification may not be able to achieve these designed properties. We provide a solution to each issue.
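The two properties discussed above can be pictured with the hypothetical flow below: the requester submits only a hash of its data, and a token is later renewed by time-stamping the old token together with a fresh hash under a newer algorithm. The signing function is a plain placeholder, and this is a sketch of the general mechanism rather than the ISO/IEC 18014 message formats.

import hashlib, time

# Hypothetical sketch of a hash-and-sign time-stamping flow (not the ISO/IEC
# 18014 formats). Property (1): only a hash of the data reaches the authority.
# Property (2): a token is renewed by time-stamping the old token plus a fresh
# hash computed under a newer algorithm.

def request_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()      # the data never leaves the client

def issue_token(digest: str, sign) -> dict:
    token = {"hash": digest, "time": time.time()}
    token["signature"] = sign(repr(sorted(token.items())))
    return token

def renew_token(old_token: dict, data: bytes, sign) -> dict:
    fresh = hashlib.sha3_256(data + repr(old_token).encode()).hexdigest()
    return issue_token(fresh, sign)              # validity no longer tied to the old algorithm

placeholder_sign = lambda msg: hashlib.sha256(("key" + msg).encode()).hexdigest()
token = issue_token(request_digest(b"contract"), placeholder_sign)
renewed = renew_token(token, b"contract", placeholder_sign)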

24 October 2021

New Jersey Institute of Technology
Job Posting
The Ying Wu College of Computing at the New Jersey Institute of Technology invites applications for a senior faculty member to serve as the Director of the Institute for Cybersecurity. Candidates must have a PhD in computer science or a related discipline with a demonstrated track record of scholarly accomplishments in security/cryptography commensurate with the appointment at the rank of Associate Professor or above (Full, Distinguished).

The successful candidate will hold a faculty appointment in the department of Computer Science and is expected to lead the creation of the Institute for Cybersecurity, which builds on top of existing research and educational strengths in the area of cybersecurity and will span multiple departments across NJIT. As the Director of the Institute for Cybersecurity, the successful candidate must attract funding and develop collaborative relationships with industry.

NJIT is designated a Carnegie R1 Research University, with $161M in research expenditures in FY20. The Computer Science Department is ranked 77th nationally by csrankings.org and has 29 tenured/tenure-track faculty, including eight NSF CAREER awardees and one DARPA Young Investigator recipient, with research expenditures of $12M in FY20. The department has strong connections with local industry and works closely with many companies through student Capstone projects, internships, co-ops and joint R&D projects.

To formally apply for the position, please submit your application (including CV and Cover letter) to NJIT’s career site: https://njit.csod.com/ux/ats/careersite/1/home/requisition/3409?c=njit
You must also submit additional candidate materials online at https://academicjobsonline.org/ajo/jobs/19436
The additional candidate materials include a cover letter, CV, Research Statement, Teaching Statement, and the contact information for at least three references. Applications received by December 31, 2021 will receive full consideration; however, applications will be reviewed until all positions are filled. Contact address for inquiries: cs-faculty-search@njit.edu.

Closing date for applications:

Contact: cs-faculty-search@njit.edu

More information: https://njit.csod.com/ux/ats/careersite/1/home/requisition/3409?c=njit

New Jersey Institute of Technology
Job Posting
The Computer Science Department at the New Jersey Institute of Technology (NJIT) invites applications for tenure-track faculty positions starting in Fall 2022. While our area of special interest is security/cryptography, exceptional candidates will be considered in any area of mainstream computer science. While we are interested in hiring at the rank of Assistant Professor, exceptional candidates at higher ranks will also be considered.

NJIT is designated a Carnegie R1 Research University, with $161M in research expenditures in FY20. The Computer Science Department is ranked 77th nationally by csrankings.org and has 29 tenured/tenure-track faculty, including eight NSF CAREER awardees and one DARPA Young Investigator awardee, with research expenditures of $12M in FY20. The Computer Science Department enrolls approximately 1,900 students at all levels across eleven programs of study and takes part, alongside the Department of Informatics and the Department of Data Science, in the Ying Wu College of Computing. The College has an enrollment of more than 3,300 students in computing disciplines and graduates more than 900 computing professionals every year; as such, it is the largest purveyor of computing talent in the tri-state (NY, NJ, CT) area.

To formally apply for the position, please submit your application (including CV and Cover letter) to NJIT’s career site: https://njit.csod.com/ux/ats/careersite/1/home/requisition/3343?c=njit
You must also submit additional candidate materials online at https://academicjobsonline.org/ajo/jobs/19180
The additional candidate materials include a cover letter, CV, Research Statement, Teaching Statement, and the contact information for at least three references.

Applications received by December 31, 2021 will receive full consideration; however, applications will be reviewed until all positions are filled. Contact address for inquiries: cs-faculty-search@njit.edu.

Closing date for applications:

Contact: cs-faculty-search@njit.edu

More information: https://njit.csod.com/ux/ats/careersite/1/home/requisition/3343?c=njit

5ire.org
Job Posting

5ireChain is a fifth-generation blockchain that aims to bring a paradigm shift from a for-profit to a for-benefit economy. 5ire's mission is to accelerate the implementation of the United Nations 2030 Agenda for Sustainable Development.

“We’re building 5ireChain to eliminate intermediaries and bring all the impact makers onto a level playing field where they can use the shared language of the UN SDGs. We want businesses to act as a force for good and help move the world from a for-profit paradigm to a for-benefit paradigm, facilitating the transition from the fourth industrial revolution to the fifth industrial revolution and building for-benefit incentive and reward distribution mechanisms.”

We are currently in a research phase, working with models and simulations. In the near future, we will start implementing the research. You will have the opportunity to participate in developing, and improving, the state of the art of blockchain technologies, as well as turning them into a reality. You’ll be working directly with the existing research and development team.

Areas of interest:

Complexity theory, approximation algorithms, algorithmic game theory, mechanism design, computational social choice, crypto-economics, and governance. Consensus protocols, finality gadgets, inter-operability across blockchains, zero-knowledge proofs.

Key Responsibilities:

Designing and analyzing incentive mechanisms (rewards, slashings, handling of reports) of decentralized protocols.

Primarily, ensuring that solutions are sound and diving deeper into their formal definition.

What will help you get there:

Familiarity with the application of formal-methods techniques (provable security, security proofs, …) would be a plus.

Publications in Consensus engines, system security, applied cryptography, distributed systems, or privacy are highly desirable.

Experience in multi-agent decision-making mechanisms such as committee elections, referenda, auctions, and general on-chain governance is not required but would be a significant advantage.

Closing date for applications:

Contact:

Zakaria Salek

zakaria@5ire.org

More information: https://dotjobs.net/jobs/716f807d-ffdf-4558-996e-21fbd50f6b5d_consensus-distributed-systems-researcher-architect

Daniel J. Bernstein, Tanja Lange
ePrint Report
Spherical models of lattices are standard tools in the study of lattice-based cryptography, except for variations in terminology and minor details. Spherical models are used to predict the lengths of short vectors in lattices and the effectiveness of reduction modulo those short vectors. These predictions are consistent with an asymptotic theorem by Gauss, theorems on short vectors in almost all lattices from the invariant distribution, and a variety of experiments in the literature.

$S$-unit attacks are a rapidly developing line of attacks against structured lattice problems. These include the quantum polynomial-time attacks that broke the cyclotomic case of Gentry's original STOC 2009 FHE system under minor assumptions, and newer attacks that have broken through various barriers previously claimed for this line of work.

$S$-unit attacks take advantage of auxiliary lattices, standard number-theoretic lattices called $S$-unit lattices. Spherical models have recently been applied to these auxiliary lattices to deduce core limits on the power of $S$-unit attacks.

This paper shows that these models underestimate the power of $S$-unit attacks: $S$-unit lattices, like the lattice $Z^d$, have much shorter vectors and reduce much more effectively than predicted by these models. The attacker can freely choose $S$ to make the gap as large as desired, breaking through the core limits previously asserted for $S$-unit attacks.
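For reference, and without claiming this is the exact model used in the works being critiqued, spherical-model predictions are typically of Gaussian-heuristic shape: for a "random" $d$-dimensional lattice $L$, the shortest nonzero vector is predicted to have length about

\[ \lambda_1(L) \approx \sqrt{\frac{d}{2\pi e}} \, (\det L)^{1/d}. \]

The abstract's point is that $S$-unit lattices, like $Z^d$, fall well below such predictions.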
Omri Shmueli
ePrint Report
Quantum money is a central primitive in quantum cryptography that enables a bank to distribute unclonable quantum banknotes to parties in the network, called wallets, where the banknotes serve as a medium of exchange between wallets. While quantum money suggests a theoretical solution to some of the fundamental problems in currency systems, it still requires a strong model to be implemented: quantum computation and a quantum communication infrastructure. A central open question in this context is whether we can have a quantum money scheme that uses "minimal quantumness", namely, local quantum computation and only classical communication.

Public-key semi-quantum money (Radian and Sattath, AFT 2019) is a quantum money scheme where the algorithm of the bank is completely classical, and quantum banknotes are publicly verifiable on any quantum computer. In particular, such a scheme relies on local quantum computation and only classical communication. The only known construction of public-key semi-quantum money is based on quantum lightning (Zhandry, EUROCRYPT 2019), which in turn is based on a computational assumption that is now known to be broken.

In this work, we construct public-key semi-quantum money based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning With Errors problem. The technical centerpiece of our construction is a new 3-message protocol, where a classical computer can delegate to a quantum computer the generation of a quantum state that is both unclonable and publicly verifiable.
Théodore Conrad-Frenkiel, Rémi Géraud-Stewart, David Naccache
ePrint Report
This paper utilizes the techniques used by Regev \cite{DBLP:journals/jacm/Regev09} and Lyubashevsky, Peikert \& Regev in the security reduction of LWE and its algebraic variants \cite{DBLP:conf/eurocrypt/LyubashevskyPR13} to exhibit a quantum reduction from the decryption of NTRU to leaking information about the secret key. Since this reduction requires decryption with the same key one wishes to attack, it renders NTRU vulnerable to the same type of attacks that affect the Rabin--Williams scheme \cite{DBLP:conf/eurocrypt/Bernstein08} -- albeit requiring a quantum decryption query.

A common practice thwarting such attacks consists in applying the Fujisaki-Okamoto (FO, \cite{DBLP:conf/pkc/FujisakiO99}) transformation before encrypting. However, not all NTRU protocols enforce this protection. In particular the DPKE version of NTRU \cite{DBLP:conf/eurocrypt/SaitoXY18} is susceptible to such an attack.
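For context, the FO pattern referred to above derives the encryption randomness from the message and has decryption re-encrypt and compare. The sketch below shows that pattern over a toy symmetric placeholder (the same key plays both roles) with hypothetical names; it is in no way NTRU or a DPKE instantiation.

import hashlib

# Schematic of the Fujisaki-Okamoto pattern: derive the encryption randomness
# from the message itself, and have decryption re-encrypt and compare,
# rejecting mauled ciphertexts. The underlying "PKE" is a toy placeholder.

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(key, m, r):                 # deterministic once r is fixed
    return (r, xor(m, H(key, r)))

def toy_decrypt(key, ct):
    r, c = ct
    return xor(c, H(key, r))

def fo_encrypt(key, m):
    return toy_encrypt(key, m, H(b"coins", m))   # randomness derived from m

def fo_decrypt(key, ct):
    m = toy_decrypt(key, ct)
    if toy_encrypt(key, m, H(b"coins", m)) != ct:
        return None                         # reject ciphertexts that fail re-encryption
    return m

key = H(b"shared key")
ct = fo_encrypt(key, b"a short message")
assert fo_decrypt(key, ct) == b"a short message"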
Andrea Caforio, Daniel Collins, Ognjen Glamocanin, Subhadeep Banik
ePrint Report
Threshold Implementations have become a popular generic technique to construct circuits resilient against power analysis attacks. In this paper, we look to devise efficient threshold circuits for the lightweight block cipher family SKINNY. The only threshold circuits for this family are those proposed by its designers, who decomposed the 8-bit S-box into four quadratic S-boxes and constructed a 3-share byte-serial threshold circuit that executes the substitution layer over four cycles. In particular, we revisit the algebraic structure of the S-box and prove that it is possible to decompose it into (a) three quadratic S-boxes and (b) two cubic S-boxes. Such decompositions allow us to construct threshold circuits that require three shares and execute each round function in three cycles instead of four, and similarly circuits that use four shares requiring two cycles per round. Our constructions significantly reduce latency and energy consumption per encryption operation. Notably, to validate our designs, we synthesize our circuits on standard CMOS cell libraries to evaluate performance, and we conduct leakage detection via statistical tests on power traces on FPGA platforms to assess security.
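As background on the share counts discussed above, the snippet below checks the textbook 3-share threshold-implementation sharing of a single AND gate, whose non-completeness (each output share omits one input-share index) is what makes three shares suffice for quadratic components. This is the classic generic gadget, not the paper's SKINNY-specific decomposition.

import random

# Textbook 3-share threshold-implementation (TI) sharing of one AND gate:
# correctness is checked below; each output share omits one input-share index.

rng = random.Random(3)

def share3(x):
    a, b = rng.getrandbits(1), rng.getrandbits(1)
    return (a, b, x ^ a ^ b)

def ti_and(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)   # independent of share 1
    c2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)   # independent of share 2
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)   # independent of share 3
    return (c1, c2, c3)

for x in (0, 1):
    for y in (0, 1):
        c = ti_and(share3(x), share3(y))
        assert c[0] ^ c[1] ^ c[2] == x & y   # the shares recombine to x AND y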
Yang Wang, Yanmin Zhao, Mingqiang Wang
ePrint Report
Proxy re-encryption (PRE) schemes, which nicely solve the problem of delegating decryption rights, enable a semi-trusted proxy to transform a ciphertext encrypted under one key into a ciphertext of the same message under another arbitrary key. For a long time, the semantic security of PREs was defined quite similarly to that of public key encryption (PKE) schemes. Cohen first pointed out the insufficiency of security under chosen-plaintext attacks (CPA) for PREs in PKC 2019, and proposed a strictly stronger security notion, named security under honest re-encryption attacks (HRA), for PREs. Surprisingly, only a few PREs satisfy the stronger HRA security, and almost all of them to date are pairing-based. To the best of our knowledge, we present in this paper the first detailed construction of HRA-secure PREs based on standard LWE problems with comparably small and polynomially-bounded parameters. Combining known reductions, the HRA security of our PREs can also be guaranteed by a worst-case basic lattice problem (e.g. SIVP$_{\gamma}$). Our single-hop PRE schemes can easily be extended to the multi-hop case (as long as the maximum hop $L=O(1)$). Meanwhile, our single-hop PRE schemes are also key-private, which means that the implicit identities of a re-encryption key will not be revealed even in the case of a proxy colluding with some corrupted users. Some discussions about the key-privacy of multi-hop PREs are also proposed, which indicate that several constructions of multi-hop PREs do not satisfy their key-privacy definitions.
Matteo Campanelli, Bernardo David, Hamidreza Khoshakhlagh, Anders Konring, Jesper Buus Nielsen
ePrint Report
A number of recent works have constructed cryptographic protocols with flavors of adaptive security by having a randomly-chosen anonymous committee run at each round. Since most of these protocols are stateful, transferring secret states from past committees to future, but still unknown, committees is a crucial challenge. Previous works have tackled this problem with approaches tailor-made for their specific setting, which mostly rely on using a blockchain to orchestrate auxiliary committees that aid in the state hand-over process. In this work, we look at this challenge as an important problem in its own right and initiate the study of Encryption to the Future (EtF) as a cryptographic primitive. First, we define a notion of a non-interactive EtF scheme where time is determined with respect to an underlying blockchain and a lottery selects parties to receive a secret message at some point in the future. While this notion seems overly restrictive, we establish two important facts: 1. if used to encrypt towards parties selected in the "far future", EtF implies witness encryption for NP over a blockchain; 2. if used to encrypt only towards parties selected in the "near future", EtF is not only sufficient for transferring state among committees as required by previous works but also captures previous tailor-made solutions. Inspired by these results, we provide a novel construction of EtF based on witness encryption over commitments (cWE), which we instantiate from a number of standard assumptions via a construction based on generic cryptographic primitives. Finally, we show how to lift "near future" EtF to "far future" EtF with a protocol based on an auxiliary committee whose communication complexity is independent of the length of the plaintext messages being sent to the future.
Jan-Pieter D'Anvers, Daniel Heinz, Peter Pessl, Michiel van Beirendonck, Ingrid Verbauwhede
ePrint Report
Checking the equality of two arrays is a crucial building block of the Fujisaki-Okamoto transformation, and as such it is used in several post-quantum key encapsulation mechanisms including Kyber and Saber. While this comparison operation is easy to perform in a black-box setting, it is hard to protect efficiently against side-channel attacks. For instance, the hash-based method by Oder et al. is limited to first-order masking, a higher-order method by Bache et al. was shown to be flawed, and a very recent higher-order technique by Bos et al. suffers in runtime. In this paper, we first demonstrate that the hash-based approach, and likely many similar first-order techniques, succumb to a relatively simple side-channel collision attack. We can successfully recover a Kyber512 key using just 6000 traces. While this does not break the security claims, it does show the need for efficient higher-order methods. We then present a new higher-order masked comparison algorithm based on the (insecure) higher-order method of Bache et al. Our new method is 4.2x, resp. 7.5x, faster than the method of Bos et al. for a 2nd-, resp. 3rd-, order masking on the ARM Cortex-M4, and unlike the method of Bache et al., the new technique takes ciphertext compression into account. We prove correctness, security, and masking security in detail and provide performance numbers for 2nd- and 3rd-order implementations. Finally, we verify the side-channel security of our implementation using the test vector leakage assessment (TVLA) methodology.