IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
21 June 2020
Tassos Dimitriou
Reputation systems constitute one of the few workable mechanisms for distributed applications in which users can be made accountable for their actions. By collecting user experiences in reputation profiles, participants are encouraged to interact more with well-behaving peers, which in turn motivates better online behavior.
In this work, we develop a privacy-preserving reputation scheme for collaborative systems such as P2P networks in which peers can represent themselves with different pseudonyms when interacting with others. All these pseudonyms, however, are bound to the same reputation token. This allows honest peers to maintain their good record even when switching to a new pseudonym, while preventing malicious peers from making a fresh start. Apart from an initial offline setup phase, which is only needed to ensure reputation soundness but not privacy, our system is truly decentralized. Using an append-only distributed ledger such as Bitcoin's blockchain, we show how participants can make anonymous yet verifiable assertions about their own reputation. Thus the system maintains soundness, peer-pseudonym unlinkability, as well as unlinkability among pseudonyms of the same peer. We formally prove these properties, discuss ways to eliminate the setup phase, and evaluate the efficiency of the various operations.
Rémi Clarisse, Sylvain Duquesne, Olivier Sanders
Pairings are a powerful tool to build advanced cryptographic schemes. The most efficient way to instantiate a pairing scheme is through Pairing-Friendly Elliptic Curves.
Because a randomly picked elliptic curve will not support an efficient pairing (the embedding degree will usually be too large to make any computation practical), a pairing-friendly curve has to be carefully constructed. This has led to famous curves, e.g. Barreto-Naehrig curves.
However, the computation of the discrete logarithm problem on the finite-field side has received much interest and its complexity has recently decreased. Hence the need to propose new curves has emerged.
In this work, we give one new curve that is specifically tailored to be fast over the first pairing-group, which is well suited for several cryptographic schemes, such as group signatures and their variants (EPID, anonymous attestation, etc) or accumulators.
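To make the embedding degree mentioned above concrete: for a curve over a field of size q with a pairing subgroup of prime order r, the embedding degree is the smallest k such that r divides q^k - 1, i.e., the multiplicative order of q modulo r. A minimal illustrative sketch (not from the paper):

```python
def embedding_degree(q, r):
    """Smallest k with r | q**k - 1, i.e. the order of q modulo r
    (assumes r is prime and does not divide q)."""
    k, acc = 1, q % r
    while acc != 1:
        acc = acc * q % r
        k += 1
    return k

# For a randomly chosen curve this order is typically huge (close to r),
# which makes pairing computation impractical; pairing-friendly
# constructions force it to be small (e.g. 12 for Barreto-Naehrig curves).
```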
Susan Hohenberger, Venkata Koppula, Brent Waters
We provide a construction of chosen ciphertext secure public-key encryption from (injective) trapdoor functions. Our construction is black box and assumes no special properties (e.g. ``lossy'', ``correlated product secure'') of the trapdoor function.
Srinath Setty, Sebastian Angel, Jonathan Lee
This article describes recent progress in realizing verifiable state machines, a primitive that enables untrusted services to provide cryptographic proofs that they operate correctly. Applications of this primitive range from proving the correct operation of distributed and concurrent cloud services to reducing blockchain transaction costs by leveraging inexpensive off-chain computation without trust.
Gabriel Zaid, Lilian Bossuet, Amaury Habrard, Alexandre Venelli
The use of deep learning in side-channel analysis has become increasingly prominent. In particular, Convolutional Neural Networks (CNN) are very efficient tools for extracting secret information from side-channel traces. Previous work on the use of CNNs in side-channel analysis has mostly presented practical results. Zaid et al. proposed a theoretical methodology to better understand the convolutional part of CNNs and how to construct an efficient CNN in the side-channel context [ZBHV19]. The proposal of Zaid et al. has recently been questioned by [WAGP20]. However, this revisit is based on wrong assumptions and misinterpretations. Hence, many of the claims of [WAGP20] regarding [ZBHV19] are unfounded. In this paper, we clear up the potential misunderstandings introduced by [WAGP20] and explain the contributions of [ZBHV19] more thoroughly.
Shan Chen, Manuel Barbosa, Alexandra Boldyreva, Bogdan Warinschi
We carry out the first provable security analysis of the new FIDO2 protocols, the FIDO Alliance's promising proposal for a passwordless user authentication standard. Our analysis covers the core components of FIDO2: the new Client-to-Authenticator Protocol (CTAP2) and the Web Authentication (WebAuthn) specification.
Our analysis is modular. For CTAP2 and WebAuthn, in turn, we propose appropriate security models that aim to capture their intended security goals and use the models to analyze security. We identify a series of shortcomings and propose stronger protocols designed to withstand stronger yet realistic adversaries. Next, we prove the security guarantees FIDO2 provides based on the security of its components.
We expect that our models and provable security results will help clarify the security guarantees of the FIDO2 protocols. In addition, our proposed constructions should pave the way towards the design and deployment of more secure passwordless user authentication protocols.
Samuel Jaques, Hart Montgomery, Arnab Roy
Time-release cryptography requires problems that take a long time to solve and take just as long even with significant computational resources. While time-release cryptography originated with the seminal paper of Rivest, Shamir and Wagner ('96), it has gained special visibility recently due to new time-release primitives, like verifiable delay functions (VDFs) and sequential proofs of work, and their novel blockchain applications. In spite of this recent progress, security definitions remain inconsistent and fragile, and foundational treatment of these primitives is scarce. Relationships between the various time-release primitives are elusive, with few connections to standard cryptographic assumptions.
We systematically address these drawbacks. We define formal notions of sequential functions, the building blocks of time-release cryptography. The new definitions are robust against change of machine model, making them more amenable to complexity-theoretic treatment. We demonstrate the equivalence of various types of sequential functions under standard cryptographic assumptions. The time-release primitives in the literature (such as those defined by Bitansky et al. (ITCS '16)) imply that these sequential functions exist, and the converse holds as well.
However, showing that a given construction is a sequential function is a hard circuit lower bound problem. To our knowledge, no results show that standard cryptographic assumptions imply any sequentiality. For example, repeated squaring over RSA groups is assumed to be sequential, but nothing connects this conjecture to standard hardness assumptions. To circumvent this, we construct a function that we prove is sequential if there exists any sequential function, without needing any specific knowledge of this hypothetical function. Our techniques use universal circuits and fully homomorphic encryption and generalize some of the elegant techniques of the recent work on lattice NIZKs (Canetti et al., STOC '19).
Using our reductions and sequential function constructions, we build VDFs and sequential proofs of work from fully homomorphic encryption, incremental verifiable computation, and the existence of a sequential function. Though our constructions are theoretical in nature and not competitive with existing techniques, they are built from much weaker assumptions than known constructions.
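The repeated-squaring conjecture referenced above underlies the classic Rivest-Shamir-Wagner time-lock puzzle: anyone can compute 2^(2^t) mod n only via t sequential squarings, while the puzzle creator, who knows the factorization of n, can shortcut the computation by reducing the exponent modulo phi(n). A toy sketch with deliberately tiny, insecure parameters (illustrative only, not from the paper):

```python
def slow_solve(n, t):
    """Solver's path: t sequential modular squarings, computing 2**(2**t) mod n."""
    x = 2
    for _ in range(t):
        x = x * x % n
    return x

def fast_create(p, q, t):
    """Creator's trapdoor: with phi(n) known, first reduce the exponent
    2**t modulo phi(n), then do a single modular exponentiation."""
    n = p * q
    phi = (p - 1) * (q - 1)
    return n, pow(2, pow(2, t, phi), n)

# The gap between t sequential squarings and one shortcut exponentiation
# is exactly the conjectured sequentiality that nothing currently ties
# to standard hardness assumptions.
```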
Arka Rai Choudhuri, Aarushi Goel, Matthew Green, Abhishek Jain, Gabriel Kaptchuk
Existing approaches to secure multiparty computation (MPC) require all the participants to commit to the entire duration of the protocol. As interest in MPC continues to grow, it is inevitable that there will be a desire to use it to evaluate increasingly complex functionalities on massive datasets, resulting in computations spanning several hours or days. Such scenarios call for a dynamic participation model for MPC where participants have the flexibility to go offline as needed and (re)join when they have available computational resources. Such a model would also democratize access to privacy-preserving computation by facilitating an ``MPC-as-a-service'' paradigm --- the deployment of MPC in volunteer-operated networks that perform computation on behalf of clients.
In this work, we initiate the study of ``fluid MPC'', where parties can dynamically join and leave the computation. The minimum commitment required from each participant is referred to as ``fluidity'', measured by the number of communication rounds for which it must stay online. Our contributions are threefold:
1) We provide a formal treatment of fluid MPC, exploring various possible modeling choices.
2) We construct information-theoretic fluid MPC protocols in the honest-majority setting. Our protocols achieve ``maximal fluidity'', meaning that a party can exit the computation after receiving and sending messages in one round.
3) We implement our protocol and test it in multiple network settings.
Thomas Attema, Ronald Cramer, Serge Fehr
In a zero-knowledge (ZK) proof of partial knowledge, introduced by Cramer, Damgard and Schoenmakers (CRYPTO 1994), a prover claiming knowledge of witnesses for some $k$-subset of $n$ given public statements can convince the verifier without revealing which $k$-subset.
The accompanying dedicated solution based on secret sharing achieves linear communication complexity for general $k,n$ and for many natural classes of statements.
Especially the case $k=1$ and $n=2$ (``one-out-of-two'') has seen myriad applications during the last decades, e.g., in electronic voting, ring signatures, and confidential transactions systems in general.
In this paper we focus on the discrete logarithm (DL) setting; the prover's claim pertains to knowledge of discrete logarithms of $k$-out-of-$n$ given elements from a group supporting DL-based cryptography. Groth and Kohlweiss (EUROCRYPT 2015) have shown how to solve the special case $k=1$ and general $n$ with logarithmic communication instead of linear. However, their method, while novel, takes explicit advantage of $k=1$ and does not generalize to $k>1$ without losing all advantage over prior work.
Our contributions are as follows. We show a solution with logarithmic communication for general $k,n$ instead of just $k=1$ and general $n$ from prior work. Applying the Fiat-Shamir transform renders a non-interactive logarithmic-size zero-knowledge proof. Our approach deploys a novel twist on a basic primitive from Compressed $\Sigma$-Protocol Theory (CRYPTO 2020) that we then utilize to compress a carefully chosen adaptation of the CRYPTO 1994 approach down to logarithmic size. Interestingly, even for $k=1$ and general $n$ our approach improves prior work as it reduces communication up to almost a factor $1/2$.
We also generalize this to proofs of partial knowledge about compact commitments of long vectors. Optionally, the prover may at the same time demonstrate his secret to satisfy some arbitrary given constraint. Finally, we also generalize from threshold to arbitrary access structures.
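To illustrate the ``one-out-of-two'' case, here is a toy Schnorr-style OR-proof in the spirit of the CRYPTO 1994 construction: the prover simulates a transcript for the statement it cannot prove, answers honestly for the other, and splits the challenge between the two branches so the verifier cannot tell which witness was used. The tiny group and fixed randomness are purely illustrative:

```python
import hashlib
import random

P, Q, G = 23, 11, 2  # toy group: G generates the order-11 subgroup of Z_23^* (insecure!)

def challenge(a1, a2):
    """Fiat-Shamir challenge derived from both commitments."""
    return int(hashlib.sha256(f"{a1},{a2}".encode()).hexdigest(), 16) % Q

def or_prove(x, h1, h2):
    """Prove knowledge of x with h1 = G^x OR h2 = G^x (here: the h1 branch).
    Fixed seed keeps the sketch deterministic; real provers need fresh randomness."""
    rnd = random.Random(42)
    c2, z2 = rnd.randrange(Q), rnd.randrange(Q)        # simulate the h2 branch
    a2 = pow(G, z2, P) * pow(h2, (Q - c2) % Q, P) % P  # a2 = G^z2 * h2^(-c2)
    r = rnd.randrange(Q)                               # honest run for h1
    a1 = pow(G, r, P)
    c1 = (challenge(a1, a2) - c2) % Q                  # split the challenge
    z1 = (r + c1 * x) % Q
    return a1, a2, c1, c2, z1, z2

def or_verify(h1, h2, proof):
    a1, a2, c1, c2, z1, z2 = proof
    return (pow(G, z1, P) == a1 * pow(h1, c1, P) % P
            and pow(G, z2, P) == a2 * pow(h2, c2, P) % P
            and (c1 + c2) % Q == challenge(a1, a2))
```

The communication here is linear in the number of statements; the paper's contribution is compressing exactly this kind of proof, for general $k$-out-of-$n$, down to logarithmic size.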
Joël Alwen, Sandro Coretti, Daniel Jost, Marta Mularczyk
A continuous group key agreement (CGKA) protocol allows a long-lived group of parties to agree on a continuous stream of fresh secret key material. The protocol must support constantly changing group membership, make no assumptions about when, if, or for how long members come online, and must not rely on any trusted group managers. Due to sessions' long lifetime, CGKA protocols must simultaneously ensure both post-compromise security and forward secrecy (PCFS). That is, current key material should remain secure despite both past and future compromises.
The work of Alwen et al. (CRYPTO'20) introduced the CGKA primitive and identified it as a crucial component for constructing end-to-end secure group messaging protocols (SGM) (though we believe there are certainly more applications given the fundamental nature of key agreement). The authors analyzed the TreeKEM CGKA, which lies at the heart of the SGM protocol under development by the IETF working group on Messaging Layer Security (MLS).
In this work, we continue the study of CGKA as a stand-alone cryptographic primitive. We present $3$ new security notions with increasingly powerful adversaries. Even the weakest of the 3 (passive security) already permits attacks to which all prior constructions (including all variants of TreeKEM) are vulnerable.
Going further, the 2 stronger (active security) notions additionally allow the adversary to use parties' exposed states (and full network control) to mount attacks. These are closely related to so-called insider attacks, which involve malicious group members actively deviating from the protocol. Insider attacks present a significant challenge in the study of CGKA (and SGM). Indeed, we believe ours to be the first security notions (and constructions) to formulate meaningful guarantees (e.g. PCFS) against such powerful adversaries. They are also the first composable security notions for CGKA of any type at all.
In terms of constructions, for each of the 3 security notions we provide a new CGKA scheme enjoying sub-linear (potentially even logarithmic) communication complexity in the number of group members. We prove each scheme optimally secure, in the sense that the only security violations possible are those necessarily implied by correctness.
Nils Albartus, Max Hoffmann, Sebastian Temme, Leonid Azriel, Christof Paar
Reverse engineering of integrated circuits, i.e., understanding the internals of ICs, is required for many benign and malicious applications.
Examples of the former are detection of patent infringements, hardware Trojans or IP theft, as well as interface recovery and defect analysis, while malicious applications include IP theft and finding insertion points for hardware Trojans.
However, regardless of the application, the reverse engineer initially starts with a large unstructured netlist, forming an incomprehensible sea of gates.
This work presents DANA, a generic, technology-agnostic, and fully automated dataflow analysis methodology for flattened gate-level netlists. By analyzing the flow of data between individual FFs, DANA recovers high-level registers. The key idea behind DANA is to combine independent metrics based on structural and control information with a powerful automated architecture. Notably, DANA works without any thresholds, scenario-dependent parameters, or other "magic" values that the user must choose. We evaluate DANA on nine modern hardware designs, ranging from cryptographic co-processors through CPUs to the OpenTitan, a state-of-the-art SoC maintained by the lowRISC initiative with supporting industry partners such as Google and Western Digital. Our results demonstrate almost perfect recovery of registers for all case studies, regardless of whether they were synthesized as FPGA or ASIC netlists. Furthermore, we explore two applications for dataflow analysis: we show that the raw output of DANA often already allows one to identify crucial components and high-level architecture features, and we demonstrate its applicability for detecting simple hardware Trojans.
Hence, DANA can be applied universally as the first step when investigating unknown netlists and provides major guidance for human analysts by structuring and condensing the otherwise incomprehensible sea of gates. The implementation of DANA, synthesized netlists, and detailed results will be released as open source.
Max Hoffmann, Christof Paar
Hardware obfuscation is widely used in practice to counteract reverse engineering.
In recent years, low-level obfuscation via camouflaged gates has been increasingly discussed in the scientific community and industry.
In contrast to classical high-level obfuscation, such gates cause reverse engineering to recover an erroneous netlist.
This technology has so far been regarded as a purely defensive tool.
We show that low-level obfuscation is in fact a double-edged sword that can also enable stealthy malicious functionalities.
In this work, we present Doppelganger, the first generic design-level obfuscation technique that is based on low-level camouflaging. Doppelganger obstructs central control modules of digital designs, e.g., FSMs or bus controllers, resulting in two different design functionalities: an apparent one that is recovered during reverse engineering and the actual one that is executed during operation. Notably, both functionalities are under the designer's control.
In two case studies, we apply Doppelganger to a universal cryptographic coprocessor. First, we show the defensive capabilities by presenting the reverse engineer with a different mode of operation than the one that is actually executed, at an area overhead of less than 0.6%. Then, for the first time, we demonstrate the considerable threat potential of low-level obfuscation. We show how an invisible, remotely exploitable key-leakage Trojan can be injected into the same cryptographic coprocessor through obfuscation alone, at an area overhead of a mere 0.01%.
Hyung Tae Lee, San Ling, Jae Hong Seo, Huaxiong Wang
Recently, Huang et al. proposed a concept of public key encryption with filtered equality test (PKE-FET) that allows a tester who has a warrant for the selected message set to check equality of messages in ciphertexts that belong to that set (Journal of Computer and System Sciences, 2017). They also presented an instantiation of the PKE-FET that was asserted to achieve indistinguishability against adaptive chosen ciphertext attacks (IND-CCA2) in the standard model. In this note, we show that Huang et al.'s instantiation does not achieve IND-CCA2 security by presenting a simple adaptive chosen ciphertext attack.
Tatsuo Mitani, Akira Otsuka
Privacy protection and scalability are significant issues with blockchains. We propose an anonymous probabilistic payment under general functionality to address them. We consider a situation in which several payers pay several payees through a tumbler, a payment channel hub mediating between payers and payees. Unlinkability means that the link between which payer pays which payee via the tumbler is broken. A cryptographic puzzle plays a role in controlling the intermediation and execution of transactions. Masking the puzzle enables the payer and the payee to unlink their payments. The overview of the proposed protocol is similar to TumbleBit (NDSS 2017). We confirm the protocol realizes the ideal functionalities discussed in TumbleBit. The functionality required for our proposal is the hashed time-lock contract that various cryptocurrencies use. This requirement is general, not restricted to any particular cryptocurrency. Our proposal includes a probabilistic payment. In probabilistic payment, one pays an ordinary amount with a certain probability, so that one pays only a small amount in expectation and can run fewer transactions than with deterministic payment. This contributes to scalability. We introduce a novel fractional oblivious transfer for probabilistic payment, called the ring fractional oblivious transfer (RFOT). RFOT is based on ring learning with errors (RLWE) encryption. Our trick is based on the fact that an element of the ring is indistinguishable from a circularly shifted element. We confirm that RFOT holds the properties of fractional hiding and binding presented in the DAM scheme (Eurocrypt 2017).
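The scalability argument behind probabilistic payment can be seen with a quick simulation: paying 100 units with probability 1/100 transfers 1 unit per payment in expectation, yet only about 1% of the payments become actual transactions. A small illustrative sketch (the numbers are hypothetical, not from the paper):

```python
import random

def probabilistic_payments(amount, prob, n, rng):
    """Each of n payments transfers `amount` with probability `prob`, else nothing.
    Returns (total paid, number of actual transactions)."""
    hits = sum(1 for _ in range(n) if rng.random() < prob)
    return amount * hits, hits

# 100,000 payments of 100 units, each executed with probability 1/100:
# roughly 100,000 units change hands overall, via only ~1,000 transactions.
total, txs = probabilistic_payments(amount=100, prob=0.01, n=100_000,
                                    rng=random.Random(0))
```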
Karim Eldefrawy, Seoyeon Hwang, Rafail Ostrovsky, Moti Yung
In modern distributed systems, an adversarys limitations when corrupting subsets of servers
may not necessarily be based on threshold constraints, but rather based on other technical or organizational
characteristics in the systems. For example, it can be based on the operating systems they run, the cost
of corrupting insiders in a sub-organization, et cetera. This means that the corruption patterns (and
thus protection guarantees) are not based on the adversary being limited by a threshold, but on the
adversary being limited by other constraints, in particular by what is known as a General Adversary
Structure (GAS). GAS settings may come up in situations like large enterprises, computing and networking
infrastructure of Internet Service Providers, data centers and cloud infrastructure, IT infrastructure of
government agencies, computerized military systems, and critical infrastructure. We consider efficient secure
multiparty computation (MPC) under such dynamically-changing GAS settings. During these changes,
one desires to protect against and during corruption profile change, which renders some (secret sharing-based)
encoding schemes underlying the MPC protocol more efficient than others when operating with the
(currently) considered GAS.
One of our contributions is a set of novel protocols to efficiently and securely convert back and forth between
different MPC schemes for GAS; this process is often called share conversion. Specifically, we consider two
MPC schemes, one based on additive secret sharing and the other based on Monotone Span Programs
(MSP). The ability to efficiently convert between the secret sharing representations of these MPC schemes
enables us to construct the first communication-efficient structure-adaptive proactive MPC protocol for
dynamic GAS settings. By structure-adaptive, we mean that the choice of the MPC protocol to execute
in future rounds after the GAS is changed (as specified by an administrative entity) is chosen to ensure
communication-efficiency (the typical bottleneck in MPC). Furthermore, since such secure collaborative
computing may be long-lived, we consider the mobile adversary setting, often called the proactive security
setting. As our second contribution, we construct communication-efficient MPC protocols that can adapt
to the proactive security setting. Proactive security assumes that in each (well-defined) time period
the adversary corrupts different parties and over time may visit the entire system and corrupt all parties,
provided that in each period it controls groups obeying the GAS constraints. In our protocol, the shares
can be refreshed, meaning that parties receive new shares reconstructing the same secret, and some parties
who lost their shares because of reboots or resets can recover them. As our third contribution, we
consider another aspect of global long-term computations, namely, that of dynamic groups. It is worth
pointing out that such a setting with dynamic groups and GAS was not dealt with in existing literature on
(proactive) MPC. In dynamic group settings, parties can be added and eliminated from the computation,
under different GAS restrictions. We extend our protocols to these additional dynamic group settings defined
by different GAS.
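As background for the sharing and refresh operations the abstract describes (not the paper's MSP-based construction), a minimal sketch of n-out-of-n additive secret sharing with a proactive refresh via a fresh sharing of zero; the field modulus is an illustrative choice:

```python
import secrets

P = 2**127 - 1  # illustrative prime; shares live in the field Z_P

def share(secret: int, n: int) -> list[int]:
    """n-out-of-n additive sharing: the shares sum to the secret mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

def refresh(shares: list[int]) -> list[int]:
    """Proactive refresh: add a fresh sharing of zero, so the new shares are
    independent of the old ones but still encode the same secret."""
    zero_shares = share(0, len(shares))
    return [(s + z) % P for s, z in zip(shares, zero_shares)]

old = share(42, 5)
new = refresh(old)
```

A mobile adversary who saw the old shares in one period learns nothing about the new shares of the next period, which is the core of the proactive refresh described above.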
Latif AKÇAY, Berna ÖRS
Cryptography is one of the basic phenomena of security systems. However, some of the widely used public-key cryptography algorithms can be broken with quantum computers, so many post-quantum cryptography algorithms have been proposed in recent years to handle this issue. NTRU is one of the most important of these quantum-safe algorithms. Besides the cryptographic algorithms themselves, the architecture on which they are implemented is also essential. In this study, we developed an NTRU public-key cryptosystem application and designed several processors to compare them in many aspects. We address two different architectures in this work. RISC-V is chosen as the latest incarnation of the classical RISC architecture. As a competitor to it, we preferred the transport triggered architecture (TTA), which offers a high level of customization and scalability. Details of all the different implementations and the test results obtained with them are shared and discussed.
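The core arithmetic operation of NTRU is multiplication in the ring Z_q[x]/(x^N - 1), i.e. cyclic convolution of coefficient vectors; a minimal software sketch of that operation (the hardware designs compared in the paper realize it very differently):

```python
def cyclic_convolution(a: list[int], b: list[int], q: int) -> list[int]:
    """Multiply two polynomials in Z_q[x]/(x^N - 1), the NTRU ring:
    coefficient i*j contributes to position (i + j) mod N."""
    n = len(a)
    assert len(b) == n
    result = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[(i + j) % n] = (result[(i + j) % n] + ai * bj) % q
    return result

# (1 + x) * (1 + x + x^2) in Z_7[x]/(x^3 - 1):
# 1 + 2x + 2x^2 + x^3 reduces to 2 + 2x + 2x^2 since x^3 -> 1.
prod = cyclic_convolution([1, 1, 0], [1, 1, 1], 7)
```

This schoolbook version is O(N^2); it is exactly the kind of inner loop whose mapping onto RISC-V versus TTA function units drives the comparison the abstract describes.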
Siddaramappa V, Ramesh K B
In the digital world, cryptographic algorithms protect sensitive information from intruders during communication. True random number generation is used in cryptographic algorithms to produce the key values for the encryption and decryption processes, and the key is one of the most important parameters in developing unbreakable algorithms. We propose DNA-based true random number generation. DNA (deoxyribonucleic acid) is a chemical molecule present in all living cells; a DNA molecule consists of four nucleotides: A (adenine), T (thymine), G (guanine), and C (cytosine). DNA molecules have uniqueness properties: each person in the world can be distinguished based on their DNA sequences and genes. The proposed algorithm for DNA-based true random number generation passes the NIST SP 800-22 test suite with high Entropy, FFT, Block Frequency, and Linear Complexity scores.
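A natural way to extract bits from a DNA sequence is a fixed two-bit encoding per nucleotide. The mapping below is a hypothetical illustration, not necessarily the paper's construction, and raw sequence bits would still need to pass the NIST SP 800-22 tests before being used as key material:

```python
# Hypothetical 2-bit encoding of the four nucleotides (illustrative choice).
NUCLEOTIDE_BITS = {"A": "00", "T": "01", "G": "10", "C": "11"}

def dna_to_bits(sequence: str) -> str:
    """Map a DNA sequence to a bit string, two bits per nucleotide."""
    return "".join(NUCLEOTIDE_BITS[base] for base in sequence.upper())

bits = dna_to_bits("ATGC")  # -> "00011011"
```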
Antonio Flórez Gutiérrez, Gaëtan Leurent, María Naya-Plasencia, Léo Perrin, André Schrottenloher, Ferdinand Sibleyras
Gimli is a family of cryptographic primitives (both a hash
function and an AEAD scheme) that has been selected for the second
round of the NIST competition for standardizing new lightweight designs.
The candidate Gimli is based on the permutation Gimli, which was
presented at CHES 2017. In this paper, we study the security of both
the permutation and the constructions that are based on it. We exploit
the slow diffusion in Gimli and its internal symmetries to build, for the
first time, a distinguisher on the full permutation with complexity 2^64. We
also provide a practical distinguisher, which we have implemented, on 23 out
of the full 24 rounds of Gimli.
Next, we give (full state) collision and semi-free-start collision attacks on Gimli-Hash, reaching respectively up to 12 and 18 rounds. On the practical side, we compute a collision on 8-round Gimli-Hash. In the quantum setting, these attacks reach 2 more rounds. Finally, we perform the first study of linear trails in the permutation, and we propose differential-linear cryptanalysis that reach up to 17 rounds of Gimli.
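The slow diffusion the attacks exploit stems from Gimli's column-local non-linear layer: the 96-bit SP-box acts on each column independently, and columns mix only through sparse swap layers. A sketch of the SP-box, with rotation amounts and shift offsets reproduced from the public CHES 2017 specification (treat the exact constants as illustrative, not authoritative):

```python
MASK32 = 0xFFFFFFFF

def rotl32(w: int, r: int) -> int:
    """Rotate a 32-bit word left by r positions."""
    return ((w << r) | (w >> (32 - r))) & MASK32

def sp_box(x: int, y: int, z: int) -> tuple[int, int, int]:
    """Gimli's 96-bit column SP-box: two rotations followed by a
    shift-based non-linear mixing of the three 32-bit words."""
    x = rotl32(x, 24)
    y = rotl32(y, 9)
    new_z = (x ^ (z << 1) ^ ((y & z) << 2)) & MASK32
    new_y = (y ^ x ^ ((x | z) << 1)) & MASK32
    new_x = (z ^ y ^ ((x & y) << 3)) & MASK32
    return new_x, new_y, new_z
```

Because this map never looks at another column, symmetric states (e.g. equal columns) stay symmetric through the SP-box layer, which is the kind of internal symmetry the full-permutation distinguisher builds on.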
18 June 2020
Qian Guo, Thomas Johansson, Alexander Nilsson
In the implementation of post-quantum primitives, it is well known that all computations that handle secret information need to be implemented to run in constant time.
Using the Fujisaki-Okamoto transformation or any of its different variants, a CPA-secure primitive can be converted into an IND-CCA secure KEM. In this paper we show that although the transformation does not handle secret information apart from calls to the CPA-secure primitive, it has to be implemented in constant time. Namely, if the ciphertext comparison step in the transformation is leaking side-channel information, we can launch a key-recovery attack.
Several proposed schemes in round 2 of the NIST post-quantum standardization project are susceptible to the proposed attack and we develop and show the details of the attack on one of them, being FrodoKEM. It is implemented on the reference implementation of FrodoKEM, which is claimed to be secure against all timing attacks. Experiments show that the attack code is able to extract the secret key for all security levels using about \(2^{30}\) decapsulation calls.
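The attack surface is the ciphertext-comparison step inside the FO-style decapsulation. A minimal sketch of the difference between an early-exit comparison and a constant-time one (Python for illustration; a real implementation would use a constant-time primitive such as `hmac.compare_digest` or its C equivalent):

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: running time depends on the position of the
    first differing byte, which is exactly the side channel the attack uses."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Accumulate differences with XOR/OR so every byte is always inspected,
    independently of where (or whether) a mismatch occurs."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

The point of the paper is that even though this comparison touches no long-term secret directly, a timing difference in it can be amplified into full key recovery.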
Jan Richter-Brockmann, Tim Güneysu
Side-channel analysis and fault-injection attacks are known as serious threats to cryptographic hardware implementations, and combined protection against both is currently an open line of research. A promising countermeasure, albeit with considerable implementation overhead, appears to be a mix of first-order secure Threshold Implementations and linear Error-Correcting Codes.
In this paper we employ, for the first time, the inherent structure of non-systematic codes as a fault countermeasure, dynamically mutating the applied generator matrices to achieve a higher-order side-channel- and fault-protected design. As a case study, we apply our scheme to the PRESENT block cipher, which does not show any higher-order side-channel leakage after measuring 150 million power traces.
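The mutation idea can be illustrated on a toy binary code (these matrices are illustrative, not the paper's construction): multiplying a generator matrix by an invertible matrix changes the encoding, i.e. which message maps to which codeword, while the set of codewords, and hence the code's error-detection capability, is unchanged.

```python
def gf2_matmul(A: list[list[int]], B: list[list[int]]) -> list[list[int]]:
    """Matrix product over GF(2): entries are 0/1, addition is XOR (sum mod 2)."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] & B[k][j] for k in range(inner)) % 2
             for j in range(cols)] for i in range(rows)]

def encode(message: list[int], G: list[list[int]]) -> list[int]:
    """Encode a k-bit message with a k x n generator matrix G over GF(2)."""
    return gf2_matmul([message], G)[0]

G = [[1, 1, 1, 0],
     [0, 1, 1, 1]]          # toy non-systematic 2x4 generator
S = [[1, 1],
     [0, 1]]                # invertible change of basis over GF(2)
G_mut = gf2_matmul(S, G)    # "mutated" generator for the same code

messages = [[0, 0], [0, 1], [1, 0], [1, 1]]
code = {tuple(encode(m, G)) for m in messages}
code_mut = {tuple(encode(m, G_mut)) for m in messages}
```

Since `code == code_mut`, fault checks against the code remain valid after a mutation, while the changing message-to-codeword mapping is what varies the side-channel profile over time.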