International Association for Cryptologic Research

CryptoDB

Papers from Journal of Cryptology 2020

2020
JOFC
$1/p$-Secure Multiparty Computation without an Honest Majority and the Best of Both Worlds
A protocol for computing a functionality is secure if an adversary in this protocol cannot cause more harm than in an ideal computation, where parties give their inputs to a trusted party that returns the output of the functionality to all parties. In particular, in the ideal model such computation is fair: if the corrupted parties get the output, then the honest parties get the output. Cleve (STOC 1986) proved that, in general, fairness is not possible without an honest majority. To overcome this impossibility, Gordon and Katz (Eurocrypt 2010) suggested a relaxed definition, 1/p-secure computation, which guarantees partial fairness. For two parties, they constructed 1/p-secure protocols for functionalities for which the size of either the domain or the range is polynomial (in the security parameter). Gordon and Katz ask whether their results can be extended to multiparty protocols. We study 1/p-secure protocols in the multiparty setting for general functionalities. Our main result is constructions of 1/p-secure protocols that are resilient against any number of corrupted parties, provided that the number of parties is constant and the size of the range of the functionality is at most polynomial in the security parameter $n$. If fewer than 2/3 of the parties are corrupted, the size of the domain of each party is constant, and the functionality is deterministic, then our protocols are efficient even when the number of parties is $\log \log n$. On the negative side, we show that when the number of parties is super-constant, 1/p-secure protocols are not possible when the size of the domain of each party is polynomial. Thus, our feasibility results for 1/p-secure computation are essentially tight. We further motivate our results by constructing protocols with stronger guarantees: if in the execution of the protocol there is a majority of honest parties, then our protocols provide full security. However, if only a minority of the parties are honest, then our protocols are 1/p-secure. Thus, our protocols provide the best of both worlds, where the 1/p-security is only a fall-back option if there is no honest majority.
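To make the notion concrete, here is an informal rendering of 1/p-security (a simplified sketch; the paper's formal definition differs in details): for every real-world adversary $\mathcal{A}$ there must exist an ideal-world simulator $\mathcal{S}$ such that, for all inputs $\bar{x}$ and every distinguisher $D$,

$$ \Bigl|\Pr\bigl[D\bigl(\mathrm{REAL}_{\pi,\mathcal{A}}(\bar{x},n)\bigr)=1\bigr] - \Pr\bigl[D\bigl(\mathrm{IDEAL}_{f,\mathcal{S}}(\bar{x},n)\bigr)=1\bigr]\Bigr| \;\le\; \frac{1}{p(n)} + \mathrm{negl}(n), $$

i.e., the real execution may deviate from the ideal (and in particular fair) execution with probability at most roughly $1/p$.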
2020
JOFC
A Formal Security Analysis of the Signal Messaging Protocol
The Signal protocol is a cryptographic messaging protocol that provides end-to-end encryption for instant messaging in WhatsApp, Wire, and Facebook Messenger among many others, serving well over 1 billion active users. Signal includes several uncommon security properties (such as “future secrecy” or “post-compromise security”), enabled by a technique called ratcheting in which session keys are updated with every message sent. We conduct a formal security analysis of Signal’s initial extended triple Diffie–Hellman (X3DH) key agreement and Double Ratchet protocols as a multi-stage authenticated key exchange protocol. We extract from the implementation a formal description of the abstract protocol and define a security model which can capture the “ratcheting” key update structure as a multi-stage model where there can be a “tree” of stages, rather than just a sequence. We then prove the security of Signal’s key exchange core in our model, demonstrating several standard security properties. We have found no major flaws in the design and hope that our presentation and results can serve as a foundation for other analyses of this widely adopted protocol.
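For readers unfamiliar with ratcheting, the following minimal sketch illustrates the symmetric-key ratchet idea in isolation; it is not Signal's actual key schedule (the HMAC-SHA-256 calls and the b"msg"/b"chain" labels are illustrative assumptions), but it shows how every message advances the chain key so that compromise of the current state does not expose previously derived message keys.

```python
import hashlib
import hmac

def kdf(key: bytes, label: bytes) -> bytes:
    """Derive a 32-byte value from the current chain key (illustrative HMAC-SHA-256)."""
    return hmac.new(key, label, hashlib.sha256).digest()

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """One symmetric ratchet step: output a per-message key and the next chain key."""
    message_key = kdf(chain_key, b"msg")        # used to encrypt exactly one message
    next_chain_key = kdf(chain_key, b"chain")   # replaces the old chain key
    return message_key, next_chain_key

# Keys evolve with every message sent; old chain keys are deleted after use.
ck = hashlib.sha256(b"toy initial shared secret").digest()
for i in range(3):
    mk, ck = ratchet_step(ck)
    print(f"message {i}: key {mk.hex()[:16]}...")
```

Signal additionally mixes fresh Diffie–Hellman outputs into the chain (the "double" ratchet), which is what yields the post-compromise security property analyzed in the paper.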
2020
JOFC
Adaptively Secure Non-interactive CCA-Secure Threshold Cryptosystems: Generic Framework and Constructions
In threshold cryptography, private keys are divided into n shares, each of which is given to a different server in order to avoid single points of failure. In the case of threshold public-key encryption, at least $t \le n$ servers need to contribute to the decryption process. A threshold primitive is said to be robust if no coalition of t malicious servers can prevent the remaining honest servers from successfully completing private key operations. Non-interactive schemes, considered the most practical ones, allow servers to contribute to decryption without interaction. So far, most non-interactive threshold cryptosystems were only proved secure against static corruptions. In the adaptive corruption scenario (where the adversary can corrupt servers at any time, based on its complete view), all existing robust threshold encryption schemes that also resist chosen-ciphertext attacks required, until recently, interaction in the decryption phase. A very specific method (in composite order groups) for getting rid of interaction was recently suggested, leaving open the question of more generic frameworks and constructions with better security and, in particular, better flexibility (i.e., compatibility with distributed key generation). This paper advances the state of the art and describes a general construction of adaptively secure robust non-interactive threshold cryptosystems with chosen-ciphertext security. We define the novel notion of all-but-one perfectly sound threshold hash proof systems, which can be seen as (threshold) hash proof systems with publicly verifiable and simulation-sound proofs. We show that this notion generically implies threshold cryptosystems combining the aforementioned properties. Then, we provide efficient instantiations under well-studied assumptions in bilinear groups (e.g., in such groups of prime order). These instantiations have a tighter security proof in the single-challenge setting and are indeed compatible with distributed key generation protocols.
2020
JOFC
Block Cipher Invariants as Eigenvectors of Correlation Matrices
A new approach to invariant subspaces and nonlinear invariants is developed. This results in both theoretical insights and practical attacks on block ciphers. It is shown that, with minor modifications to some of the round constants, Midori-64 has a nonlinear invariant with $2^{96} + 2^{64}$ corresponding weak keys. Furthermore, this invariant corresponds to a linear hull with maximal correlation. By combining the new invariant with integral cryptanalysis, a practical key-recovery attack on ten rounds of unmodified Midori-64 is obtained. The attack works for $2^{96}$ weak keys and irrespective of the choice of round constants. The data complexity is $1.25 \cdot 2^{21}$ chosen plaintexts, and the computational cost is dominated by $2^{56}$ block cipher calls. The validity of the attack is verified by means of experiments.
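At a high level, the correspondence in the title can be stated as follows (a rough paraphrase; the precise statement and the orientation convention for the correlation matrix are fixed in the paper): a Boolean function $g$ with $g \circ F = g$ (or $g \circ F = g \oplus 1$) corresponds to the Walsh–Hadamard transform of $(-1)^g$ being an eigenvector of the correlation matrix $C^F$ of the permutation $F$ with eigenvalue $+1$ (respectively $-1$):

$$ C^F \, \widehat{(-1)^g} \;=\; \pm\, \widehat{(-1)^g}. $$

This is what allows invariant subspaces and nonlinear invariants to be searched for, and related to linear hulls, with linear-algebra tools.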
2020
JOFC
Compact Adaptively Secure ABE for $\textsf{NC}^1$ from k-Lin
We present compact attribute-based encryption (ABE) schemes for $\textsf{NC}^1$ that are adaptively secure under the k-Lin assumption with polynomial security loss. Our KP-ABE scheme achieves ciphertext size that is linear in the attribute length and independent of the policy size even in the many-use setting, and we achieve an analogous efficiency guarantee for CP-ABE. This resolves the central open problem posed by Lewko and Waters (CRYPTO 2011). Previous adaptively secure constructions either impose an attribute "one-use restriction" (or the ciphertext size grows with the policy size) or require q-type assumptions.
2020
JOFC
Continuously Non-malleable Codes in the Split-State Model
Non-malleable codes (Dziembowski et al., ICS '10 and J. ACM '18) are a natural relaxation of error correcting/detecting codes with useful applications in cryptography. Informally, a code is non-malleable if an adversary trying to tamper with an encoding of a message can only leave it unchanged or modify it to the encoding of an unrelated value. This paper introduces continuous non-malleability, a generalization of standard non-malleability where the adversary is allowed to tamper continuously with the same encoding. This is in contrast to the standard definition of non-malleable codes, where the adversary can tamper only a single time. The only restriction is that after the first invalid codeword is generated, a special self-destruct mechanism is triggered and no further tampering is allowed; this restriction can easily be shown to be necessary. We focus on the split-state model, where an encoding consists of two parts and the tampering functions can be arbitrary as long as they act independently on each part. Our main contributions are outlined below. We show that continuous non-malleability in the split-state model is impossible without relying on computational assumptions. We construct a computationally secure split-state code satisfying continuous non-malleability in the common reference string (CRS) model. Our scheme can be instantiated assuming the existence of collision-resistant hash functions and (doubly enhanced) trapdoor permutations, but we also give concrete instantiations based on standard number-theoretic assumptions. We revisit the application of non-malleable codes to protecting arbitrary cryptographic primitives against related-key attacks. Previous applications of non-malleable codes in this setting required perfect erasures and the adversary to be restricted in memory. We show that continuously non-malleable codes allow these restrictions to be avoided.
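In the split-state notation used above, an encoding consists of two halves and each tampering query is a pair of functions applied independently to them:

$$ \mathsf{Enc}(m) = (L, R), \qquad (f, g) \;\longmapsto\; \tilde{c} = \bigl(f(L),\, g(R)\bigr); $$

in the continuous setting the adversary may adaptively submit many such pairs $(f_i, g_i)$ against the same target encoding $(L, R)$, until the first query that decodes to $\bot$ triggers the self-destruct mechanism.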
2020
JOFC
Cryptanalysis of OCB2: Attacks on Authenticity and Confidentiality
We present practical attacks on OCB2. This mode of operation of a blockcipher was designed with the aim to provide particularly efficient and provably secure authenticated encryption services, and since its proposal about 15 years ago it belongs to the top performers in this realm. OCB2 was included in an ISO standard in 2009. An internal building block of OCB2 is the tweakable blockcipher obtained by operating a regular blockcipher in $\text{XEX}^*$ mode. The latter provides security only when evaluated in accordance with certain technical restrictions that, as we note, are not always respected by OCB2. This leads to devastating attacks against OCB2's security promises: We develop a range of very practical attacks that, amongst others, demonstrate universal forgeries and full plaintext recovery. We complete our report with proposals for (provably) repairing OCB2. As a direct consequence of our findings, OCB2 is currently in a process of removal from ISO standards. Our attacks do not apply to OCB1 and OCB3, and our privacy attacks on OCB2 require an active adversary.
2020
JOFC
CryptHOL: Game-Based Proofs in Higher-Order Logic
Game-based proofs are a well-established paradigm for structuring security arguments and simplifying their understanding. We present a novel framework, CryptHOL, for rigorous game-based proofs that is supported by mechanical theorem proving. CryptHOL is based on a new semantic domain with an associated functional programming language for expressing games. We embed our framework in the Isabelle/HOL theorem prover and, using the theory of relational parametricity, we tailor Isabelle’s existing proof automation to game-based proofs. By basing our framework on a conservative extension of higher-order logic and providing automation support, the resulting proofs are trustworthy and comprehensible, and the framework is extensible and widely applicable. We evaluate our framework by formalising different game-based proofs from the literature and comparing the results with existing formal-methods tools.
2020
JOFC
Efficient and Scalable Universal Circuits
A universal circuit (UC) can be programmed to simulate any circuit up to a given size n by specifying its program inputs. It provides elegant solutions in various application scenarios, e.g., for private function evaluation (PFE) and for improving the flexibility of attribute-based encryption schemes. The asymptotic lower bound for the size of a UC is $\Omega(n\log n)$, and Valiant (STOC'76) provided two theoretical constructions, the so-called 2-way and 4-way UCs (i.e., recursive constructions with 2 and 4 substructures), with asymptotic sizes ${\sim}\,5n\log_2 n$ and ${\sim}\,4.75n\log_2 n$, respectively. In this article, we present and extend our results published in (Kiss and Schneider EUROCRYPT'16) and (Günther et al. ASIACRYPT'17). We validate the practicality of Valiant's UCs by realizing the 2-way and 4-way UCs in our modular open-source implementation. We also provide an example implementation for PFE using these size-optimized UCs. We propose a 2/4-hybrid approach that combines the 2-way and the 4-way UCs in order to minimize the size of the resulting UC. We realize that the bottleneck in universal circuit generation and programming becomes the memory consumption of the program, since the whole structure of size $\mathcal{O}(n\log n)$ is handled by the algorithms in memory. In this work, we overcome this by designing novel scalable algorithms for UC generation and programming. Both algorithms use only $\mathcal{O}(n)$ memory at any point in time. We prove the practicality of our scalable design with a scalable proof-of-concept implementation for generating Valiant's 4-way UC. We note that this can be extended to work with optimized building blocks analogously. Moreover, we substantially improve the size of our UCs by including and implementing the recent optimization of Zhao et al. (ASIACRYPT'19) that reduces the asymptotic size of the 4-way UC to ${\sim}\,4.5n\log_2 n$. Furthermore, we include their optimization in the implementation of our 2/4-hybrid UC, which yields the smallest UC construction known so far.
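To put the quoted leading terms side by side, the short calculation below evaluates them for one example circuit size (the choice n = 10^6 is arbitrary, and lower-order terms hidden by the '~' notation are ignored):

```python
import math

# Leading terms of the UC sizes quoted above (lower-order terms omitted).
constructions = {
    "Valiant 2-way (~5 n log2 n)": 5.0,
    "Valiant 4-way (~4.75 n log2 n)": 4.75,
    "4-way + Zhao et al. (~4.5 n log2 n)": 4.5,
}

n = 10**6  # example size of the simulated circuit (arbitrary illustration)
for name, c in constructions.items():
    print(f"{name}: about {c * n * math.log2(n):.3e} gates")
```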
2020
JOFC
Efficient Verifiable Delay Functions
We construct a verifiable delay function (VDF). A VDF is a function whose evaluation requires running a given number of sequential steps, yet the result can be efficiently verified. VDFs have applications in decentralised systems, such as the generation of trustworthy public randomness in a trustless environment, or resource-efficient blockchains. To construct our VDF, we actually build a trapdoor VDF. A trapdoor VDF is essentially a VDF which can be evaluated efficiently by parties who know a secret (the trapdoor). By setting up this scheme in such a way that the trapdoor is unknown (not even to the party running the setup, so that there is no need for a trusted setup environment), we obtain a simple VDF. Our construction is based on groups of unknown order such as an RSA group or the class group of an imaginary quadratic field. The output of our construction is very short (the result and the proof of correctness are each a single element of the group), and the verification of correctness is very efficient.
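The toy sketch below (tiny parameters, no proof of correct evaluation, and not the paper's concrete scheme) illustrates the underlying idea in an RSA group: anyone can evaluate by performing T sequential squarings, while a party holding the group order, the natural trapdoor in the RSA setting, can shortcut the computation by reducing the exponent $2^T$ modulo that order.

```python
# Toy illustration of sequential squaring in a group of unknown order.
# A real instantiation uses a large RSA modulus or a class group, and adds a
# short, efficiently checkable proof of correct evaluation (omitted here).

p, q = 1009, 1013            # toy primes; knowing them is the trapdoor
N = p * q
phi = (p - 1) * (q - 1)      # group order, unknown to ordinary evaluators

T = 10_000                   # delay parameter: number of sequential squarings
x = 5                        # input group element (coprime to N)

# Slow path (anyone): T sequential squarings, inherently hard to parallelize.
y = x % N
for _ in range(T):
    y = (y * y) % N

# Fast path (trapdoor holder): reduce the exponent modulo the group order.
assert y == pow(x, pow(2, T, phi), N)
print("x^(2^T) mod N =", y)
```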
2020
JOFC
Fast Multi-precision Multiplication for Public-Key Cryptography on Embedded Microprocessors
Multi-precision multiplication is one of the most fundamental operations on microprocessors to allow public-key cryptography such as RSA and elliptic curve cryptography. In this paper, we present a novel multiplication technique that increases the performance of multiplication by sophisticated caching of operands. Our method significantly reduces the number of needed load instructions, which is usually one of the most expensive operations on modern processors. We evaluate our new technique on an 8-bit ATmega128 and a 32-bit ARM7TDMI microcontroller and compare the results with existing solutions. For the ATmega128, our implementation needs only 2395 clock cycles for a 160-bit multiplication. The number of required load instructions is reduced from 167 (needed for the best known hybrid multiplication) to only 80. On the ARM7TDMI, our implementation needs only 281 clock cycles as opposed to 357. For both platforms, the proposed technique outperforms related work by about 10–23%. We also show that the method scales very well even for larger integer sizes (required for RSA) and limited register sets. It fully complies with existing multiply–accumulate instructions that are integrated in most of the available processors.
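As background, the sketch below shows a plain product-scanning (Comba) multi-precision multiplication over 8-bit limbs; it is not the paper's operand-caching method, but it makes visible where the cost goes: every access to a[i] and b[j] in the inner loop corresponds to a load instruction on a microcontroller, and reducing those loads is exactly what the proposed caching strategy achieves.

```python
def to_limbs(x: int, n_limbs: int, bits: int = 8) -> list[int]:
    """Split an integer into little-endian limbs of `bits` bits each."""
    mask = (1 << bits) - 1
    return [(x >> (bits * i)) & mask for i in range(n_limbs)]

def mp_mul(a: list[int], b: list[int], bits: int = 8) -> list[int]:
    """Product-scanning (Comba) multi-precision multiplication."""
    mask = (1 << bits) - 1
    result = [0] * (len(a) + len(b))
    acc = 0
    for k in range(len(a) + len(b) - 1):      # one output column at a time
        for i in range(len(a)):
            j = k - i
            if 0 <= j < len(b):
                acc += a[i] * b[j]            # each a[i], b[j] access = a load
        result[k] = acc & mask
        acc >>= bits
    result[-1] = acc & mask
    return result

# 160-bit example (20 limbs of 8 bits), matching the ATmega128 benchmark size.
x, y = (1 << 159) + 12345, (1 << 158) + 67890
limbs = mp_mul(to_limbs(x, 20), to_limbs(y, 20))
assert sum(l << (8 * i) for i, l in enumerate(limbs)) == x * y
```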
2020
JOFC
Foundations of Fully Dynamic Group Signatures
Group signatures allow members of a group to anonymously sign on behalf of the group. Membership is administered by a designated group manager. The group manager can also reveal the identity of a signer if and when needed to enforce accountability and deter abuse. For group signatures to be applicable in practice, they need to support fully dynamic groups, i.e., users may join and leave at any time. Existing security definitions for fully dynamic group signatures are informal, have shortcomings, and are mutually incompatible. We fill the gap by providing a formal, rigorous security model for fully dynamic group signatures. Our model is general and not tailored toward a specific design paradigm, and can therefore, as we show, be used to argue about the security of different existing constructions following different design paradigms. Our definitions are stringent and, where possible, incorporate protection against maliciously chosen keys. We consider both the case where the group management and the tracing of signatures are administered by the same authority, i.e., a single group manager, and the case where those roles are administered by two separate authorities, i.e., a group manager and an opening authority. We also show that a specialization of our model captures existing models for static and partially dynamic schemes. In the process, we identify a subtle gap in the security achieved by group signatures using revocation lists. We show that in such schemes new members achieve a slightly weaker notion of traceability. The flexibility of our security model allows us to capture this relaxation of traceability.
2020
JOFC
Identification Protocols and Signature Schemes Based on Supersingular Isogeny Problems
We present signature schemes whose security relies on computational assumptions relating to isogeny graphs of supersingular elliptic curves. We give two schemes, both of them based on interactive identification protocols. The first identification protocol is due to De Feo, Jao and Plût. The second one, and the main contribution of the paper, makes novel use of an algorithm of Kohel, Lauter, Petit and Tignol for the quaternion version of the $\ell$-isogeny problem, for which we provide a more complete description and analysis, and is based on a more standard and potentially stronger computational problem. Both identification protocols lead to signatures that are existentially unforgeable under chosen message attacks in the random oracle model using the well-known Fiat-Shamir transform, and in the quantum random oracle model using another transform due to Unruh. A version of the first signature scheme was independently published by Yoo, Azarderakhsh, Jalali, Jao and Soukharev. This is the full version of a paper published at ASIACRYPT 2017.
2020
JOFC
Joint State Composition Theorems for Public-Key Encryption and Digital Signature Functionalities with Local Computation
In frameworks for universal composability, complex protocols can be built from sub-protocols in a modular way using composition theorems. However, as first pointed out and studied by Canetti and Rabin, this modular approach often leads to impractical implementations. For example, when using a functionality for digital signatures within a more complex protocol, parties have to generate new verification and signing keys for every session of the protocol. This motivates generalizing composition theorems to so-called joint state (composition) theorems, where different copies of a functionality may share some state, e.g., the same verification and signing keys. In this paper, we present a joint state theorem which is more general than the original theorem of Canetti and Rabin, for which several problems and limitations are pointed out. We apply our theorem to obtain joint state realizations for three functionalities: public-key encryption, replayable public-key encryption, and digital signatures. Unlike most other formulations, our functionalities model that ciphertexts and signatures are computed locally, rather than being provided by the adversary. To obtain the joint state realizations, the functionalities have to be designed carefully. Other formulations proposed in the literature are shown to be unsuitable. Our work is based on the IITM model. Our definitions and results demonstrate the expressivity and simplicity of this model. For example, unlike Canetti's UC model, in the IITM model no explicit joint state operator needs to be defined and the joint state theorem follows immediately from the composition theorem in the IITM model.
2020
JOFC
Kummer for Genus One Over Prime-Order Fields
This work considers the problem of fast and secure scalar multiplication using curves of genus one defined over a field of prime order. Previous work by Gaudry and Lubicz (Finite Fields Appl 15(2):246–260, 2009) had suggested the use of the associated Kummer line to speed up scalar multiplication. In the present work, we explore this idea in detail. The first task is to obtain an elliptic curve in Legendre form which satisfies necessary security conditions such that the associated Kummer line has small parameters and a base point with small coordinates. It turns out that the ladder step on the Kummer line supports parallelism and can be implemented very efficiently in constant time using the single-instruction multiple-data (SIMD) operations available in modern processors. For the 128-bit security level, this work presents three Kummer lines denoted as $K_1:=\mathsf{KL2519(81,20)}$, $K_2:=\mathsf{KL25519(82,77)}$ and $K_3:=\mathsf{KL2663(260,139)}$ over the three primes $2^{251}-9$, $2^{255}-19$ and $2^{266}-3$, respectively. Implementations of scalar multiplications for all three Kummer lines using Intel intrinsics have been done, and the code is publicly available. Timing results on the Skylake and the Haswell processors of Intel indicate that both fixed base and variable base scalar multiplications for $K_1$ and $K_2$ are faster than those achieved by Sandy2x, which is a highly optimised SIMD implementation in assembly of the well-known Curve25519. On Skylake, both fixed base and variable base scalar multiplications for $K_3$ are faster than Sandy2x, whereas on Haswell, fixed base scalar multiplication for $K_3$ is faster than Sandy2x while variable base scalar multiplication for both $K_3$ and Sandy2x takes roughly the same time. In practical terms, the particular Kummer lines that are introduced in this work are serious candidates for deployment and standardisation. We further illustrate the usefulness of the proposed Kummer lines by instantiating the quotient Digital Signature Algorithm on all three Kummer lines.
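As a purely structural illustration (modular exponentiation is used here instead of Kummer-line point arithmetic, so this is not the paper's ladder), the sketch below shows the uniform Montgomery-ladder pattern referred to above: every scalar bit triggers the same pair of operations, which is what makes a ladder step easy to run in constant time and to parallelize with SIMD instructions.

```python
def montgomery_ladder_pow(g: int, k: int, mod: int, bits: int) -> int:
    """Montgomery ladder computing g^k mod `mod`.

    Each iteration performs one 'add' (multiplication) and one 'double'
    (squaring) regardless of the key bit, mirroring the uniform ladder step
    used for scalar multiplication on a Kummer line."""
    r0, r1 = 1, g
    for i in reversed(range(bits)):
        if (k >> i) & 1:
            r0, r1 = r0 * r1 % mod, r1 * r1 % mod
        else:
            r1, r0 = r0 * r1 % mod, r0 * r0 % mod
    return r0

p = 2**255 - 19     # the prime underlying Curve25519 and KL25519, used here only as a modulus
k = 0x1234567890ABCDEF
assert montgomery_ladder_pow(3, k, p, k.bit_length()) == pow(3, k, p)
```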
2020
JOFC
Learning Strikes Again: The Case of the DRS Signature Scheme
Lattice signature schemes generally require particular care when it comes to preventing secret information from leaking through signature transcripts. For example, the Goldreich–Goldwasser–Halevi (GGH) signature scheme and the NTRUSign scheme were completely broken by the parallelepiped-learning attack of Nguyen and Regev (Eurocrypt 2006). Several heuristic countermeasures were also shown vulnerable to similar statistical attacks. At PKC 2008, Plantard, Susilo and Win proposed a new variant of GGH, informally arguing resistance to such attacks. Based on this variant, Plantard, Sipasseuth, Dumondelle and Susilo proposed a concrete signature scheme, called DRS, that was submitted to round 1 of the NIST post-quantum cryptography project. In this work, we propose yet another statistical attack and demonstrate a weakness of the DRS scheme: one can recover some partial information of the secret key from sufficiently many signatures. One difficulty is that, due to the DRS reduction algorithm, the relation between the statistical leak and the secret seems more intricate. We work around this difficulty by training a statistical model, using a few features that we designed according to a simple heuristic analysis. While we only recover partial secret coefficients, this information is easily exploited by lattice attacks, significantly decreasing their complexity. Concretely, we claim that, provided that $100\,000$ signatures are available, the secret key may be recovered using BKZ-138 for the first set of DRS parameters submitted to the NIST. This puts the security level of this parameter set below 80 bits (maybe even 70 bits), to be compared to an original claim of 128 bits. Furthermore, we review the DRS v2 scheme that is proposed to resist the above statistical attack. For this countermeasure, while one may not recover partial secret coefficients exactly by learning, it seems feasible to gain some information on the secret key. Exploiting this information, we can still effectively reduce the cost of lattice attacks.
2020
JOFC
Locally Decodable and Updatable Non-malleable Codes and Their Applications
Non-malleable codes, introduced as a relaxation of error-correcting codes by Dziembowski, Pietrzak, and Wichs (ICS '10), provide the security guarantee that the message contained in a tampered codeword is either the same as the original message or is set to an unrelated value. Various applications of non-malleable codes have been discovered, and one of the most significant applications among these is the connection with tamper-resilient cryptography. There is a large body of work considering security against various classes of tampering functions, as well as non-malleable codes with enhanced features such as leakage resilience. In this work, we propose combining the concepts of non-malleability, leakage resilience, and locality in a coding scheme. The contribution of this work is threefold: 1. As a conceptual contribution, we define a new notion of locally decodable and updatable non-malleable code that combines the above properties. 2. We present two simple and efficient constructions achieving our new notion with different levels of security. 3. We present an important application of our new tool: securing RAM computation against memory tampering and leakage attacks. This is analogous to the usage of traditional non-malleable codes to secure implementations in the circuit model against memory tampering and leakage attacks.
2020
JOFC
Low Cost Constant Round MPC Combining BMR and Oblivious Transfer
In this work, we present two new actively secure, constant-round multi-party computation (MPC) protocols with security against all-but-one corruptions. Our protocols both start with an actively secure MPC protocol, which may have linear round complexity in the depth of the circuit, and compile it into a constant-round protocol based on garbled circuits, with very low overhead. 1. Our first protocol takes a generic approach using any secret-sharing-based MPC protocol for binary circuits, and a correlated oblivious transfer functionality. 2. Our second protocol builds on secret-sharing-based MPC with information-theoretic MACs. This approach is less flexible, being based on a specific form of MPC, but requires no additional oblivious transfers to compute the garbled circuit. In both approaches, the underlying secret-sharing-based protocol is only used for one actively secure $\mathbb{F}_2$ multiplication per AND gate. An interesting consequence of this is that, with current techniques, constant-round MPC for binary circuits is not much more expensive than practical, non-constant-round protocols. We demonstrate the practicality of our second protocol with an implementation and perform experiments with up to 9 parties securely computing the AES and SHA-256 circuits. Our running times improve upon the best possible performance with previous protocols in this setting by a factor of 60.
2020
JOFC
Meet-in-the-Middle Attacks and Structural Analysis of Round-Reduced PRINCE
NXP Semiconductors and its academic partners challenged the cryptographic community with finding practical attacks on the block cipher they designed, PRINCE. Instead of trying to attack as many rounds as possible using attacks which are usually impractical despite being faster than brute force, the challenge invites cryptographers to find practical attacks and encourages them to actually implement them. In this paper, we present new attacks on round-reduced PRINCE, including the ones which won the challenge in the 4-, 6- and 8-round categories, the highest for which winners were identified. Our first attacks rely on a meet-in-the-middle approach and break up to ten rounds of the cipher. We also describe heuristic methods we used to find practical SAT-based and differential attacks. Finally, we present an analysis of the cycle structure of the internal rounds of PRINCE, leading both to a low-complexity distinguisher for 4-round PRINCE-core and to an alternative representation of the cipher that is valid in particular contexts and which highlights, in these cases, poor diffusion.
2020
JOFC
Multilinear Maps from Obfuscation
We provide constructions of multilinear groups equipped with natural hard problems from indistinguishability obfuscation, homomorphic encryption, and NIZKs. This complements known results on the constructions of indistinguishability obfuscators from multilinear maps in the reverse direction. We provide two distinct, but closely related constructions and show that multilinear analogues of the $\text{DDH}$ assumption hold for them. Our first construction is symmetric and comes with a $\kappa$-linear map $\mathbf{e}: \mathbb{G}^\kappa \longrightarrow \mathbb{G}_T$ for prime-order groups $\mathbb{G}$ and $\mathbb{G}_T$. To establish the hardness of the $\kappa$-linear $\text{DDH}$ problem, we rely on the existence of a base group for which the $\kappa$-strong $\text{DDH}$ assumption holds. Our second construction is for the asymmetric setting, where $\mathbf{e}: \mathbb{G}_1 \times \cdots \times \mathbb{G}_{\kappa} \longrightarrow \mathbb{G}_T$ for a collection of $\kappa+1$ prime-order groups $\mathbb{G}_i$ and $\mathbb{G}_T$, and relies only on the 1-strong $\text{DDH}$ assumption in its base group. In both constructions, the linearity $\kappa$ can be set to any arbitrary but a priori fixed polynomial value in the security parameter. We rely on a number of powerful tools in our constructions: probabilistic indistinguishability obfuscation, dual-mode NIZK proof systems (with perfect soundness, witness-indistinguishability, and zero knowledge), and additively homomorphic encryption for the group $\mathbb{Z}_N^{+}$. At a high level, we enable "bootstrapping" multilinear assumptions from their simpler counterparts in standard cryptographic groups and show the equivalence of PIO and multilinear maps under the existence of the aforementioned primitives.
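For context, a symmetric $\kappa$-linear map on a prime-order group $\mathbb{G}$ with generator $g$ satisfies (standard notation, matching the abstract)

$$ \mathbf{e}\bigl(g^{a_1}, g^{a_2}, \ldots, g^{a_\kappa}\bigr) \;=\; \mathbf{e}(g, \ldots, g)^{a_1 a_2 \cdots a_\kappa}, $$

and the multilinear ($\kappa$-linear) DDH problem is, given $g^{a_1}, \ldots, g^{a_{\kappa+1}}$, to distinguish $\mathbf{e}(g,\ldots,g)^{a_1 \cdots a_{\kappa+1}}$ from a uniformly random element of $\mathbb{G}_T$.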
2020
JOFC
Non-malleable Encryption: Simpler, Shorter, Stronger
One approach toward basing public-key encryption (PKE) schemes on weak and credible assumptions is to build "stronger" or more general schemes generically from "weaker" or more restricted ones. One particular line of work in this context was initiated by Myers and Shelat (FOCS '09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt '12), who provide constructions of multi-bit CCA-secure PKE from single-bit CCA-secure PKE. It is well known that encrypting each bit of a plaintext string independently is not CCA-secure: the resulting scheme is malleable. We therefore investigate whether this malleability can be dealt with using the conceptually simple approach of applying a suitable non-malleable code (Dziembowski et al., ICS '10) to the plaintext and subsequently encrypting the resulting codeword bit by bit. We find that an attacker's ability to ask multiple decryption queries requires that the underlying code be continuously non-malleable (Faust et al., TCC '14). Since, as we show, this flavor of non-malleability can only be achieved if the code is allowed to "self-destruct," the resulting scheme inherits this property and therefore only achieves a weaker variant of CCA security. We formalize this new notion of so-called indistinguishability under self-destruct attacks (IND-SDA) as CCA security with the restriction that the decryption oracle stops working once the attacker submits an invalid ciphertext. We first show that the above approach based on non-malleable codes yields a solution to the problem of domain extension for IND-SDA-secure PKE, provided that the underlying code is continuously non-malleable against (a reduced form of) bit-wise tampering. Then, we prove that the code of Dziembowski et al. is actually already continuously non-malleable against bit-wise tampering. We further investigate the notion of security under self-destruct attacks and combine IND-SDA security with non-malleability under chosen-plaintext attacks (NM-CPA) to obtain the strictly stronger notion of non-malleability under self-destruct attacks (NM-SDA). We show that NM-SDA security can be obtained from basic IND-CPA security by means of a black-box construction based on the seminal work by Choi et al. (TCC '08). Finally, we provide a domain extension technique for building a multi-bit NM-SDA scheme from a single-bit NM-SDA scheme. To achieve this goal, we define and construct a novel type of continuous non-malleable code, called secret-state NMC, since, as we show, standard continuous NMCs are insufficient for the natural "encode-then-encrypt-bit-by-bit" approach to work.
2020
JOFC
Oblivious Sampling with Applications to Two-Party k-Means Clustering
The k-means clustering problem is one of the most explored problems in data mining. With the advent of protocols that have proven to be successful in performing single database clustering, the focus has shifted in recent years to the question of how to extend the single database protocols to a multiple database setting. To date, there have been numerous attempts to create specific multiparty k-means clustering protocols that protect the privacy of each database, but according to the standard cryptographic definitions of "privacy-protection", so far all such attempts have fallen short of providing adequate privacy. In this paper, we describe a Two-Party k-Means Clustering Protocol that guarantees privacy against an honest-but-curious adversary, and is more efficient than utilizing a general multiparty "compiler" to achieve the same task. In particular, a main contribution of our result is a way to compute efficiently multiple iterations of k-means clustering without revealing the intermediate values. To achieve this, we describe a technique for performing two-party division securely and also introduce a novel technique allowing two parties to securely sample uniformly at random from an unknown domain size. The resulting Division Protocol and Random Value Protocol are of use to any protocol that requires the secure computation of a quotient or random sampling. Our techniques can be realized based on the existence of any semantically secure homomorphic encryption scheme. For concreteness, we describe our protocol based on the Paillier homomorphic encryption scheme (Paillier, EUROCRYPT '99, LNCS 1592, pp 223–238, 1999). We will also demonstrate that our protocol is efficient in terms of communication, remaining competitive with existing protocols (such as Jagannathan and Wright, KDD '05, pp 593–599, 2005) that fail to protect privacy.
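For orientation, the computation that the two parties must perform jointly and privately is the standard Lloyd iteration for k-means; the in-the-clear version is sketched below (the paper's contribution is doing the centroid division and the random re-seeding obliviously, which this sketch does not attempt).

```python
import random

def kmeans_iteration(points, centroids):
    """One in-the-clear Lloyd iteration: assign points, then recompute centroids.

    In the two-party protocol, the centroid update (a division) and any random
    re-seeding must be computed securely; here they are done in the clear."""
    k = len(centroids)
    clusters = [[] for _ in range(k)]
    for p in points:
        distances = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centroids]
        clusters[distances.index(min(distances))].append(p)
    new_centroids = []
    for cluster in clusters:
        if cluster:   # per-coordinate average: the quotient the Division Protocol hides
            dim = len(cluster[0])
            new_centroids.append(tuple(sum(p[d] for p in cluster) / len(cluster) for d in range(dim)))
        else:         # empty cluster: re-seed from a random point
            new_centroids.append(random.choice(points))
    return new_centroids

pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
print(kmeans_iteration(pts, [(0.0, 0.0), (5.0, 5.0)]))
```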
2020
JOFC
On the Power of Secure Two-Party Computation
Ishai, Kushilevitz, Ostrovsky and Sahai (STOC 2007; SIAM J Comput 39(3):1121–1152, 2009) introduced the powerful "MPC-in-the-head" technique that provided a general transformation of information-theoretic MPC protocols secure against passive adversaries to a ZK proof in a "black-box" way. In this work, we extend this technique and provide a generic transformation of any semi-honest secure two-party computation (2PC) protocol (with mild adaptive security guarantees) in the so-called oblivious-transfer hybrid model to an adaptive ZK proof for any $\textsf{NP}$ language, in a "black-box" way assuming only one-way functions. Our basic construction based on Goldreich–Micali–Wigderson's 2PC protocol yields an adaptive ZK proof with communication complexity quadratic in the size of the circuit implementing the $\textsf{NP}$ relation. Previously such proofs relied on an expensive Karp reduction of the $\textsf{NP}$ language to Graph Hamiltonicity (Lindell and Zarosim, TCC 2009; J Cryptol 24(4):761–799, 2011). As an application of our techniques, we show how to obtain a ZK proof with an "input-delayed" property for any $\textsf{NP}$ language without relying on expensive Karp reductions that is black box in the underlying one-way function. Namely, the input-delayed property allows the honest prover's algorithm to receive the actual statement to be proved only in the final round. We further generalize this to obtain a "commit-and-prove" protocol with the same property where the prover commits to a witness w in the second message and proves a statement x regarding the witness w in zero-knowledge where the statement is determined only in the last round. This improves a previous construction of Lapidot and Shamir (Crypto 1990) that was designed specifically for the Graph Hamiltonicity problem and relied on the underlying primitives in a non-black-box way. Additionally, we provide a general transformation to construct a randomized encoding of a function f from any 2PC protocol that securely computes a related functionality (in a black-box way) from one-way functions. We show that if the 2PC protocol has mild adaptive security guarantees (which are satisfied by both Yao's and the GMW protocol), then the resulting randomized encoding can be decomposed to an offline/online encoding.
2020
JOFC
PCPs and the Hardness of Generating Synthetic Data
Assuming the existence of one-way functions, we show that there is no polynomial-time differentially private algorithm $\mathcal{A}$ that takes a database $D\in (\{0,1\}^d)^n$ and outputs a "synthetic database" $\hat{D}$ all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows $x\in \{0,1\}^d$ with a given pair of values in a given pair of columns.) This answers a question of Barak et al. (PODS '07), who gave an algorithm running in time $\mathrm{poly}(n,2^d)$. Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by Dwork et al., STOC '09) with encodings based on the PCP theorem. We also present both negative and positive results for generating "relaxed" synthetic data, where the fraction of rows in D satisfying a predicate c are estimated by applying c to each row of $\hat{D}$ and aggregating the results in some way.
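The statistic in question is easy to compute in the clear; the helper below (illustrative, not from the paper) lists all two-way marginals of a binary database, i.e., for every pair of columns and every pair of values, the fraction of rows matching that value pair.

```python
from itertools import combinations, product

def two_way_marginals(db: list[list[int]]) -> dict:
    """All two-way marginals of a binary database D in ({0,1}^d)^n.

    Returns, for each column pair (i, j) and value pair (a, b), the fraction
    of rows x with x[i] == a and x[j] == b."""
    n, d = len(db), len(db[0])
    marginals = {}
    for i, j in combinations(range(d), 2):
        for a, b in product((0, 1), repeat=2):
            count = sum(1 for row in db if row[i] == a and row[j] == b)
            marginals[(i, j, a, b)] = count / n
    return marginals

D = [[0, 1, 1], [1, 1, 0], [0, 0, 1], [1, 1, 1]]
print(two_way_marginals(D)[(0, 1, 1, 1)])  # fraction of rows with x0 = 1 and x1 = 1
```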
2020
JOFC
Practical Collision Attacks against Round-Reduced SHA-3
The Keccak hash function is the winner of the SHA-3 competition (2008–2012) and became the SHA-3 standard of NIST in 2015. In this paper, we focus on practical collision attacks against round-reduced SHA-3 and some Keccak variants. Following the framework developed by Dinur et al. at FSE 2012, where 4-round collisions were found by combining 3-round differential trails and 1-round connectors, we extend the connectors to up to three rounds and hence achieve collision attacks for up to 6 rounds. The extension is possible thanks to the large degree of freedom of the wide internal state. By linearizing S-boxes of the first round, the problem of finding solutions of 2-round connectors is converted to that of solving a system of linear equations. When linearization is applied to the first two rounds, 3-round connectors become possible. However, due to the quick reduction in the degree of freedom caused by linearization, the connector succeeds only when the 3-round differential trails satisfy some additional conditions. We develop dedicated strategies for searching differential trails and find that such special differential trails indeed exist. To summarize, we obtain the first real collisions on six instances, including three round-reduced instances of SHA-3, namely 5-round SHAKE128, SHA3-224 and SHA3-256, and three instances of the Keccak contest, namely Keccak[1440, 160, 5, 160], Keccak[640, 160, 5, 160] and Keccak[1440, 160, 6, 160], improving the number of practically attacked rounds by two. It is remarked that the work here is still far from threatening the security of the full 24-round SHA-3 family.
2020
JOFC
Reusable Fuzzy Extractors for Low-Entropy Distributions
Fuzzy extractors (Dodis et al., in Advances in cryptology—EUROCRYPT 2014, Springer, Berlin, 2014, pp 93–110) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, in Proceedings of the 11th ACM conference on computer and communications security, CCS, ACM, New York, 2004, pp 82–91) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person's biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated. The extractor works for binary strings with Hamming noise; it achieves computational security under the existence of digital lockers (Canetti and Dakdouk, in Advances in cryptology—EUROCRYPT 2008, Springer, Berlin, 2008, pp 489–508). It is simple and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates—lower than those supported by prior (nonreusable) constructions—assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. Structure beyond entropy is necessary to support distributions with low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, building a computationally secure and an information-theoretically secure construction for large-alphabet sources.
2020
JOFC
Revisiting the Wrong-Key-Randomization Hypothesis
Linear cryptanalysis is considered to be one of the strongest techniques in the cryptanalyst’s arsenal. In most cases, Matsui’s Algorithm 2 is used for the key recovery part of the attack. The success rate analysis of this algorithm is based on an assumption regarding the bias of a linear approximation for a wrong key, known as the wrong-key-randomization hypothesis. This hypothesis was refined by Bogdanov and Tischhauser to take into account the stochastic nature of the bias for a wrong key. We provide further refinements to the analysis of Matsui’s Algorithm 2 by considering sampling without replacement. This paper derives the distribution of the observed bias for wrong keys when sampling is done without replacement and shows that less data are required in this scenario. It also develops formulas for the success probability and the required data complexity when this approach is taken. The formulas predict that the success probability may reach a peak and then decrease as more pairs are considered. We provide a new explanation for this behavior and derive the conditions for encountering it. We empirically verify our results and compare them to previous work.
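A back-of-the-envelope way to see why sampling without replacement helps (this is the standard finite-population correction, not the paper's exact derivation): when $N$ of the $2^n$ possible plaintext–ciphertext pairs are sampled without replacement, the count of pairs satisfying a linear approximation is hypergeometric rather than binomial, and its variance shrinks by the factor

$$ \frac{2^n - N}{2^n - 1}, $$

so the empirical bias observed for a wrong key fluctuates less than the with-replacement analysis predicts and fewer pairs suffice for the same success probability; the paper derives the exact distributions and the resulting data-complexity formulas.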
2020
JOFC
Solving LPN Using Covering Codes
We present a new algorithm for solving the LPN problem. The algorithm has a similar form as some previous methods, but includes a new key step that makes use of approximations of random words to a nearest codeword in a linear code. It outperforms previous methods for many parameter choices. In particular, we can now solve the $(512,\frac{1}{8})$ LPN instance with complexity less than $2^{80}$ operations in expectation, indicating that cryptographic schemes like HB variants and LPN-C should increase their parameter size for 80-bit security.
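For reference, an LPN instance with parameters $(n, \eta)$, such as the $(512, \frac{1}{8})$ instance above (secret length $n = 512$, noise rate $\eta = \frac{1}{8}$), asks to recover the secret $\mathbf{s} \in \{0,1\}^n$ from samples of the form

$$ \bigl(\mathbf{a}_i,\; b_i = \langle \mathbf{a}_i, \mathbf{s}\rangle \oplus e_i\bigr), \qquad \mathbf{a}_i \leftarrow \{0,1\}^{n} \text{ uniformly at random}, \quad e_i \sim \mathrm{Ber}_{\eta}. $$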
2020
JOFC
Spin Me Right Round: Rotational Symmetry for FPGA-Specific AES (Extended Version)
The effort in reducing the area of AES implementations has largely been focused on application-specific integrated circuits (ASICs), in which a tower field construction leads to a small design of the AES S-box. In contrast, a naive implementation of the AES S-box has been the status quo on field-programmable gate arrays (FPGAs). A similar discrepancy holds for masking schemes (a well-known side-channel analysis countermeasure), which are commonly optimized to achieve minimal area in ASICs. In this paper, we demonstrate a representation of the AES S-box exploiting rotational symmetry which leads to a 50% reduction in the area footprint on FPGA devices. We present new AES implementations which improve on the state of the art and explore various trade-offs between area and latency. For instance, at the cost of a 4.5-fold increase in latency, one of our design variants requires 25% fewer look-up tables (LUTs) than the smallest known AES on Xilinx FPGAs, by Sasdrich and Güneysu at ASAP 2016. We further explore the protection of such implementations against side-channel attacks. We introduce a generic methodology for masking any n-bit Boolean function of degree t with protection order d. The methodology is exact for first-order and heuristic for higher orders. Its application to our new construction of the AES S-box allows us to improve previous results and introduce the smallest first-order masked AES implementation on Xilinx FPGAs to date.
2020
JOFC
Statistical Concurrent Non-Malleable Zero-Knowledge from One-Way Functions
Concurrent non-malleable zero-knowledge ($\mathrm{CNMZK}$) protocols are zero-knowledge protocols that provide security even when adversaries interact with multiple provers and verifiers simultaneously. It is known that $\mathrm{CNMZK}$ arguments for $\mathcal{NP}$ can be constructed in the plain model. Furthermore, it was recently shown that statistical $\mathrm{CNMZK}$ arguments for $\mathcal{NP}$ can also be constructed in the plain model. However, although the former requires only the existence of one-way functions, the latter requires the DDH assumption. In this paper, we construct a statistical $\mathrm{CNMZK}$ argument for $\mathcal{NP}$ assuming only the existence of one-way functions. The security is proven via black-box simulation, and the round complexity is $\mathsf{poly}(n)$. Under the existence of collision-resistant hash functions, the round complexity is reduced to $\omega(\log n)$, which is essentially optimal for black-box concurrent zero-knowledge protocols.
2020
JOFC
The IITM Model: A Simple and Expressive Model for Universal Composability
The universal composability paradigm allows for the modular design and analysis of cryptographic protocols. It has been widely and successfully used in cryptography. However, devising a coherent yet simple and expressive model for universal composability is, as the history of such models shows, highly non-trivial. For example, several problems, some severe, have been pointed out in the literature for the UC model. In this work, we propose a coherent model for universal composability, called the IITM model ("Inexhaustible Interactive Turing Machine"). A main feature of the model is that it is stated without a priori fixing irrelevant details, such as a specific way of addressing machines by session and party identifiers, a specific modeling of corruption, or a specific protocol hierarchy. In addition, we employ a very general notion of runtime. All reasonable protocols and ideal functionalities should be expressible based on this notion in a direct and natural way, and without tweaks, such as (artificial) padding of messages or (artificially) adding extra messages. Not least because of these features, the model is simple and expressive. The general results that we prove, such as composition theorems, also hold independently of how such details are fixed for concrete applications. Since the IITM model is inspired by other models for universal composability, in particular the UC model, and because of its flexibility and expressivity, results formulated in these models conceptually carry over directly to the IITM model.
2020
JOFC
Tight Security of Cascaded LRW2
At CRYPTO '12, Landecker et al. introduced the cascaded LRW2 (or CLRW2) construction and proved that it is a secure tweakable block cipher up to roughly $2^{2n/3}$ queries. Recently, Mennink has presented a distinguishing attack on CLRW2 in $2n^{1/2}2^{3n/4}$ queries. In the same paper, he discussed some non-trivial bottlenecks in proving a tight security bound, i.e., security up to $2^{3n/4}$ queries. Subsequently, he proved security up to $2^{3n/4}$ queries for a variant of CLRW2 using a 4-wise independent AXU assumption and the restriction that each tweak value occurs at most $2^{n/4}$ times. Moreover, his proof relies on a version of mirror theory which is yet to be publicly verified. In this paper, we resolve the bottlenecks in Mennink's approach and prove that the original CLRW2 is indeed a secure tweakable block cipher up to roughly $2^{3n/4}$ queries. To do so, we develop two new tools: first, we give a probabilistic result that provides an improved bound on the joint probability of some special collision events, and second, we present a variant of Patarin's mirror theory in tweakable permutation settings with a self-contained and concrete proof. Both these results are of generic nature and can be of independent interest. To demonstrate the applicability of these tools, we also prove tight security up to roughly $2^{3n/4}$ queries for a variant of DbHtS, called DbHtS-p, that uses two independent universal hash functions.
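For reference, the building block being cascaded is the LRW2 tweakable block cipher, which combines a block cipher $E$ with an (almost-XOR-universal) hash $h$; CLRW2 composes two independently keyed copies (standard definitions, as in Landecker et al.):

$$ \mathrm{LRW2}[E]_{k,h}(t, m) \;=\; E_k\bigl(m \oplus h(t)\bigr) \oplus h(t), \qquad \mathrm{CLRW2}_{k_1,k_2,h_1,h_2}(t,m) \;=\; \mathrm{LRW2}[E]_{k_2,h_2}\bigl(t,\, \mathrm{LRW2}[E]_{k_1,h_1}(t, m)\bigr), $$

with independent keys and hash keys, and the same tweak $t$ fed to both layers.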
2020
JOFC
Tightly Secure Hierarchical Identity-Based Encryption
We construct the first tightly secure hierarchical identity-based encryption (HIBE) scheme based on standard assumptions, which solves an open problem from Blazy, Kiltz, and Pan (CRYPTO 2014). At the core of our constructions is a novel randomization technique that enables us to randomize user secret keys for identities with flexible length. The security reductions of previous HIBEs lose at least a factor of Q, which is the number of user secret key queries. In contrast, the security loss of our schemes depends only on the security parameter. Our schemes are adaptively secure based on the Matrix Diffie-Hellman assumption, which is a generalization of standard Diffie-Hellman assumptions such as k-Linear. We have two tightly secure constructions, one with constant ciphertext size, and the other with tighter security at the cost of linear ciphertext size. Among other things, our schemes imply the first tightly secure identity-based signature scheme by a variant of the Naor transformation.
2020
JOFC
Topology-Hiding Computation on All Graphs
A distributed computation in which nodes are connected by a partial communication graph is called topology hiding if it does not reveal information about the graph beyond what is revealed by the output of the function. Previous results have shown that topology-hiding computation protocols exist for graphs of constant degree and logarithmic diameter in the number of nodes (Moran–Orlov–Richelson, TCC '15; Hirt et al., Crypto '16) as well as for other graph families, such as cycles, trees, and low-circumference graphs (Akavia–Moran, Eurocrypt '17), but the feasibility question for general graphs was open. In this work, we positively resolve the above open problem: we prove that topology-hiding computation is feasible for all graphs under either the decisional Diffie–Hellman or the quadratic residuosity assumption. Our techniques employ random or deterministic walks to generate paths covering the graph, upon which we apply the Akavia–Moran topology-hiding broadcast for chain graphs (paths). To prevent topology information from being revealed by the random walk, we design multiple graph-covering sequences that, together, are locally identical to receiving at each round a message from each neighbor and sending back a processed message from some neighbor (in a randomly permuted order).