Here you can see all recent updates to the IACR webpage.

7 August 2017

ePrint Report
CryptHOL: Game-based Proofs in Higher-order Logic
David A. Basin, Andreas Lochbihler, S. Reza Sefidgar

Game-based proofs are a well-established paradigm for structuring security arguments and simplifying their understanding. We present a novel framework, CryptHOL, for rigorous game-based proofs that is supported by mechanical theorem proving. CryptHOL is based on a new semantic domain with an associated functional programming language for expressing games. We embed our framework in the Isabelle/HOL theorem prover and, using the theory of relational parametricity, we tailor Isabelle’s existing proof automation to game-based proofs.

By basing our framework on a conservative extension of higher-order logic and providing sufficient automation support, the resulting proofs are trustworthy and comprehensible, and the framework is extensible and widely applicable. We evaluate our framework by formalizing different game-based proofs from the literature and comparing the results with existing formal-methods tools.

ePrint Report
Attribute-Based Group Homomorphic Encryption and Additively Homomorphic IBE
Michael Clear, Ciaran McGoldrick

Group Homomorphic Encryption (GHE), formally defined by Armknecht, Katzenbeisser and Peter, is a public-key encryption primitive where the decryption algorithm is a group homomorphism. Hence it supports homomorphic evaluation of a single algebraic operation such as modular addition or modular multiplication. Most classical homomorphic encryption schemes such as Goldwasser-Micali and Paillier are instances of GHE. In this work, we extend GHE to the attribute-based setting. We introduce and formally define the notion of Attribute-Based GHE (ABGHE) and explore its properties. Our main result is the construction of an Identity-Based Encryption (IBE) scheme supporting homomorphic addition modulo a poly-sized prime $e$, which is an instance of ABGHE. Our construction builds upon the IBE scheme of Boneh, LaVigne and Sabin (BLS). BLS relies on a hash function that maps identities to $e^{\text{th}}$ residues. However, there is no known way to securely instantiate such a function. Our construction extends BLS so that it can use a hash function that can be securely instantiated. We prove our scheme IND-ID-CPA secure under the (slightly modified) $e^{\text{th}}$ residuosity assumption in the random oracle model and show that it supports a (modular) additive homomorphism. By using multiple instances of the scheme with distinct primes and leveraging the Chinese Remainder Theorem, we can support homomorphic addition modulo a ``large'' (i.e. superpolynomial) integer, the first such IBE scheme. We also show that our scheme for $e > 2$ is anonymous assuming the hardness of deciding solvability of a special system of multivariate polynomial equations. Finally, we define a primitive for attribute-based group homomorphisms in the multi-key setting, introduce an important security property and present a generic construction of the primitive meeting this security property.
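The Chinese Remainder Theorem step can be illustrated with a small Python sketch (a toy calculation, not the paper's IBE scheme; the primes and values are made up): per-prime additive homomorphisms are recombined into addition modulo the product of the primes.

```python
# Toy illustration: combining per-prime additive homomorphisms into
# addition modulo a larger composite via the Chinese Remainder Theorem.
from math import prod

def crt_combine(residues, moduli):
    """Recover x mod prod(moduli) from x mod m_i (pairwise coprime m_i)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse
    return x % M

primes = [3, 5, 7]          # stand-ins for the scheme's distinct primes e
a, b = 17, 29               # two plaintexts
# Each "slot" holds the plaintext modulo one prime; addition happens per slot.
sum_residues = [(a + b) % p for p in primes]
assert crt_combine(sum_residues, primes) == (a + b) % prod(primes)
```

Each instance of the scheme handles one slot; the CRT recombination is a purely public post-processing step.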

ePrint Report
Twisting Lattice and Graph Techniques to Compress Transactional Ledgers
Rémi Géraud, David Naccache, Răzvan Roşie

Keeping track of financial transactions (e.g., in banks and blockchains) means keeping track of an ever-increasing list of exchanges between accounts. In fact, many of these transactions can be safely “forgotten”, in the sense that purging a set of them that compensate each other does not impact the network’s semantic meaning (e.g., the accounts’ balances). We call nilcatenation a collection of transactions having no effect on a network’s semantics. Such exchanges may be archived and removed, yielding a smaller, but equivalent ledger. Motivated by the computational and analytic benefits obtained from more compact representations of numerical data, we formalize the problem of finding nilcatenations, and propose detection methods based on graph and lattice-reduction techniques. Atop interesting applications of this work (e.g., decoupling of centralized and distributed databases), we also discuss the original idea of a “community-serving proof of work”: finding nilcatenations constitutes a proof of useful work, as the periodic removal of nilcatenations reduces the transactional graph’s size.
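The defining property of a nilcatenation can be captured in a few lines of Python (a hypothetical brute-force sketch, exponential in the ledger size; the paper's graph and lattice-reduction techniques are what make detection practical):

```python
from collections import defaultdict
from itertools import combinations

def is_nilcatenation(txs):
    """True if the transactions have zero net effect on every account.
    A transaction is a (sender, receiver, amount) triple."""
    net = defaultdict(int)
    for s, r, v in txs:
        net[s] -= v
        net[r] += v
    return all(v == 0 for v in net.values())

def find_nilcatenation(ledger):
    """Brute-force search for a removable subset (for illustration only)."""
    for k in range(2, len(ledger) + 1):
        for subset in combinations(ledger, k):
            if is_nilcatenation(subset):
                return list(subset)
    return None

ledger = [("A", "B", 5), ("B", "C", 2), ("B", "A", 5), ("C", "D", 1)]
print(find_nilcatenation(ledger))  # the two A<->B transfers cancel out
```

Removing the returned subset leaves every account balance unchanged, which is exactly why such transactions can be archived.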

Verifiable random functions are pseudorandom functions producing publicly verifiable proofs for their outputs, allowing for efficient checks of the correctness of their computation. In this work, we introduce a new computational hypothesis, the n-Eigen-Value assumption, which can be seen as a relaxation of the U_n-MDDH assumption, and prove its equivalence with the n-Rank assumption. Based on the newly introduced computational hypothesis, we build the core of a verifiable random function having an exponentially large input space and reaching adaptive security under a static assumption. The final construction achieves shorter public and secret keys compared to the existing schemes reaching the same properties.

ePrint Report
Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency
Ioannis Demertzis, Dimitrios Papadopoulos, Charalampos Papamanthou

We propose the first linear-space searchable encryption scheme with constant locality and sublogarithmic read efficiency, strictly improving the previously best known read efficiency bound (Asharov et al., STOC 2016) from $\Theta(\log N\log \log N)$ to $O(\log N/\log \log N)$, where $N$ is the size of the dataset. Our scheme is size-sensitive, meaning our bound is tight only for keyword lists whose sizes lie within the specific range $(N^{1-1/\log\log N},N/\log^2 N]$---outside this range the read efficiency improves to $O(\log^{2/3} N)$. For our construction we develop two techniques that can be of independent interest: New probability bounds for the offline two-choice allocation problem and a new I/O-efficient oblivious RAM with $o(\sqrt {n})$ bandwidth overhead and zero failure probability.

ePrint Report
Efficient reductions in cyclotomic rings - Application to R-LWE based FHE schemes
Jean-Claude Bajard, Julien Eynard, Anwar Hasan, Paulo Martins, Leonel Sousa, Vincent Zucca

With Fully Homomorphic Encryption (FHE), it is possible to process encrypted data without access to the private key. This has a wide range of applications, most notably the offloading of sensitive data processing. Most research on FHE has focused on improving its efficiency, namely by introducing schemes based on the Ring-Learning With Errors (R-LWE) problem, and techniques such as batching, which allows for the encryption of multiple messages in the same ciphertext. Much of the related research has focused on R-LWE relying on power-of-two cyclotomic polynomials. While it is possible to achieve efficient arithmetic with such polynomials, one cannot exploit batching. Herein, the efficiency of ring arithmetic underpinned by non-power-of-two cyclotomic polynomials is analysed and improved. Two methods for polynomial reduction are proposed, one based on the Barrett reduction and the other on a Montgomery representation. Speed-ups of up to 2.66 are obtained for the reduction operation on an i7-5960X processor when compared with a straightforward implementation of the Barrett reduction. Moreover, the proposed methods are exploited to enhance homomorphic multiplication in the FV and BGV encryption schemes, producing experimental speed-ups of up to 1.37.
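For readers unfamiliar with Barrett reduction, the classical integer form is sketched below (the paper adapts the idea to reductions in cyclotomic polynomial rings; this is the textbook integer version, not the proposed method): one precomputed quotient estimate replaces the runtime division.

```python
def barrett_setup(m, k):
    """Precompute mu = floor(2^(2k) / m) for a modulus m < 2^k."""
    return (1 << (2 * k)) // m

def barrett_reduce(x, m, k, mu):
    """Compute x mod m for 0 <= x < 2^(2k) without dividing at runtime."""
    q = (x * mu) >> (2 * k)   # estimate of x // m, too small by at most 2
    r = x - q * m
    while r >= m:             # at most two correction subtractions
        r -= m
    return r

m, k = 1000003, 20            # m fits below 2^20
mu = barrett_setup(m, k)
assert barrett_reduce(123456789, m, k, mu) == 123456789 % m
```

The polynomial analogue replaces the shifts by truncations of polynomial degrees, with the same "estimate quotient, then correct" structure.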

ePrint Report
sLiSCP: Simeck-based Permutations for Lightweight Sponge Cryptographic Primitives
Riham AlTawy, Raghvendra Rohit, Morgan He, Kalikinkar Mandal, Gangqiang Yang, Guang Gong

In this paper, we propose a family of lightweight cryptographic permutations called sLiSCP, with the sole aim of providing a realistic minimal design that suits a variety of lightweight device applications. More precisely, we argue that for such devices the chip area dedicated to security purposes should not only be consumed by an encryption or hashing algorithm, but should also provide as many cryptographic functionalities as possible. Our main contribution is the design of a lightweight permutation employing a 4-subblock Type-2 Generalized Feistel-like Structure (GFS) and round-reduced unkeyed Simeck with either 48- or 64-bit block length as the two round functions, thus resulting in two lightweight instances of the permutation, sLiSCP-192 and sLiSCP-256. We leverage the extensive security analysis of both Simeck (Simon-like functions) and Type-2 GFSs and present bounds against differential and linear cryptanalysis. In particular, we provide an estimate of the maximum differential probability of round-reduced Simeck and use it to bound the maximum expected differential/linear characteristic probability for our permutation. Because the iterated nature of the Simeck round function and the simple XOR and cyclic shift mixing layer of the GFS foster the propagation of long trails, the long trail strategy is adopted to provide tighter bounds on both characteristics. Moreover, we analyze sLiSCP against a wide range of distinguishing attacks, and accordingly claim that there exist no structural distinguishers for sLiSCP with complexity below $2^{b/2}$, where $b$ is the state size. We demonstrate how sLiSCP can be used as a unified round function in the duplex sponge construction to build (authenticated) encryption and hashing functionalities.
The parallel hardware implementation area of the unified duplex mode of sLiSCP-192 (resp. sLiSCP-256) in CMOS $65\,nm$ ASIC is 2289 (resp. 3039) GEs with a throughput of 29.62 (resp. 44.44) kbps, and their areas in CMOS $130\, nm$ are 2498 (resp. 3319) GEs.
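The 4-subblock Type-2 GFS skeleton can be sketched generically (a hypothetical wiring with a toy 16-bit round function standing in for round-reduced Simeck; sLiSCP's actual subblock sizes, round function, and step constants differ):

```python
def f(x):
    """Toy 16-bit Simeck-like round function (AND, rotations, XOR)."""
    rot = lambda v, r: ((v << r) | (v >> (16 - r))) & 0xFFFF
    return (rot(x, 5) & x) ^ rot(x, 1) ^ 0x1234

def gfs_round(state):
    """One 4-subblock Type-2 GFS round: apply f to the even subblocks,
    XOR the results into their odd neighbours, then rotate subblocks."""
    x0, x1, x2, x3 = state
    return (x1 ^ f(x0), x2, x3 ^ f(x2), x0)

def gfs_round_inv(state):
    """The round is invertible for any function f (f itself need not be
    invertible), which is what makes the construction a permutation."""
    y0, y1, y2, y3 = state
    return (y3, y0 ^ f(y3), y1, y2 ^ f(y1))

s = (0x0123, 0x4567, 0x89AB, 0xCDEF)
assert gfs_round_inv(gfs_round(s)) == s
```

Iterating `gfs_round` with per-step constants folded into `f` yields a keyless permutation in the style described above.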

ePrint Report
On the Tightness of Forward-Secure Signature Reductions
Michel Abdalla, Fabrice Benhamouda, David Pointcheval

In this paper, we revisit the security of factoring-based signature schemes built via the Fiat-Shamir transform and show that they can admit tighter reductions to certain decisional complexity assumptions such as the quadratic-residuosity, the high-residuosity, and the $\phi$-hiding assumptions. We do so by proving that the underlying identification schemes used in these schemes are a particular case of the lossy identification notion recently introduced by Abdalla et al. at Eurocrypt 2012. Next, we show how to extend these results to the forward-security setting based on ideas from the Itkis-Reyzin forward-secure signature scheme. Unlike the original Itkis-Reyzin scheme, our construction can be instantiated under different decisional complexity assumptions and has a much tighter security reduction. Moreover, we also show that the tighter security reductions provided by our proof methodology can result in concrete efficiency gains in practice, both in the standard and forward-security setting, as long as the use of stronger security assumptions is deemed acceptable. Finally, we investigate the design of forward-secure signature schemes whose security reductions are fully tight.

In their foundational paper on pseudorandom bit generation, Blum and Micali showed that the discrete logarithm problem could be solved efficiently given a ``magic box'' oracle that computes the most significant bit of the discrete logarithm with a slight advantage over guessing. This magic box can be realized on a quantum computer with a new, simplified variant of Shor's algorithm. The resulting combination of Blum and Micali's reduction and this new quantum magic box offers an intriguing hybrid approach to solving the discrete logarithm problem with a quantum computer. Because the only requirement on the quantum portion of the algorithm is that it provide an approximate estimate of a single bit of the discrete logarithm, the new algorithm may be easier to implement, more resilient to errors, and more amenable to optimization than previous approaches. Further analysis is needed to quantify the extent of these benefits in practice. The result applies to the discrete logarithm problem over both finite fields and elliptic curves. (The views expressed are my own and do not necessarily reflect those of my employer.)

ePrint Report
Binary Hash Tree based Certificate Access Management
Virendra Kumar, Jonathan Petit, William Whyte

We present a certificate access management system to support the USDOT's proposed rule on Vehicle-to-Vehicle (V2V) communications, Federal Motor Vehicle Safety Standard (FMVSS) No. 150. Our proposal, which we call Binary Hash Tree based Certificate Access Management (BCAM), eliminates the need for vehicles to have bidirectional connectivity with the Security Credential Management System (SCMS) for certificate update. BCAM significantly improves the ability of the SCMS to manage large-scale software and/or hardware compromise events. Vehicles are provisioned at the start of their lifetime with all the certificates they will need. However, certificates and corresponding private key reconstruction values are provided to the vehicle encrypted, and the keys to decrypt them are only made available to the vehicles shortly before the start of the validity periods of those certificates. Vehicles that are compromised can be effectively removed from the V2V system by preventing them from decrypting the certificates. We demonstrate that the system is feasible with a broadcast channel for decryption keys and other revocation information, even if that channel has a relatively low capacity.
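The binary-hash-tree idea can be sketched as follows (a hypothetical toy, not the deployed BCAM construction): decryption keys sit at the leaves of a tree derived by hashing, so broadcasting one internal node unlocks exactly the leaves below it while everything outside that subtree stays locked.

```python
import hashlib

def child(node, bit):
    """Derive a child key from a parent key as H(parent || bit)."""
    return hashlib.sha256(node + bytes([bit])).digest()

def leaf(root, path):
    """Walk a bit path from the root down to a leaf key."""
    node = root
    for b in path:
        node = child(node, b)
    return node

root = b"demo-root-secret"            # made-up secret for illustration
# Broadcasting the internal node at path (0,) lets receivers derive the
# leaves at (0,0) and (0,1), but nothing under path (1,).
node0 = child(root, 0)
assert leaf(root, (0, 1)) == child(node0, 1)
```

Revocation then amounts to broadcasting the set of subtree roots that covers all non-revoked devices, rather than per-device keys.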

Bernstein et al. have proposed a new permutation, Gimli, which aims to provide simple and performant implementations on a wide variety of platforms. One of the tricks used to make Gimli performant is that it processes data mostly in 96-bit columns, only occasionally swapping 32-bit words between them.

Here we show that this trick is dangerous by presenting a distinguisher for reduced-round Gimli. Our distinguisher takes the form of an attack on a simple and practical PRF that should be nearly 192-bit secure. Gimli has 24 rounds. Against 15.5 of those rounds, our distinguisher uses two known plaintexts, takes about $2^{64}$ time and uses enough memory for a set with $2^{64}$ elements. Against 19.5 rounds, the same attack uses three non-adaptively chosen plaintexts, and uses twice as much memory and about $2^{128}$ time. Against 22.5 rounds, it requires about $2^{138.5}$ work, $2^{129}$ bits of memory and $2^{10.5}$ non-adaptively chosen plaintexts. The same attack would apply to 23.5 rounds if Gimli had more rounds.

Our attack does not use the structure of the SP-box at all, other than that it is invertible, so there may be room for improvement. On the bright side, our toy PRF puts keys and data in different positions than a typical sponge mode would do, so the attack might not work against sponge constructions.

As an invited speaker at the ACISP 2017 conference, Dongxi Liu recently introduced a new lattice-based encryption scheme (joint work with Li, Kim and Nepal) designed for lightweight IoT applications, and announced plans to submit it to the NIST post-quantum competition. The new scheme is based on a variant of standard LWE called Compact-LWE, but is claimed to achieve high security levels in considerably smaller dimensions than usual lattice-based schemes. In fact, the proposed parameters, allegedly suitable for 138-bit security, involve the Compact-LWE assumption in dimension only 13.

In this note, we show that this particularly aggressive choice of parameters fails to achieve the stated security level. More precisely, we show that ciphertexts in the new encryption scheme can be decrypted using the public key alone with >99.9% probability in a fraction of a second on a standard PC, which is not quite as fast as legitimate decryption, but not too far off.

ePrint Report
Dynamic Searchable Public-Key Ciphertexts with Fast Performance and Practical Security
Peng Xu, Xia Gao, Wei Wang, Willy Susilo, Qianhong Wu, Hai Jin

Public-key encryption with keyword search (PEKS) allows a sender to generate keyword-searchable ciphertexts using a receiver's public key and upload them to a server. Upon receiving a keyword-search trapdoor from the receiver, the server finds all matching ciphertexts. Due to the characteristics of public-key encryption, PEKS is inherently suited to applications with numerous senders. Hence, PEKS is a well-known method for achieving secure keyword search over encrypted email systems. However, we find that, even without a keyword-search trapdoor, the traditional notion of PEKS still allows the server a significant advantage in distinguishing ciphertexts in practice. In other words, traditional PEKS cannot guarantee the well-recognized semantic security in practice. To solve this problem, this paper defines a new concept called dynamic searchable public-key encryption (DSPE). It can hide the relationships between keyword-searchable ciphertexts and their corresponding encrypted files, and guarantees semantic security in both theory and practice. In addition, it allows the server to delete the intended ciphertexts according to the receiver's requirements. We then construct a DSPE instance with provable semantic security in the random oracle model. In terms of performance, the proposed instance also has the advantage that it requires only sublinear complexity to determine all matching ciphertexts or to delete the intended ciphertexts. Finally, we experimentally demonstrate the practicability of the instance.

ePrint Report
Convolutional Neural Networks with Data Augmentation against Jitter-Based Countermeasures -- Profiling Attacks without Pre-Processing --
Eleonora Cagli, Cécile Dumas, Emmanuel Prouff

In the context of the security evaluation of cryptographic implementations, profiling attacks (aka Template Attacks) play a fundamental role. Nowadays the most popular Template Attack strategy consists in approximating the information leakages by Gaussian distributions. Nevertheless, this approach suffers from the difficulty of dealing with both trace misalignment and the high dimensionality of the data. This forces the attacker to perform critical preprocessing phases, such as the selection of the points of interest and the realignment of measurements. Some software and hardware countermeasures have been conceived exactly to create such misalignment. In this paper we propose an end-to-end profiling attack strategy based on Convolutional Neural Networks (CNNs): this strategy greatly simplifies the attack roadmap, since it requires neither a prior trace realignment nor a precise selection of points of interest. To significantly increase the performance of the CNN, we moreover propose to equip it with the data augmentation technique that is classical in other applications of Machine Learning. As validation, we present several experiments against traces misaligned by different kinds of countermeasures, including augmentation of the clock jitter effect in a secure hardware implementation on a modern chip. The excellent results achieved in these experiments show that the Convolutional Neural Network approach combined with data augmentation provides a very efficient alternative to state-of-the-art profiling attacks.
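A natural instance of the data-augmentation idea against jitter is to train on randomly shifted copies of each trace (a hedged sketch; the exact transformations used in the paper may differ):

```python
import random

def shift_trace(trace, max_shift, pad=0.0):
    """Randomly shift a side-channel trace left or right, padding the
    gap, to simulate the desynchronization introduced by clock jitter."""
    s = random.randint(-max_shift, max_shift)
    if s == 0:
        return list(trace)
    if s > 0:                                   # shift right
        return [pad] * s + list(trace[:-s])
    return list(trace[-s:]) + [pad] * (-s)      # shift left

def augment(traces, labels, copies, max_shift):
    """Expand a labelled training set with randomly shifted copies."""
    out_t, out_l = [], []
    for t, y in zip(traces, labels):
        for _ in range(copies):
            out_t.append(shift_trace(t, max_shift))
            out_l.append(y)
    return out_t, out_l

traces, labels = augment([[1.0, 2.0, 3.0, 4.0]], [7], copies=3, max_shift=1)
assert len(traces) == 3 and all(len(t) == 4 for t in traces)
```

The CNN then sees many alignments of the same labelled leakage, which discourages it from relying on absolute sample positions.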

1 August 2017

ePrint Report
Secure Storage with Replication and Transparent Deduplication
Iraklis Leontiadis, Reza Curtmola

We seek to answer the following question: To what extent can we deduplicate replicated storage? To answer this question, we design ReDup, a secure storage system that provides users with strong integrity, reliability, and transparency guarantees about data that is outsourced at cloud storage providers. Users store multiple replicas of their data at different storage servers, and the data at each storage server is deduplicated across users. Remote data integrity mechanisms are used to check the integrity of replicas. We consider a strong adversarial model, in which collusions are allowed between storage servers and also between storage servers and dishonest users of the system. A cloud storage provider (CSP) could store fewer replicas than agreed upon by contract, unbeknownst to honest users. ReDup defends against such adversaries by making replica generation time-consuming, so that a dishonest CSP cannot generate replicas on the fly when challenged by the users.

In addition, ReDup employs transparent deduplication, which means that users get a proof attesting to the deduplication level used for their files at each replica server, and are thus able to benefit from the storage savings provided by deduplication. The proof is obtained by aggregating individual proofs from replica servers, and has a constant size regardless of the number of replica servers. Our solution scales better than the state of the art and is provably secure under standard assumptions.

ePrint Report
Private Intersection-Sum Protocol with Applications to Attributing Aggregate Ad Conversions
Mihaela Ion, Ben Kreuter, Erhan Nergiz, Sarvar Patel, Shobhit Saxena, Karn Seth, David Shanahan, Moti Yung

In this work, we consider the Intersection-Sum problem: two parties hold datasets containing user identifiers, and the second party additionally has an integer value associated with each user identifier. The parties want to learn the number of users they have in common, and the sum of the associated integer values, but “nothing more”. We present a novel protocol tackling this problem using Diffie-Hellman style Private Set Intersection techniques together with Paillier homomorphic encryption. We prove security of our protocol in the honest-but-curious model. We also discuss applications for the protocol for attributing aggregate ad conversions. Finally, we present a variant of the protocol, which allows aborting if the intersection is too small, in which case neither party learns the intersection-sum.
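The Diffie-Hellman-style PSI component can be illustrated with a toy double-masking exchange (illustration only, with insecure parameters and no Paillier layer; the actual protocol additionally carries encrypted values so the sum can be computed):

```python
import hashlib

P = 2**61 - 1   # toy prime modulus; far too small/structured for real use

def h(x):
    """Toy hash-to-group: hash the identifier, then square it mod P."""
    d = int.from_bytes(hashlib.sha256(x.encode()).digest(), "big")
    return pow(d % P, 2, P)

def mask(items, key):
    return {pow(h(x), key, P) for x in items}

a_key, b_key = 123457, 765431         # each party's secret exponent
alice = ["ann", "bob", "carol"]
bob_ids = ["bob", "carol", "dave"]

# Each party masks its own identifiers and sends them over; the other
# side applies its key on top. Exponentiation commutes, so doubly
# masked values coincide exactly for common identifiers.
double_a = {pow(v, b_key, P) for v in mask(alice, a_key)}
double_b = {pow(v, a_key, P) for v in mask(bob_ids, b_key)}
print(len(double_a & double_b))       # intersection cardinality: 2
```

Neither party sees the other's raw identifiers, only masked group elements; the cardinality falls out of comparing the doubly masked sets.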

ePrint Report
SecReach: Secure Reachability Computation on Encrypted Location Check-in Data
Hanyu Quan, Boyang Wang, Iraklis Leontiadis, Ming Li, Yuqing Zhang

Reachability, which answers whether one person is reachable from another through a sequence of contacts within a period of time, is of great importance in many domains such as social behavior analysis. Recently, with the prevalence of various location-based services (LBSs), a great amount of spatiotemporal location check-in data is generated by individual GPS-equipped mobile devices and collected by LBS companies, which stimulates research on reachability queries over these location check-in datasets. Meanwhile, a growing trend is for LBS companies to use scalable and cost-effective clouds to collect, store, and analyze data, which makes it necessary to encrypt location check-in data before outsourcing due to privacy concerns. In this paper, for the first time, we propose a scheme, SecReach, to securely evaluate reachability queries on encrypted location check-in data by using somewhat homomorphic encryption (SWHE). We prove that our scheme is secure against a semi-honest cloud server. We also present a proof-of-concept implementation using the state-of-the-art SWHE library (i.e., HElib), which shows the efficiency and feasibility of our scheme.
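In the clear, the reachability predicate being evaluated can be sketched as a single scan over time-ordered contacts (a hypothetical plaintext baseline; the paper's contribution is evaluating such a predicate under SWHE):

```python
def reachable(contacts, src, dst, t_start, t_end):
    """contacts: iterable of (time, a, b) meetings.  Returns True if dst
    is reachable from src through a chain of contacts with
    non-decreasing times inside the window [t_start, t_end]."""
    infected = {src}                      # everyone src can have reached
    for t, a, b in sorted(contacts):      # process contacts in time order
        if not (t_start <= t <= t_end):
            continue
        if a in infected or b in infected:
            infected |= {a, b}
    return dst in infected

contacts = [(1, "u", "v"), (3, "v", "w"), (2, "w", "x")]
assert reachable(contacts, "u", "w", 0, 5)      # u -(t=1)- v -(t=3)- w
assert not reachable(contacts, "u", "x", 0, 5)  # w meets x before t=3
```

The time-ordered scan is what makes the predicate respect causality: a contact can only propagate reachability forward in time.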

Intel SGX enclaves provide hardware-enforced confidentiality and integrity guarantees for running pure computations (i.e., OS-level side-effect-free code) in the cloud environment. In addition, SGX remote attestation enables enclaves to prove that a claimed enclave is indeed running inside genuine SGX hardware and not some (adversary-controlled) SGX simulator.

Since cryptographic protocols do not compose well, especially when run concurrently, SGX remote attestation is only a necessary pre-condition for securely instantiating an enclave. In practice, one needs to analyze all the different interacting enclaves as a single protocol and make sure that no sub-computation of the protocol can be simulated outside of the enclave. In this paper we describe protocol design problems under (a) sequential composition, (b) concurrent composition, and (c) enclave state malleability that must be taken into account while designing new enclaves. We analyze the Intel-provided EPID Provisioning and Quoting enclaves and report our (largely positive) findings. We also provide details about how SGX uses EPID group signatures and report (largely negative) results about claimed anonymity guarantees.

ePrint Report
Faster Bootstrapping with Multiple Addends
TanPing ZHOU, XiaoYuan YANG, LongFei LIU, Wei ZHANG, YiTao DING

As an important cryptographic primitive in cloud computing and outsourced computation, fully homomorphic encryption (FHE) is an active area of modern cryptography. However, the efficiency of FHE has been a bottleneck impeding its application. According to Gentry's blueprint, bootstrapping, which is used to decrease ciphertext errors, is the most important process in FHE. However, bootstrapping is also the most expensive process, limiting the efficiency of the whole system. First, we notice that hundreds of serial homomorphic additions take most of the time of bootstrapping. We make use of properties of Boolean circuits to reduce the number of serial homomorphic additions by one third, and thus construct an efficient FHE scheme with bootstrapping in 12 ms. Second, the most expensive parts of our bootstrapping, EHCM and the serial homomorphic additions, can be parallelized, which may accelerate bootstrapping further. Finally, we find a more efficient combination of parameters for our scheme. As a result, our security level is 128 bits and the correctness is improved, compared with the TFHE scheme from ASIACRYPT 2016. Experiments show that the running time of our bootstrapping is 12 ms, which is only 23 percent of TFHE's, and is less than CGGI17's.

Non-malleable commitments are a central cryptographic primitive that guarantee security against man-in-the-middle adversaries, and their exact round complexity has been a subject of great interest. Pass (TCC 2013, CC 2016) proved that non-malleable commitments with respect to commitment are impossible to construct in less than three rounds, via black-box reductions to polynomial hardness assumptions. Obtaining a matching positive result has remained an open problem so far.

While three-round constructions of non-malleable commitments have been achieved, beginning with the work of Goyal, Pandey and Richelson (STOC 2016), current constructions require super-polynomial assumptions.

In this work, we settle the question of whether three-round non-malleable commitments can be based on polynomial hardness assumptions. We give constructions based on polynomial hardness of Decisional Diffie-Hellman assumption or Quadratic Residuosity or Nth Residuosity, together with ZAPs. Our protocols also satisfy concurrent non-malleability.

ePrint Report
Decoding Generalized Reed-Solomon Codes and Its Application to RLCE Encryption Scheme
Yongge Wang

This paper compares the efficiency of various algorithms for implementing the public-key encryption scheme RLCE on 64-bit CPUs. By optimizing various algorithms for polynomial and matrix operations over finite fields, we obtained several interesting (or even surprising) results. For example, it is well known (e.g., Moenck 1976) that Karatsuba's algorithm outperforms the classical polynomial multiplication algorithm from degree 15 onwards (in practice, Karatsuba's algorithm only outperforms the classical algorithm from degree 35 onwards). Our experiments show that the 64-bit optimized Karatsuba algorithm only outperforms the 64-bit optimized classical polynomial multiplication algorithm for polynomials of degree 115 and above over the finite field $GF(2^{10})$. The second interesting (surprising) result shows that the 64-bit optimized Chien search algorithm outperforms all other 64-bit optimized polynomial root-finding algorithms, such as BTA and FFT, for polynomials of all degrees over the finite field $GF(2^{10})$. The third interesting (surprising) result shows that the 64-bit optimized Strassen matrix multiplication algorithm only outperforms the 64-bit optimized classical matrix multiplication algorithm for matrices of dimension 750 and above over the finite field $GF(2^{10})$. It should be noted that the existing literature and practice recommend Strassen's matrix multiplication algorithm for matrices of dimension 40 and above. All experiments were done on a 64-bit MacBook Pro with an i7 CPU, using a single thread. The reported results should be applicable to CPUs with 64-bit or larger word sizes; for CPUs with 32-bit or smaller word sizes, these results may not apply. The source code and library for the algorithms covered in this paper will be available at http://quantumca.org/.
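The crossover behavior discussed above can be made concrete with a minimal sketch of the two multiplication algorithms, with a tunable crossover degree. Coefficients are plain integers for readability; the paper's implementation works over $GF(2^{10})$ with 64-bit optimizations, which is what shifts the crossover to degree 115. All names here are illustrative.

```python
def poly_mul_classical(a, b):
    """Schoolbook multiplication: O(n^2) coefficient products."""
    if not a or not b:
        return []
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_sub(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_mul_karatsuba(a, b, cutoff=16):
    """Karatsuba: three half-size products instead of four; falls back
    to the classical method below the crossover degree `cutoff`."""
    if not a or not b:
        return []
    n = max(len(a), len(b))
    if n <= cutoff:
        return poly_mul_classical(a, b)
    m = n // 2
    a0, a1 = a[:m], a[m:]
    b0, b1 = b[:m], b[m:]
    z0 = poly_mul_karatsuba(a0, b0, cutoff)
    z2 = poly_mul_karatsuba(a1, b1, cutoff)
    mid = poly_mul_karatsuba(poly_add(a0, a1), poly_add(b0, b1), cutoff)
    z1 = poly_sub(poly_sub(mid, z0), z2)  # the "middle" coefficients
    res = [0] * (len(a) + len(b) - 1)
    for i, c in enumerate(z0):
        res[i] += c
    for i, c in enumerate(z1):
        res[i + m] += c
    for i, c in enumerate(z2):
        res[i + 2 * m] += c
    return res
```

Benchmarking these two functions against each other while varying `cutoff` is exactly the kind of crossover measurement the paper performs, only at the level of machine-optimized field arithmetic.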

Ridge regression is an algorithm that takes as input a large number of data points and finds the best-fit linear curve through these points. It is a building block for many machine-learning operations. This report presents a system for privacy-preserving ridge regression. The system outputs the best-fit curve in the clear, but exposes no other information about the input data.

This problem was elegantly addressed by Nikolaenko et al. (S\&P 2013). They suggest an approach that combines homomorphic encryption and Yao garbled circuits. The solution presented in this report involves only homomorphic encryption. This improves performance, since the Yao circuits were the main bottleneck in the previous solution.
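For reference, the in-the-clear computation being protected is the closed-form ridge solution w = (X^T X + lam*I)^{-1} X^T y. The sketch below (plain NumPy, names illustrative, not the report's protocol) shows why the structure suits homomorphic evaluation: A and b are sums over data points, so a hypothetical additively homomorphic scheme can accumulate them without revealing individual points, leaving only a small d-by-d system to solve.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)  # d x d, independent of the number of points
    b = X.T @ y                    # aggregate over all data points
    return np.linalg.solve(A, b)   # coefficients of the best-fit curve
```

With lam = 0 this reduces to ordinary least squares; a positive lam shrinks the coefficients, which is what makes ridge regression robust on noisy data.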

ePrint Report
Revisiting Difficulty Control for Blockchain Systems
Dmitry Meshkov, Alexander Chepurnoy, Marc Jansen

The Bitcoin whitepaper states that the security of the system is guaranteed as long as honest miners control more than half of the total computational power. The whitepaper assumes a static difficulty, so that it is equally hard to solve a cryptographic proof-of-work puzzle at any given moment in the system's history. However, the real Bitcoin network uses an adaptive difficulty adjustment mechanism.

In this paper we introduce and analyze a new kind of attack on the mining difficulty retargeting function used in Bitcoin. A malicious miner can increase his mining profits with this attack, which we name the coin-hopping attack; as a side effect, the average delay between blocks increases.

We propose an alternative difficulty adjustment algorithm that reduces the incentive to perform coin-hopping and also improves the stability of inter-block delays. Finally, we evaluate the presented approach and show that the new algorithm performs better than Bitcoin's original one.
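For context, Bitcoin's current retargeting rule, the kind of function the coin-hopping attack targets, can be sketched as follows. The constants are Bitcoin mainnet's; this simplification is not the alternative algorithm proposed in the paper.

```python
TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds
EPOCH_LENGTH = 2016                  # blocks between difficulty updates

def retarget(old_difficulty, actual_timespan):
    """New difficulty after an epoch that took `actual_timespan` seconds."""
    # Bitcoin clamps the adjustment so difficulty moves at most 4x per epoch.
    clamped = max(TARGET_TIMESPAN // 4,
                  min(TARGET_TIMESPAN * 4, actual_timespan))
    return old_difficulty * TARGET_TIMESPAN / clamped
```

Because the correction only reacts after a full epoch, a miner who moves hash power in and out of the network sees the difficulty lag behind the true hash rate, which is the lag such attacks exploit.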

31
July
2017

The LLL algorithm (from Lenstra, Lenstra and Lov\'{a}sz) and its generalization BKZ (from Schnorr and Euchner) are widely used in cryptanalysis, especially for lattice-based cryptography.
Precisely understanding their behavior is crucial for deriving appropriate key sizes for cryptographic schemes subject to lattice-reduction attacks. Current models, e.g. the Geometric Series Assumption and the Chen-Nguyen BKZ simulator, have provided a decent first-order analysis of the behavior of LLL and BKZ. However, they focus only on the average behavior and are not perfectly accurate. In this work, we initiate a second-order analysis of this behavior. We confirm and quantify discrepancies between models and experiments, in particular in the head and tail regions, and study their consequences. We also provide statistics on the variation around the mean and on correlations, and study their impact. While mostly based on experiments, by pointing out and quantifying unaccounted-for phenomena, our study lays the ground for a theoretical and predictive understanding of LLL and BKZ performance at the second order.
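For concreteness, the Geometric Series Assumption mentioned above predicts a straight-line profile for the log Gram-Schmidt norms of a reduced basis; the head and tail discrepancies studied in this work are deviations from exactly this line. A minimal sketch (the root-Hermite factor value is illustrative, not from the paper):

```python
import math

def gsa_profile(n, log_vol, delta=1.012):
    """Predicted log ||b_i*|| for i = 0..n-1 under the GSA.

    The first vector satisfies ||b_1|| = delta^n * vol^(1/n); the
    geometric decay ratio is the unique one consistent with the lattice
    volume, i.e. the log-norms must sum to log_vol.
    """
    log_b1 = n * math.log(delta) + log_vol / n
    log_alpha = -2.0 * n / (n - 1) * math.log(delta)
    return [log_b1 + i * log_alpha for i in range(n)]
```

Plotting experimentally measured log-norms against this line is how the head and tail discrepancies become visible.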

ePrint Report
Success Probability of Multiple/Multidimensional Linear Cryptanalysis Under General Key Randomisation Hypotheses
Subhabrata Samajder, Palash Sarkar

This work considers the statistical analysis of attacks on block ciphers that use several linear approximations. A general and unified approach is adopted. To this end, general key randomisation hypotheses for multidimensional and multiple linear cryptanalysis are introduced. Expressions for the success probability in terms of the data complexity and the advantage are obtained under these general key randomisation hypotheses, for both multidimensional and multiple linear cryptanalysis, and in the settings where the plaintexts are sampled with or without replacement. Particularising to the standard/adjusted key randomisation hypotheses gives rise to success probabilities in 16 different cases, of which expressions for the success probability have previously been reported in only five. Even in these five cases, the expressions we obtain are more general than those previously known. A crucial step in the analysis is the derivation of the distributions of the underlying test statistics. While we carry out the analysis formally to the extent possible, certain inherently heuristic assumptions need to be made. In contrast to previous works, which have made such assumptions implicitly, we carefully highlight them and discuss why they are unavoidable. Finally, we provide a complete characterisation of the dependence of the success probability on the data complexity.

ePrint Report
Evidence-Based Trust Mechanism Using Clustering Algorithms for Distributed Storage Systems
Giulia Traverso, Carlos Garcia Cordero, Mehrdad Nojoumian, Reza Azarderakhsh, Denise Demirel, Sheikh Mahbub Habib, Johannes Buchmann

In distributed storage systems, documents are shared among multiple Cloud providers and stored within their respective storage servers. In social secret sharing-based distributed storage systems, shares of the documents are allocated according to the trustworthiness of the storage servers. This paper proposes a trust mechanism using machine learning techniques to compute evidence-based trust values. Our mechanism mitigates the effect of colluding storage servers. More precisely, it becomes possible to detect unreliable evidence and establish countermeasures in order to discourage the collusion of storage servers. Furthermore, this trust mechanism is applied to the social secret sharing protocol AS$^3$, showing that this new evidence-based trust mechanism enhances the protection of the stored documents.

The Number Theoretic Transform (NTT) is the time-critical function required by cryptographic protocols based on the Ring Learning With Errors problem (RLWE), a popular choice for post-quantum cryptography.
Here we apply a simple methodology to convert the NTT and its inverse from a mathematically correct (but side-channel vulnerable) description to an efficient, constant-time, side-channel resistant version.
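For reference, a textbook recursive NTT looks as follows. This sketch is emphatically not constant-time (and Python offers no such guarantees); it only fixes the functional specification that a hardened implementation must match. The parameters q = 257 and root = 64 used in testing are illustrative, not the RLWE parameters of any particular scheme.

```python
def ntt(a, q, root):
    """Forward Cooley-Tukey NTT of the coefficient list a, modulo q.

    Requires q prime with q ≡ 1 (mod len(a)) and `root` a primitive
    len(a)-th root of unity mod q.
    """
    n = len(a)
    if n == 1:
        return a[:]
    # Split into even/odd coefficients; root^2 drives the half-size transforms.
    even = ntt(a[0::2], q, root * root % q)
    odd = ntt(a[1::2], q, root * root % q)
    out = [0] * n
    w = 1
    for i in range(n // 2):
        t = w * odd[i] % q
        out[i] = (even[i] + t) % q
        out[i + n // 2] = (even[i] - t) % q  # uses root^(n/2) = -1 mod q
        w = w * root % q
    return out
```

A constant-time version would, among other things, replace the recursion with a fixed iteration pattern and avoid any secret-dependent branching or memory access.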

ePrint Report
Proposal of primitive polynomials for Linux kernel PRNG
David Fontaine, Olivier Vivolo

The polynomials defining the LFSRs of the Linux kernel PRNG are irreducible but not primitive. As a result, the space of numbers generated by these LFSRs does not fill the entire available space. In this paper we propose primitive polynomials which increase the space of random numbers generated by these LFSRs by a factor of 3. Neither the polynomials used in the current implementation of the PRNG nor the observation presented here lead to a practical attack on the PRNG.
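The distinction at issue can be checked directly for small degrees: a degree-n feedback polynomial over GF(2) is primitive exactly when the LFSR it defines cycles through all 2^n - 1 nonzero states, while a merely irreducible one cycles early. A brute-force sketch (the tap masks in the test are small textbook examples, not the kernel's polynomials):

```python
def lfsr_period(taps, n):
    """Period of an n-bit Fibonacci LFSR: the feedback bit is the parity
    of the tapped state bits and is shifted in at the top."""
    state = start = 1
    period = 0
    while True:
        feedback = bin(state & taps).count("1") & 1
        state = (state >> 1) | (feedback << (n - 1))
        period += 1
        if state == start:
            return period

def is_primitive_mask(taps, n):
    # Maximal period <=> the feedback polynomial is primitive.
    return lfsr_period(taps, n) == 2 ** n - 1
```

For instance, taps 0b0011 on 4 bits reach the full period 15, while 0b1111, which corresponds to an irreducible but non-primitive polynomial, cycles after only 5 states; the kernel's polynomials exhibit the same kind of gap at their (much larger) degrees.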

ePrint Report
AS$^3$: Adaptive Social Secret Sharing for Distributed Storage Systems
Giulia Traverso, Denise Demirel, Sheikh Mahbub Habib, Johannes Buchmann

Distributed storage allows outsourcing a document to the cloud such that multiple users can easily access the file. The protection of the stored document relies on secret sharing, which generates and distributes shares of the document to the storage servers. However, the users have to trust that a certain number of storage servers behave honestly and neither lose (retrievability) nor reveal (confidentiality) the document. To address this, so-called social secret sharing schemes were developed that allow adjusting the distribution of shares according to the experience made with the involved storage servers. In this work, we provide a framework called AS$^3$ for building social secret sharing schemes based on dynamic secret sharing. The resulting protocol has more freedom in adjusting the parameters of the share distribution and therefore leads to more efficient and accurate solutions as well as optimal storage consumption. Furthermore, we provide measures to detect and prevent the document being lost or accidentally revealed to individual storage servers. We also demonstrate how to compute trust values for storage servers and how to initialize trust values for newcomers, and provide a proof-of-concept implementation.
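The underlying building block can be sketched as plain Shamir secret sharing over a prime field; a social secret sharing scheme such as AS$^3$ then adjusts how many shares each server holds according to its trust value. The field size and threshold below are illustrative.

```python
import random

P = 2**127 - 1  # prime modulus for the share field

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than t shares reveal nothing about the secret, which is what lets a dynamic scheme safely redistribute shares as server trust values change.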

ePrint Report
Dynamic and Verifiable Hierarchical Secret Sharing
Giulia Traverso, Denise Demirel, Johannes Buchmann

In this work we provide a framework for dynamic secret sharing and present the first dynamic and verifiable hierarchical secret sharing scheme based on Birkhoff interpolation. Since the scheme is dynamic, it allows adding and removing shareholders, renewing shares, and modifying the conditions for accessing the message, all without reconstructing the distributed message. Furthermore, each shareholder can verify the shares it receives during these algorithms, protecting itself against malicious dealers and shareholders. While these algorithms were already available for classical secret sharing based on Lagrange interpolation, corresponding techniques for schemes based on Birkhoff interpolation were missing. Note that Birkhoff interpolation is currently the only technique that allows constructing hierarchical secret sharing schemes that are efficient and provide shares of equal size for all shareholders in the hierarchy. Thus, our scheme is an important contribution to hierarchical secret sharing.