IACR News
Here you can see all recent updates to the IACR webpage.
25 January 2016
Yohei Watanabe, Goichiro Hanaoka, Junji Shikata
Data stored in cloud storage sometimes requires long-term security due to its sensitivity (e.g., genome data), and therefore, it also requires flexible access control for handling entities who can use the data. Broadcast encryption can partially provide such flexibility by specifying privileged receivers so that only they can decrypt a ciphertext. However, once privileged receivers are specified, they can no longer be dynamically added and/or removed. In this paper, we propose a new type of broadcast encryption which provides long-term security and appropriate access control, which we call unconditionally secure revocable-storage broadcast encryption (RS-BE). In RS-BE, the privileged receivers of a ciphertext can be dynamically updated without revealing any information on the underlying plaintext. Specifically, we define a model and security notion for RS-BE, derive tight lower bounds on the sizes of secret keys required for secure RS-BE, and propose a construction of RS-BE which meets all of these bounds. Our lower bounds can be applied to traditional broadcast encryption. Furthermore, to detect an improper update, we consider security against modification attacks on a ciphertext, and present a concrete construction secure against this type of attack.
Remi Bricout, Sean Murphy, Kenneth G. Paterson, Thyla van der Merwe
We explore the use of the Mantin biases (Mantin, Eurocrypt 2005) to recover plaintexts from RC4-encrypted traffic. We provide a more fine-grained analysis of these biases than in Mantin's original work. We show that, in fact, the original analysis was incorrect in certain cases: the Mantin biases are sometimes non-existent, and sometimes stronger than originally predicted. We then show how to use these biases in a plaintext recovery attack. Our attack targets two unknown bytes of plaintext that are located close to sequences of known plaintext bytes, a situation that arises in practice when RC4 is used in, for example, TLS. We provide a statistical framework that enables us to make predictions about the performance of this attack and its variants. We then extend the attack using standard dynamic programming techniques to tackle the problem of recovering longer plaintexts, a setting of practical interest in recovering HTTP session cookies and user passwords that are protected by RC4 in TLS. We perform experiments showing that we can successfully recover 16-byte plaintexts with 80% success rate using $2^{31}$ ciphertexts, an improvement over previous attacks.
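For context, RC4 encrypts by XORing the plaintext with a keystream produced by its key-scheduling and pseudo-random generation algorithms; the Mantin biases exploited above are statistical deviations in that keystream. A minimal sketch of the cipher itself (not the attack code):

```python
# Minimal RC4 (KSA + PRGA), shown only to illustrate where keystream
# biases arise; this is the standard cipher, not the paper's attack.

def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Encryption is XOR with the keystream, so a keystream bias at some
    # position directly leaks information about the plaintext byte there.
    ks = rc4_keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))
```

Because encryption is a plain XOR, collecting many ciphertexts of the same plaintext under different keys lets an attacker vote on plaintext bytes using the known keystream biases.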
Raphael Bost, Pierre-Alain Fouque, David Pointcheval
Symmetric Searchable Encryption (SSE) is a very efficient and practical way for data owners to outsource storage of a database to a server while providing privacy guarantees. Such SSE schemes enable clients to encrypt their database while still performing queries to retrieve documents matching some keyword. This functionality is useful for securing cloud storage, and efficient schemes have been designed in the past. However, security against malicious servers has been overlooked in most previous constructions, which only addressed security against honest-but-curious servers.
In this paper, we study and design the first efficient SSE schemes provably secure against malicious servers. First, we give lower bounds on the complexity of such verifiable SSE schemes. Then, we construct generic solutions matching these bounds using efficient verifiable data structures. Finally, we modify an existing SSE scheme that also provides forward secrecy of search queries, and make it provably secure against active adversaries without increasing the computational complexity of the original scheme.
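As a toy illustration of the verifiability goal (not the paper's construction), a client can detect a tampered search result by keeping a MAC key and rechecking a tag over each keyword's result set. All class and method names below are invented for illustration:

```python
import hashlib
import hmac
import os

# Toy verifiable index: the client keeps a secret MAC key and stores a
# tag alongside each keyword's result set; a malicious server that alters
# a result set cannot forge a matching tag. Illustrative sketch only.

class ToyVerifiableIndex:
    def __init__(self):
        self.key = os.urandom(32)   # stays on the client
        self.table = {}             # stands in for server-side storage

    def _tag(self, keyword, ids):
        msg = keyword.encode() + b"|" + ",".join(sorted(ids)).encode()
        return hmac.new(self.key, msg, hashlib.sha256).digest()

    def upload(self, keyword, ids):
        self.table[keyword] = (sorted(ids), self._tag(keyword, ids))

    def search(self, keyword):
        ids, tag = self.table[keyword]   # as returned by the untrusted server
        if not hmac.compare_digest(tag, self._tag(keyword, ids)):
            raise ValueError("server returned a tampered result")
        return ids

idx = ToyVerifiableIndex()
idx.upload("keyword", ["doc1", "doc3"])
assert idx.search("keyword") == ["doc1", "doc3"]
```

A real verifiable SSE scheme must additionally hide the keywords and handle updates; this sketch only captures result integrity.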
Christina Garman, Matthew Green, Ian Miers
Decentralized ledger-based currencies such as Bitcoin provide a means to construct payment systems without requiring a trusted bank. Removing this trust assumption comes at the significant cost of transaction privacy. A number of academic works have sought to improve the privacy offered by ledger-based currencies using anonymous electronic cash (e-cash) techniques. Unfortunately, this strong degree of privacy creates new regulatory concerns, since the new private transactions cannot be subject to the same controls used to prevent individuals from conducting illegal transactions such as money laundering. We propose an initial approach to addressing this issue by adding privacy preserving policy-enforcement mechanisms that guarantee regulatory compliance, allow selective user tracing, and admit tracing of tainted coins (e.g., ransom payments). To accomplish this new functionality we also provide improved definitions for Zerocash and, of independent interest, an efficient construction for simulation sound zk-SNARKs.
Amir Herzberg and Yehonatan Kfir
We present a topology-based key setup protocol (ToBKeS) to facilitate the plug-and-play deployment of cryptography in networks with known topology. The protocol uses the topology to authenticate messages of devices.
ToBKeS assumes that at least one device, the authentication server, is initialized with the known network topology, has a known public key, and shares secret keys with some of the other devices in the network.
ToBKeS eases the adoption of security by eliminating the need to manually set up every device with its own private key. Furthermore, ToBKeS limits the impact of key exposures by ensuring both perfect forward secrecy and proactive key refresh, re-establishing security after exposure.
We analyze the properties of the ToBKeS protocol and show sufficient topology conditions for its applicability. In addition, we prove its security against a powerful attacker that is able to control the routes of the network, as well as against an attacker that is able to control some of the devices in the network.
Muhammad Nadeem
Recently, the problem of quantum position-verification has been extensively analyzed in the formal setting, but all existing single-round position-verification schemes are insecure. We propose here a different notion for position-verification where distant verifiers determine the actions of the prover through quantum non-local correlations generated by local measurements at the prover's site: instead of sending a challenge encoded over flying qubits, one of the verifiers teleports the challenge to the prover, while the prover is required to perform single-qubit measurements as well as Bell state measurements and return the outcomes. This allows controlling the prover's actions and binds him/her to receive the challenge from one of the verifiers, measure in a known basis, teleport to another verifier, and return the measurement outcomes to all verifiers simultaneously. Here, the no-signaling principle assures that any group of dishonest provers, not at the position to be verified, cannot simulate their actions with the prover who is supposed to be at the specified position. The scheme enables verifiers to trace the origin of received information and hence identify dishonest provers with very high probability, greater than $1-1/2^n$, where n is the number of entangled pairs used.
Dimitrios Poulakis
We prove that a system of linear congruences of a particular form has at most one solution below a certain bound, which can be computed efficiently. Using this result, we develop attacks against DSA schemes which, under some assumptions, can recover the secret key when one or several signed messages are available.
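For flavor, the textbook special case of such DSA key recovery is when a signature's nonce k becomes known: the signing key then follows from a single linear congruence, x = (s·k - h)·r^{-1} mod q. A toy sketch with illustrative parameters (not the paper's attack, which works from partial information under weaker assumptions):

```python
# Toy DSA over a tiny group, showing the textbook fact that a single
# leaked nonce k reveals the signing key. All parameters are illustrative.

p, q = 199, 11                  # q divides p - 1 = 198
g = pow(2, (p - 1) // q, p)     # element of order q mod p

def sign(x, k, h):
    # standard DSA: r = (g^k mod p) mod q,  s = k^{-1}(h + x*r) mod q
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (h + x * r)) % q
    return r, s

def recover_key(r, s, k, h):
    # rearranging s*k = h + x*r (mod q) gives x = (s*k - h) * r^{-1} mod q
    return ((s * k - h) * pow(r, -1, q)) % q

x, k, h = 7, 5, 3               # secret key, leaked nonce, message hash mod q
r, s = sign(x, k, h)
assert recover_key(r, s, k, h) == x
```

The three-argument `pow` with exponent -1 (modular inverse) requires Python 3.8+.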
Durga Prasad Sahoo, Phuong Ha Nguyen, Rajat Subhra Chakraborty, Debdeep Mukhopadhyay
This paper introduces the notion of Architectural Bias, which can be used to measure the influence of the architecture of Arbiter Physically Unclonable Functions (APUFs) on the quality of their outputs. A PUF design with less architectural bias is better than one with more. Architectural bias is the bias in the challenge-response behavior of a PUF due to the architectural features of the design itself, independent of the implementation platform, technology node, and external factors. This bias is different from the bias observed in APUF outputs when implemented on a Field Programmable Gate Array (FPGA) platform; that platform-induced bias is called Implementation Bias.
To overcome the effect of implementation bias in the classic APUF, the Programmable Delay Line APUF (PAPUF) and the Double APUF (DAPUF) have been proposed as alternatives to the APUF on FPGA platforms. In this work, we provide a comparative analysis of the architectures of the APUF and its two design variants based on the derived linear additive delay models. Subsequently, these designs are evaluated with respect to architectural bias to quantify the number of good (i.e., usable) PUF instances that each design can generate. We also develop a scheme to perform an instance-level comparison of a pair of randomly selected PUF instances of two different PUF designs.
In addition, we study the impact of architectural bias on the PUF performance metrics uniformity, uniqueness, and reliability. We validate our theoretical findings with simulation and FPGA-based implementation results. Experimental results reveal that the classic APUF has the least architectural bias, followed by the DAPUF and the PAPUF, respectively.
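The linear additive delay model referenced above is standard for arbiter PUFs: the response is the sign of a weighted sum over a parity transform of the challenge, where the weights model stage delay differences. A minimal sketch with illustrative random weights (not the paper's bias analysis):

```python
import random

# Standard linear additive delay model of an n-stage arbiter PUF.
# The weights stand in for per-stage delay differences; the values
# drawn here are illustrative, not measured.

def parity_features(challenge):
    # phi_i = product over bits c_j (j >= i) of (1 - 2*c_j), plus a bias term
    n = len(challenge)
    phi = []
    for i in range(n + 1):
        prod = 1
        for c in challenge[i:]:
            prod *= 1 - 2 * c          # map bit {0,1} -> {+1,-1}
        phi.append(prod)
    return phi                          # length n + 1

def apuf_response(weights, challenge):
    # response = sign of the total delay difference
    phi = parity_features(challenge)
    delta = sum(w * f for w, f in zip(weights, phi))
    return 1 if delta > 0 else 0

random.seed(1)
n = 16
weights = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
challenge = [random.randint(0, 1) for _ in range(n)]
resp = apuf_response(weights, challenge)   # deterministic for fixed weights
```

Under this model, a design's architectural bias shows up as the response distribution over random challenges deviating from 50/50 even for ideal (unbiased) weight distributions.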
Ethan Heilman, Foteini Baldimtsi, Sharon Goldberg
Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user's Bitcoin transactions can be linked to compromise the user's anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin's blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin $\leftrightarrow$ voucher exchange. Our schemes are practical, secure and anonymous.
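The blind-signature ingredient can be illustrated with the textbook Chaum RSA blind signature, in which the signer (here, the untrusted intermediary) signs a voucher without ever seeing it. Toy parameters, no padding; the paper's actual protocol differs:

```python
from math import gcd

# Textbook Chaum RSA blind signature (toy parameters, illustration only):
# the user blinds m, the signer signs the blinded value, and unblinding
# yields a valid signature on m that the signer cannot link to the session.

p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))       # signer's private exponent

def blind(m, r):
    return (m * pow(r, e, n)) % n       # user blinds the message with r^e

def sign_blinded(mb):
    return pow(mb, d, n)                # signer never sees m itself

def unblind(sb, r):
    return (sb * pow(r, -1, n)) % n     # user strips the blinding factor

m = 42
r = 777                                 # blinding factor, coprime to n
assert gcd(r, n) == 1
sig = unblind(sign_blinded(blind(m, r)), r)
assert pow(sig, e, n) == m              # valid RSA signature on m
```

The unblinding works because (m·r^e)^d = m^d·r mod n, so dividing out r leaves m^d, the ordinary RSA signature on m.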
Aanchal Malhotra, Sharon Goldberg
We identify two attacks on the Network Time Protocol (NTP)'s cryptographically-authenticated broadcast mode. First, we present a replay attack that allows an on-path attacker to indefinitely stick a broadcast client to a specific time. Second, we present a denial-of-service (DoS) attack that allows an off-path attacker to prevent a broadcast client from ever updating its system clock; to do this, the attacker sends the client a single malformed broadcast packet per query interval. Our DoS attack also applies to all other NTP modes that are `ephemeral' or `preemptable' (including manycast, pool, etc). We then use network measurements to give evidence that NTP's broadcast and other ephemeral/preemptable modes are being used in the wild. We conclude by discussing why NTP's current implementation of symmetric-key cryptographic authentication does not provide security in broadcast mode, and make some recommendations to improve the current state of affairs.
Masahiro Yagisawa
In this paper I propose a new fully homomorphic public-key encryption scheme without bootstrapping, based on the discrete logarithm assumption and the computational Diffie-Hellman assumption for multivariate polynomials over an octonion ring. The key size of this scheme and the complexity of encryption and decryption remain small enough to be practical.
23 January 2016
The latest issue of the Journal of Cryptology has been published. IACR members can access the issue for free using this page.
22 January 2016
Leuven, Belgium, 14 July - 15 July 2016
Event date: 14 July to 15 July 2016
Taipei, Taiwan, 21 January - 17 April 2016
Event date: 21 January to 17 April 2016
Submission deadline: 3 April 2016
Notification: 30 May 2016
itrust consulting sàrl
For the recently started EU-funded project bIoTope, and to assist our customers with risk management and security audits related to EDAS, PKI, e-money, and microbilling, itrust consulting would like to hire a security expert and cryptographer for IoT.
Requirements:
- Theoretical knowledge of cryptography, e-money, microbilling, and security protocols
- Working experience in the areas of open data, Internet of Things, and big data
- Knowledge of data protocols and data formats such as O-MI (Open Messaging Interface) and O-DF (Open Data Format)
- Experience in development and with project management methods such as SCRUM and PRINCE2
- Fluency in English and French (Luxembourgish is considered an advantage)
- A doctoral degree in computer science, OR experience in the above-mentioned areas, or at least scientific publications in these areas
- Good knowledge of Word, Excel, PowerPoint, Outlook, and LaTeX (any other tool will be considered an advantage)
Closing date for applications: 8 February 2016
Contact: DIMITROVA, Marieta
Administrative assistant
itrust consulting sarl
55, rue Gabriel Lippmann
L-6947 Niederanven
LUXEMBOURG
tel: +352 26176212
e-mail: inf?@itrust.lu
Microsoft Research, Redmond, Washington
The Cryptography Research group at Microsoft Research in Redmond seeks outstanding applicants for Post-doctoral Researcher and Researcher positions in all areas of cryptography. Post-doctoral positions start in summer 2016 and are for a term of 2 years. Required qualifications include a PhD in computer science or mathematics and experience in cryptography research.
Particular areas of interest include: Secure Multi-party Computation, Searchable Encryption, and Homomorphic Encryption. Areas of interest in mathematics include lattice-based cryptography, cyclotomic number fields, code-based cryptography, post-quantum cryptography, factoring, discrete log, algorithmic number theory. Interest in practical protocols, and implementation and coding experience are required.
Review of applications will begin immediately.
Closing date for applications: 1 March 2016
Contact: Kristin Lauter, Research Manager
E-Mail: klauter (at) microsoft.com
More information: http://research.microsoft.com/en-us/people/klauter/default.aspx
Khalid Javeed, Xiaojun Wang
Modular multiplication is the fundamental and compute-intensive operation in many public-key cryptosystems. This paper presents two modular multipliers with efficient architectures based on Booth encoding, higher-radix, and Montgomery powering ladder approaches. The Montgomery powering ladder technique enables concurrent execution of the main operations in the proposed designs, while higher-radix techniques are adopted to reduce the iteration count, which in turn dictates the cycle count. It is also shown that adopting Booth encoding logic in the designs reduces their area cost with only a slight degradation in the maximum achievable frequencies. The proposed designs are implemented in Verilog HDL and synthesized for a Virtex-6 FPGA platform using the Xilinx ISE 14.2 Design Suite. The radix-4 multiplier computes a 256-bit modular multiplication in 0.93 ms, occupies 1.6K slices, and runs at 137.87 MHz with a cycle count of n/2+2, whereas the radix-8 multiplier completes the operation in 0.69 ms, occupies 3.6K slices, and achieves a 123.43 MHz frequency with a cycle count of n/3+4. The implementation results reveal that the proposed designs consume 18% fewer FPGA slices without any significant performance degradation compared to the best contemporary designs.
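The baseline that the radix-4 and radix-8 designs generalize is bit-serial (radix-2) Montgomery multiplication, which replaces the expensive modular reduction with shifts and additions. A software sketch with toy parameters (illustrative Python, not the hardware description):

```python
# Bit-serial (radix-2) Montgomery multiplication: computes
# a * b * 2^(-nbits) mod m for odd m, one multiplier bit per iteration.
# Higher-radix designs process 2 (radix-4) or 3 (radix-8) bits per cycle,
# which is how the n/2+2 and n/3+4 cycle counts in the paper arise.

def mont_mul(a, b, m, nbits):
    s = 0
    for i in range(nbits):
        s += ((b >> i) & 1) * a   # conditionally add a for this bit of b
        if s & 1:                 # make s even so halving is exact mod m
            s += m
        s >>= 1                   # divide by 2 (multiply by 2^-1 mod m)
    return s if s < m else s - m  # final conditional subtraction

# Using the Montgomery domain: convert in, multiply, read out.
m, nbits = 101, 8                 # toy odd modulus, 2^8 > m
R2 = pow(2, 2 * nbits, m)         # R^2 mod m, for domain conversion
a, b = 57, 33
abar = mont_mul(a, R2, m, nbits)  # abar = a * R mod m
res = mont_mul(abar, b, m, nbits) # = a * b mod m (R factors cancel)
assert res == (a * b) % m
```

Each loop iteration consumes one bit of b; a radix-2^k variant consumes k bits per iteration, trading a larger adder tree for a cycle count of roughly n/k.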
Gunnar Hartung, Björn Kaidel, Alexander Koch, Jessica Koch, Andy Rupp
Aggregate signature schemes allow for the creation of a short aggregate of multiple signatures. This feature leads to significant reductions of bandwidth and storage space in sensor networks, secure routing protocols, certificate chains, software authentication, and secure logging mechanisms. Unfortunately, in all prior schemes, adding a single invalid signature to a valid aggregate renders the whole aggregate invalid. Verifying such an invalid aggregate provides no information on the validity of any individual signature. Hence, adding a single faulty signature destroys the proof of integrity and authenticity for a possibly large amount of data. This is largely impractical in a range of scenarios, e.g. secure logging, where a single tampered log entry would render the aggregate signature of all log entries invalid.
In this paper, we introduce the notion of fault-tolerant aggregate signature schemes. In such a scheme, the verification algorithm is able to determine the subset of all messages belonging to an aggregate that were signed correctly, provided that the number of aggregated faulty signatures does not exceed a certain bound.
We give a generic construction of fault-tolerant aggregate signatures from ordinary aggregate signatures based on cover-free families. A signature in our scheme is a small vector of aggregated signatures of the underlying scheme. Our scheme is bounded, i.e. the number of signatures that can be aggregated into one signature must be fixed in advance. However the length of an aggregate signature is logarithmic in this number. We also present an unbounded construction, where the size of the aggregate signature grows linearly in the number of aggregated messages, but the factor in this linear function can be made arbitrarily small.
The additional information encoded in our signatures can also be used to speed up verification (compared to ordinary aggregate signatures) in cases where one is only interested in verifying the validity of a single message in an aggregate, a feature beyond fault-tolerance that might be of independent interest. For concreteness, we give an instantiation using a suitable cover-free family.
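The cover-free-family idea can be illustrated with plain set combinatorics: model each aggregate as a test that fails verification iff it covers a faulty signature; 1-cover-freeness then pins down a single fault exactly. A toy sketch (the incidence structure below is an illustrative example, not the paper's instantiation):

```python
from itertools import combinations

# Toy fault localization via a 1-cover-free family: 6 "signatures" and
# 4 "aggregates", each signature aggregated into exactly 2 of the 4
# aggregates (all 2-subsets of a 4-set), so no signature's test set is
# contained in another's. An aggregate verifies iff it covers no fault.
# This models only the combinatorics, not the signatures themselves.

n_tests = 4
item_sets = list(combinations(range(n_tests), 2))   # one 2-subset per signature
tests = [{i for i, s in enumerate(item_sets) if j in s}
         for j in range(n_tests)]

def identify(faulty):
    # aggregates failing verification are exactly those touching a fault
    bad = {j for j, t in enumerate(tests) if t & faulty}
    # a signature is known-good if some passing aggregate covers it
    good = set()
    for j, t in enumerate(tests):
        if j not in bad:
            good |= t
    return set(range(len(item_sets))) - good

assert identify({3}) == {3}     # a single faulty signature is located exactly
assert identify(set()) == set()
```

With a d-cover-free family the same decoding tolerates up to d faults; the aggregate length grows only logarithmically in the number of signatures, matching the bounded construction described above.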
Jialin Huang, Serge Vaudenay, Xuejia Lai, Kaisa Nyberg
Multidimensional linear attacks are among the most powerful variants of linear cryptanalysis. However, the key dependence of their capacity and data complexity has not been characterized so far; their values were assumed to be close to the average for the vast majority of keys. This assumption is not accurate. In this paper, under a reasonable condition, we explicitly characterize the capacity as following a Gamma distribution and the data complexity as following an Inverse Gamma distribution, in terms of the average linear potential and the dimension. The capacity distribution is experimentally verified on 5-round PRESENT.
Regarding complexity, we solve the problem of estimating the average data complexity, which was difficult because of the existence of zero correlations. We also solve the problem of using the median complexity in multidimensional linear attacks, which has been open since it was posed at Eurocrypt 2011. We evaluate the differences among the median complexity, the average complexity, and a lower bound on the average complexity -- the reciprocal of the average capacity. In addition, we estimate the key equivalence hypothesis more accurately, and reveal that the average complexity provides an accurate estimate for less than half of the keys, no matter how many linear approximations are involved.
Finally, we revisit the best attack so far on PRESENT in light of our theoretical results.
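The relation between the two claimed distributions can be checked by simulation: if the capacity C is Gamma-distributed and the data complexity is proportional to 1/C, the complexity is Inverse-Gamma-distributed, its median is the reciprocal of the capacity median, and its heavy right tail pushes the mean above the median. A Monte-Carlo sketch with arbitrary illustrative parameters (not the paper's experiment):

```python
import random

# Monte-Carlo illustration: draw capacities C ~ Gamma(shape, scale),
# form complexities a/C, and check two consequences of the Inverse-Gamma
# claim: median(a/C) = a / median(C), and mean > median (right skew).
# shape, scale, and a are arbitrary illustrative values.

random.seed(0)
shape, scale, a = 4.0, 2.0, 1.0
caps = sorted(random.gammavariate(shape, scale) for _ in range(10_001))
complexities = sorted(a / c for c in caps)

med_cap = caps[len(caps) // 2]
med_cx = complexities[len(complexities) // 2]
assert abs(med_cx - a / med_cap) < 1e-12                 # medians reciprocal
assert sum(complexities) / len(complexities) > med_cx    # mean > median
```

The mean-versus-median gap is exactly the phenomenon the abstract describes: an average-based complexity estimate overshoots the experience of the majority of keys.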
20 January 2016
Honolulu, USA, 7 September - 9 September 2016
Event date: 7 September to 9 September 2016
Submission deadline: 7 March 2016
Notification: 25 April 2016