International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

09 June 2021

Ghous Amjad, Sarvar Patel, Giuseppe Persiano, Kevin Yeo, Moti Yung
ePrint Report
We study encrypted storage schemes where a client outsources data to an untrusted third-party server (such as a cloud storage provider) while maintaining the ability to privately query and dynamically update the stored data. We focus on encrypted multi-maps, a structured encryption (STE) scheme that stores pairs of labels and value tuples and has several important applications (most notably, to searchable encryption and encrypted SQL databases). Encrypted multi-maps support queries for specific labels that return the associated value tuple. As responses are variable-length, encrypted multi-maps are subject to the volume leakage attacks introduced by Kellaris et al. [CCS’16], with several follow-up works (including Grubbs et al. [CCS’18] and Lacharité et al. [S&P’18]). To prevent these attacks, Kamara and Moataz [Eurocrypt’19] introduced volume-hiding encrypted multi-maps, which hide the volume of each label (i.e., the size of its associated value tuple).

As our main contribution, we present the first fully dynamic constructions of volume-hiding encrypted multi-maps that are both asymptotically and concretely efficient. Furthermore, our constructions simultaneously provide forward and backward privacy that are the de-facto standard security notions for dynamic STE schemes (Stefanov et al. [NDSS’14] and Bost [CCS’16]). Additionally, we implement our schemes to showcase their concrete efficiency. Our experimental evaluations show that our constructions are able to add dynamicity with minimal to no additional cost compared to the prior best static volume-hiding schemes of Patel et al. [CCS’20].
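The volume-leakage issue the abstract refers to can be seen in a few lines of in-the-clear Python. The padding countermeasure below is only a naive illustration of what "volume hiding" means (all names and data are hypothetical), not the paper's construction, which avoids this worst-case padding cost:

```python
# Toy illustration (not the paper's scheme): why response length ("volume")
# leaks information about a multi-map, and how padding every response to the
# maximum tuple length hides it.

multimap = {
    "alice": ["doc1", "doc7"],
    "bob": ["doc2"],
    "carol": ["doc3", "doc4", "doc5", "doc6"],
}

def query_leaky(label):
    # The server learns len(result): a query for "carol" is distinguishable
    # from a query for "bob" by response size alone.
    return multimap.get(label, [])

def query_padded(label, pad="dummy"):
    # Pad every response to the global maximum tuple length, so all
    # responses have identical volume.
    max_len = max(len(v) for v in multimap.values())
    result = multimap.get(label, [])
    return result + [pad] * (max_len - len(result))

for label in ("alice", "bob", "carol"):
    print(label, len(query_leaky(label)), len(query_padded(label)))
# Leaky lengths differ (2, 1, 4); padded lengths are all 4.
```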
Ran Canetti, Ari Karchmer
ePrint Report
We consider the task of learning a function via oracle queries, where the queries and responses are monitored (and perhaps also modified) by an untrusted intermediary. Our goal is twofold: First, we would like to prevent the intermediary from gaining any information about either the function or the learner's intentions (e.g. the particular hypothesis class the learner is considering). Second, we would like to curb the intermediary's ability to meaningfully interfere with the learning process, even when it can modify the oracles' responses. Inspired by the works of Ishai et al. (Crypto 2019) and Goldwasser et al. (ITCS 2021), we formalize two new learning models, called Covert Learning and Covert Verifiable Learning, that capture these goals. Then, assuming hardness of the Learning Parity with Noise (LPN) problem, we show:

1. Covert Learning algorithms in the agnostic setting for parity functions and decision trees, where a polynomial-time eavesdropping adversary that observes all queries and responses learns nothing about either the function or the learned hypothesis.

2. Covert Verifiable Learning algorithms that provide similar learning and privacy guarantees, even in the presence of a polynomial-time adversarial intermediary that can modify all oracle responses. Here the learner is granted additional random examples and is allowed to abort whenever the oracles' responses are modified.

Aside from its theoretical interest, our study is motivated by applications to the outsourcing of automated scientific discovery in drug design and molecular biology. It also uncovers limitations of current techniques for defending against model extraction attacks.
Mathy Vanhoef
ePrint Report
In this paper, we present three design flaws in the 802.11 standard that underpins Wi-Fi. One design flaw is in the frame aggregation functionality, and another two are in the frame fragmentation functionality. These design flaws enable an adversary to forge encrypted frames in various ways, which in turn enables exfiltration of sensitive data. We also discovered common implementation flaws related to aggregation and fragmentation, which further worsen the impact of our attacks. Our results affect all protected Wi-Fi networks, ranging from WEP all the way to WPA3, meaning the discovered flaws have been part of Wi-Fi since its release in 1997. In our experiments, all devices were vulnerable to one or more of our attacks, confirming that all Wi-Fi devices are likely affected. Finally, we present a tool to test whether devices are affected by any of the vulnerabilities, and we discuss countermeasures to prevent our attacks.
Claude Carlet
ePrint Report
Designing Boolean functions whose output is fast to compute and which meet all the criteria needed for the stream ciphers that use them as nonlinear components to resist all major attacks has been an open problem since the beginning of this century, when algebraic attacks were introduced (in 2003). Functions offering good resistance have been known since 2008, but their output is too slow and complex to compute. Functions whose output is fast and simple to compute are also known, such as the majority functions and the so-called hidden weight bit (HWB) functions, but they all share a cryptographic weakness: their nonlinearity is too small.

In the present paper, we introduce a generalization of the HWB function into a construction of $n$-variable balanced functions $f$ from $(n-1)$-variable Boolean functions $g$ having some property held by a large number of functions. The function $f$ is defined by its support, which equals the image set of a vectorial function depending on $g$. This makes the function complex enough to allow good cryptographic parameters, while its output remains light to compute. The HWB function is what we obtain for $f$ when the initial function $g$ equals the constant 1. Other well-chosen functions $g$ provide functions $f$ with good cryptographic parameters. We analyze the constructed functions $f$, provide a fast way to compute their output, determine their algebraic normal forms, and show that their algebraic degree is most often optimal. We study their Walsh transform, their nonlinearity and their algebraic immunity. We observe through computer investigations that this generalization of the HWB function keeps its qualities of being fast to compute and having good enough algebraic immunity, while significantly improving its nonlinearity. The functions already obtained in these investigations provide a quite good (and previously unreached) trade-off between speed and security. Further (probably difficult) work should allow obtaining, among such generalized HWB functions, whose number is huge, still better filter functions for use in stream ciphers.
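For readers unfamiliar with the HWB function that this construction generalizes, the sketch below (our own illustration, not code from the paper) computes HWB and its nonlinearity by a brute-force Walsh transform for small n:

```python
# HWB(x) = x_{wt(x)} (bits indexed from 1, HWB(0...0) = 0), together with a
# brute-force Walsh-transform computation of its nonlinearity. Plain
# reference code for small n, not the paper's fast-output construction.
from itertools import product

def hwb(bits):
    w = sum(bits)
    return 0 if w == 0 else bits[w - 1]

def nonlinearity(f, n):
    # nl(f) = 2^(n-1) - max_a |W_f(a)| / 2, where
    # W_f(a) = sum_x (-1)^(f(x) + <a, x>).
    points = list(product((0, 1), repeat=n))
    max_walsh = 0
    for a in points:
        w = sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) & 1))
                for x in points)
        max_walsh = max(max_walsh, abs(w))
    return 2 ** (n - 1) - max_walsh // 2

for n in range(4, 9):
    print(n, nonlinearity(hwb, n))
```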
Claude Carlet
ePrint Report
Despite intensive research on Boolean functions for cryptography over more than thirty years, very few general constructions are known that allow satisfying all the criteria necessary for resisting the main known attacks on the stream ciphers that use them. In this paper, we investigate the general construction of Boolean functions $f$ from vectorial functions, in which the support of $f$ equals the image set of an injective vectorial function $F$, which we call a parameterization of $f$. Any Boolean function whose Hamming weight is a power of 2, and in particular every balanced Boolean function, can be obtained this way. We study five illustrations of this general construction. The first three correspond to known classes of functions (Maiorana-McFarland, majority functions, and balanced functions in an odd number of variables with optimal algebraic immunity). The last two correspond to new classes of Boolean functions:
- sums of indicators of disjoint graphs of $(k,n-k)$-functions,
- functions parameterized by highly nonlinear injective vectorial $(n-1,n)$-functions derived from functions due to Beelen and Leander.
We study the cryptographic parameters (corresponding to the main criteria) of balanced Boolean functions according to those of their parameterizations: the algebraic degree of $f$, which we relate to the algebraic degrees of $F$ and of its graph indicator; the nonlinearity of $f$, which we relate by a bound to the nonlinearity of $F$; and the algebraic immunity (AI), whose optimality is related to a natural question in linear algebra and which may be handled (in two ways) by means of the graph indicator of $F$. We show how the algebraic degree and the nonlinearity of the parameterized function can be controlled. We revisit each of the five classes for each criterion. We show that the fourth class is very promising, thanks to a lower bound on its nonlinearity in terms of the nonlinearity of the chosen $(k,n-k)$-functions. Its subclass made of sums of indicators of affine functions, for which we prove an upper bound on the nonlinearity, also seems interesting. The fifth class includes functions with optimal algebraic degree, good nonlinearity and good AI. We leave for future work the determination of simple, effective sufficient conditions on $F$ ensuring that $f$ has good AI, the completion of the study of the fourth class, the mathematical study of the AI and fast algebraic immunity of the functions in the fifth class, and the introduction and study of a class of parameterized functions having good parameters and whose output is fast to compute.
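As a toy instance of the parameterization idea (our own minimal example with a hypothetical g, not one of the paper's five classes), one can take F(y) = (y, g(y)) for an arbitrary (n-1)-variable function g; F is injective, and the function f whose support is the image of F is automatically balanced:

```python
# Support of f = image of the injective vectorial function F(y) = (y, g(y)),
# so wt(f) = 2^(n-1) and f is balanced. Illustration only.
from itertools import product

n = 5

def g(y):
    # Any (n-1)-variable Boolean function works; a small nonlinear example.
    return (y[0] & y[1]) ^ y[2] ^ (y[1] & y[3])

support = {y + (g(y),) for y in product((0, 1), repeat=n - 1)}

def f(x):
    return 1 if x in support else 0

weight = sum(f(x) for x in product((0, 1), repeat=n))
print(weight, 2 ** (n - 1))   # equal, hence f is balanced
```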

07 June 2021

Kolkata, INDIA, 2 December - 4 December 2021
Event Calendar
Event date: 2 December to 4 December 2021
Submission deadline: 31 August 2021
Notification: 31 October 2021
Brandenburgische Technische Universität Cottbus-Senftenberg
Job Posting
The chair of IT Security is currently seeking a highly motivated Junior Researcher / PhD Student (limited to 2 years, full time, with the possibility of extension).
Tasks:
- Active research in the area of intrusion detection systems (IDS) for critical infrastructures, secure cyber-physical systems, and artificial intelligence / machine learning for traffic analysis
- Implementation and evaluation of new algorithms and methods
- Cooperation and knowledge transfer with industrial partners
- Publication of scientific results
- Assistance with teaching
The employment is intended to lead to doctoral graduation (obtaining a PhD degree).
Requirements:
- Master’s degree (or equivalent) in Computer Science or related disciplines
- Strong interest in IT security and/or networking and distributed systems
- Knowledge of at least one programming language (C++, Java, etc.) and one scripting language (Perl, Python, etc.) or strong willingness to quickly learn new programming languages
- Linux/Unix skills
- Knowledge of data mining, machine learning, statistics and result visualization concepts is of advantage
- Excellent working knowledge of English; German is of advantage
- Excellent communication skills
Applications containing the following documents:
- A detailed Curriculum Vitae
- Transcript of records from your Master studies
- An electronic version of your Master thesis, if possible
should be sent as a single PDF file as soon as possible, but no later than 10 June 2021, to itsec-jobs.informatik@lists.b-tu.de.

Closing date for applications:

Contact: Prof. A. Panchenko (E-Mail: itsec-jobs.informatik@lists.b-tu.de)

University of the West of England Bristol (UWE Bristol)
Job Posting
A fully funded (at UK home rate) PhD studentship is available at UWE Bristol. The studentship aims at utilising blockchains to improve the security of IoT. UWE is a Centre of Excellence in Cyber Security Education, accredited by the UK's National Cyber Security Centre (NCSC). You will be joining the Cyber Security team (http://www.cems.uwe.ac.uk/~pa-legg/uwecyber/), part of the Computer Science Research Centre (https://www.uwe.ac.uk/research/centres-and-groups/csrc). The student will be supervised by Dr Djamel Djenouri, Dr Essam Ghadafi and Dr Chris Carr. The deadline for applications is 1st July 2021.

Closing date for applications:

Contact: Dr Essam Ghadafi (essam.ghadafi@uwe.ac.uk) or Dr Djamel Djenouri (djamel.djenouri@uwe.ac.uk)

More information: https://www.uwe.ac.uk/research/postgraduate-research-study/how-to-apply/studentship-opportunities/securing-iot-over-wireless-networks

Saravanan Vijayakumaran
ePrint Report
Transactions in CryptoNote blockchains induce a bipartite graph, with the set of transaction outputs forming one vertex class and the set of key images forming the other vertex class. In this graph, an edge exists between an output and a key image if the output appeared in the ring of the linkable ring signature which created the key image. Any maximum matching on this graph is a plausible candidate for the ground truth, i.e., the association of each key image with the actual output being spent in the transaction.

The Dulmage-Mendelsohn (DM) decomposition of a bipartite graph reveals constraints which are satisfied by every maximum matching on the graph. It identifies vertices which are matched in every maximum matching. It classifies edges as admissible or inadmissible. An edge is called admissible if it appears in at least one maximum matching and is called inadmissible if it does not appear in any maximum matching.

The DM decomposition of a CryptoNote transaction graph reveals a set of outputs which can be marked as spent (precisely those outputs which are matched by every maximum matching). In some transaction rings, the decomposition identifies the true output being spent (making the ring traceable) by classifying the edges from all the other outputs to the key image as inadmissible.

For pre-RingCT outputs in Monero, the DM decomposition performs better than existing techniques for Monero traceability, but the improvement is marginal. For RingCT outputs in Monero up to April 1, 2021, the DM decomposition is only able to identify the same five outputs that were identified as spent by existing techniques (which do not use information from hard forks).
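The matching notions used above can be made concrete on a toy transaction graph. The brute-force enumeration below is our own illustration (the paper uses the DM decomposition rather than exhaustive search); it identifies the outputs matched in every maximum matching and the inadmissible edges:

```python
# Toy bipartite graph (outputs vs key images). An output matched in every
# maximum matching can be marked spent; an edge is admissible iff it lies in
# some maximum matching. Exponential enumeration, for illustration only.
from itertools import combinations

# Edge (output, key_image): the output appeared in that key image's ring.
edges = [("out1", "ki1"), ("out2", "ki1"), ("out2", "ki2"), ("out3", "ki2"),
         ("out3", "ki3")]

def is_matching(subset):
    outs = [o for o, _ in subset]
    kis = [k for _, k in subset]
    return len(set(outs)) == len(outs) and len(set(kis)) == len(kis)

all_matchings = [set(c) for r in range(len(edges) + 1)
                 for c in combinations(edges, r) if is_matching(c)]
max_size = max(len(m) for m in all_matchings)
maximum_matchings = [m for m in all_matchings if len(m) == max_size]

always_matched = set.intersection(
    *[{o for o, _ in m} for m in maximum_matchings])
admissible = set.union(*maximum_matchings)

print("spent in every maximum matching:", sorted(always_matched))
print("inadmissible edges:", sorted(set(edges) - admissible))
```

In this toy graph the maximum matching happens to be unique, so all three outputs can be marked as spent and the two remaining edges are inadmissible, mirroring the traceability effect described above.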
Wenting Zheng, Ryan Deng, Weikeng Chen, Raluca Ada Popa, Aurojit Panda, Ion Stoica
ePrint Report
Many organizations need large amounts of high-quality data for their applications, and one way to acquire such data is to combine datasets from multiple parties. Since these organizations often own sensitive data that cannot be shared in the clear with others due to policy regulation and business competition, there is increased interest in utilizing secure multi-party computation (MPC). MPC allows multiple parties to jointly compute a function without revealing their inputs to each other.

We present Cerebro, an end-to-end collaborative learning platform that enables parties to compute learning tasks without sharing plaintext data. By taking an end-to-end approach to the system design, Cerebro allows multiple parties with complex economic relationships to safely collaborate on machine learning computation through the use of release policies and auditing, while also enabling users to achieve good performance without manually navigating the complex performance tradeoffs between MPC protocols.
Koji Nagata, Renata Wong, Do Ngoc Diep, Tadao Nakamura
ePrint Report
We study a quantum cryptographic scheme based on an algorithm for simultaneously determining all the mappings of a Boolean function using an entangled state. The security of our scheme is based on the Ekert 1991 protocol, which uses an entangled state. Eavesdropping destroys the entanglement. Alice selects a secret function from among the possible function types. Bob's aim is then to determine the selected function (a key) without an eavesdropper learning it. For both Alice and Bob to select the same function classically, Bob requires multiple queries to Alice in the worst case. In the quantum case, however, Bob requires just a single query. By measuring the single entangled state sent to him by Alice, Bob can obtain the function that Alice selected. This quantum key distribution method is therefore faster than the classical approach, which would require multiple queries.
Jiaxin Wang, Fang-Wei Fu
ePrint Report
In this paper, we study the dual of generalized bent functions $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$, where $V_{n}$ is an $n$-dimensional vector space over $\mathbb{F}_{p}$, $p$ is an odd prime and $k$ is a positive integer. It is known that weakly regular generalized bent functions always appear in pairs, since the dual of a weakly regular generalized bent function is also a weakly regular generalized bent function. The dual of a non-weakly regular generalized bent function may or may not be generalized bent. By generalizing the construction of \cite{Cesmelioglu5}, we obtain an explicit construction of generalized bent functions whose dual can be generalized bent or not generalized bent. We show that the generalized indirect sum construction method given in \cite{Wang} provides a secondary construction of generalized bent functions whose dual can be generalized bent or not generalized bent. Using knowledge of ideal decompositions in cyclotomic fields, we prove that $f^{**}(x)=f(-x)$ if $f$ is a generalized bent function and its dual $f^{*}$ is also a generalized bent function. For any non-weakly regular generalized bent function $f$ which satisfies $f(x)=f(-x)$ and whose dual $f^{*}$ is generalized bent, we give a property and, as a consequence, we prove that there is no self-dual generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 3 \pmod 4$ and $n$ is odd. For $p \equiv 1 \pmod 4$, or $p\equiv 3 \pmod 4$ and $n$ even, we give a secondary construction of self-dual generalized bent functions. In the end, we characterize the relations between the generalized bentness of the dual of a generalized bent function and the bentness of the dual of a bent function, as well as the self-duality relations between generalized bent functions and bent functions, via the decomposition of generalized bent functions.
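As background for the Walsh-transform condition underlying bentness, the numerical sketch below checks only the simplest special case k = 1, n = 1 with the classical example f(x) = x^2 over F_p (our own illustration); it does not reproduce the paper's Z_{p^k} dual computations:

```python
# Bentness criterion in the simplest case k = 1, n = 1: f is bent iff its
# Walsh transform W_f(a) = sum_x zeta^(f(x) - a*x) has constant magnitude
# p^(n/2). The quadratic f(x) = x^2 over F_p is (weakly regular) bent.
import cmath

p = 7
zeta = cmath.exp(2j * cmath.pi / p)          # primitive p-th root of unity

def walsh(f, a):
    return sum(zeta ** ((f(x) - a * x) % p) for x in range(p))

f = lambda x: (x * x) % p
magnitudes = [abs(walsh(f, a)) for a in range(p)]
print([round(m, 6) for m in magnitudes], round(p ** 0.5, 6))
# All magnitudes equal sqrt(p), confirming bentness.
```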
Si Gao, Elisabeth Oswald
ePrint Report
Leakage attacks and leakage simulators rely crucially on knowledge of the state that is being leaked on. Despite 20 years of effort, little progress has been made on how to find the relevant state: to date, it is still commonly assumed that users already know the state, or that they can reliably find it based on a few attack trials and their own experience. This is far from what is encountered in practice: whilst software platforms give the illusion of sequential updates to variables, in the underlying hardware previous values remain part of the state and many things happen in parallel. We put forward a novel notion for the "completeness" of an assumed state, together with an efficient statistical test based on "collapsed models". This test can cope even in a grey-box setting where the state contains multiple 32-bit variables. We illustrate how our novel test can help to guide attacks and leakage simulators, reveal previously unknown forms of leakage, and deepen our understanding of realistic leakage as well as of the underlying architecture.
Nishat Koti, Arpita Patra, Rahul Rachuri, Ajith Suresh
ePrint Report
In this work, we design an efficient mixed-protocol framework, Tetrad, with applications to privacy-preserving machine learning. It is designed for the four-party setting with at most one active corruption and supports rings.

Our fair multiplication protocol requires communicating only $5$ ring elements, improving over the state-of-the-art protocol of Trident (Chaudhari et al. NDSS'20). The technical highlights of Tetrad include efficient (a) truncation without any overhead, (b) multi-input multiplication protocols for the arithmetic and boolean worlds, (c) a garbled world tailor-made for the mixed-protocol framework, and (d) conversion mechanisms to switch between the computation styles. The fair framework is also extended to provide robustness without inflating the costs.

The performance of Tetrad is demonstrated with benchmarks for deep neural networks such as LeNet and VGG16, and for support vector machines. One variant of our framework aims at minimizing the execution time, while the other focuses on the monetary cost. We observe improvements of up to $6\times$ over Trident across these parameters.
Samuel Adams, Chaitali Choudhary, Martine De Cock, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen
ePrint Report
Most existing Secure Multi-Party Computation (MPC) protocols for privacy-preserving training of decision trees over distributed data assume that the features are categorical. In real-life applications, features are often numerical. The standard "in the clear" algorithm for growing decision trees on data with continuous values requires sorting the training examples for each feature in the quest for an optimal cut-point in the range of feature values in each node. Sorting is an expensive operation in MPC, hence finding secure protocols that avoid such an expensive step is a relevant problem in privacy-preserving machine learning. In this paper we propose three more efficient alternatives for secure training of decision-tree-based models on data with continuous features, namely: (1) secure discretization of the data, followed by secure training of a decision tree over the discretized data; (2) secure discretization of the data, followed by secure training of a random forest over the discretized data; and (3) secure training of extremely randomized trees ("extra-trees") on the original data. Approaches (2) and (3) both involve randomizing feature choices. In addition, in approach (3) cut-points are chosen randomly as well, thereby alleviating the need to sort or to discretize the data up front. We implemented all proposed solutions in the semi-honest setting with additive-secret-sharing-based MPC. In addition to mathematically proving that all proposed approaches are correct and secure, we experimentally evaluated and compared them in terms of classification accuracy and runtime. We privately train tree ensembles over data sets with thousands of instances or features in a few minutes, with accuracies on par with those obtained in the clear. This makes our solution orders of magnitude more efficient than the existing approaches, which are based on oblivious sorting.
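The reason approach (3) sidesteps sorting can already be seen in the clear: an extremely randomized split draws a threshold uniformly between a feature's minimum and maximum instead of sorting values to search for an optimal cut-point. The sketch below is our own plaintext illustration with made-up data, not the MPC protocol:

```python
# Classic split selection sorts feature values (expensive under MPC) to scan
# candidate cut-points; an extra-trees style split only needs min and max.
import random

def best_cutpoint_sorted(values, labels):
    # Sort, then scan midpoints for the lowest weighted Gini impurity.
    pairs = sorted(zip(values, labels))          # the expensive step under MPC
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= thr]
        right = [l for v, l in pairs if v > thr]
        gini = sum(len(s) * (1 - sum((s.count(c) / len(s)) ** 2 for c in set(s)))
                   for s in (left, right) if s) / len(pairs)
        best = min(best, (gini, thr))
    return best[1]

def random_cutpoint(values):
    # Extra-trees style: no sorting, no label-dependent search.
    return random.uniform(min(values), max(values))

values = [4.7, 1.3, 5.9, 2.2, 3.8, 0.9]
labels = [1, 0, 1, 0, 1, 0]
print(best_cutpoint_sorted(values, labels), random_cutpoint(values))
```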
Abida Haque, Varun Madathil, Bradley Reaves, Alessandra Scafuro
ePrint Report
Cellular networks connect nearly every human on the planet; they consequently have visibility into location data and voice, SMS, and data contacts and communications. Such near-universal visibility represents a significant threat to the privacy of mobile subscribers. In 5G networks, end-user mobile device manufacturers assign a Permanent Equipment Identifier (PEI) to every new device. Mobile operators legitimately use the PEI to blocklist stolen devices from the network to discourage device theft, but the static PEI also provides a mechanism to uniquely identify and track subscribers. Advertisers and data brokers have also historically abused the PEI for data fusion of location and analytics data, including private data sold by cellular providers.

In this paper, we present a protocol that allows mobile devices to prove that they are not on the blocklist without revealing their PEI to any entity on the network. Thus, we maintain the primary purpose of the PEI while preventing potential privacy violations. We describe a provably secure anonymous proof of blocklist non-membership for cellular networks, based on the RSA accumulators and zero-knowledge proofs introduced by Camenisch and Lysyanskaya (Crypto'02) and expanded upon by Li, Li and Xue (ACNS'07). We show experimentally that this approach is viable for cellular networks: a phone can create a blocklist non-membership proof in only 3432 milliseconds of online computation, and the network can verify the proof in less than one second on average. In total this adds less than 4.5 seconds to the rare network attach process. This work shows that PEIs can be attested anonymously in 5G and future network generations, and it paves the way for additional advances toward a cellular network with guaranteed privacy.
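The accumulator-based non-membership idea from the cited line of work can be sketched with toy parameters: a tiny insecure modulus, a Bezout-style witness, and no zero-knowledge layer, so this is only an illustration of the underlying algebra (our own sketch, not the paper's protocol):

```python
# Blocklist accumulated as acc = g^u mod N with u the product of blocklisted
# primes. A device whose prime x is NOT blocklisted has gcd(x, u) = 1, so
# Bezout coefficients a*u + b*x = 1 give a witness (a, d = g^b) satisfying
# acc^a * d^x == g (mod N). Requires Python 3.8+ (math.prod, negative
# exponents in pow).
from math import gcd, prod

def ext_gcd(a, b):
    # Returns (g, s, t) with s*a + t*b = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

N = 2003 * 2503                  # toy RSA modulus (far too small for real use)
g = 4
blocklist = [5, 11, 17]          # PEIs mapped to primes, then blocklisted
u = prod(blocklist)
acc = pow(g, u, N)               # public accumulator of the blocklist

x = 13                           # this device's prime; not blocklisted
assert gcd(x, u) == 1
_, a, b = ext_gcd(u, x)          # a*u + b*x = 1
witness = (a, pow(g, b, N))

# Verifier checks acc^a * d^x == g (mod N), i.e. g^(a*u + b*x) = g^1.
a, d = witness
assert pow(acc, a, N) * pow(d, x, N) % N == g
print("non-membership verified for", x)
```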
Thomas Debris-Alazard, Maxime Remaud, Jean-Pierre Tillich
ePrint Report
We give a quantum reduction from finding short codewords in a random linear code to decoding for the Hamming metric. This is the first time such a reduction (classical or quantum) has been obtained. Our reduction adapts to linear codes the Stehlé-Steinfeld-Tanaka-Xagawa re-interpretation of Regev's quantum reduction from finding short lattice vectors to solving the Closest Vector Problem. The Hamming metric is a much coarser metric than the Euclidean metric, and this adaptation required several new ingredients to make it work. For instance, in order to have a meaningful reduction, it is necessary in the Hamming metric to choose a very large decoding radius, which in many cases goes beyond the radius where decoding is unique. Another crucial step in the analysis of the reduction is the choice of the errors that are fed to the decoding algorithm. For lattices, errors are usually sampled according to a Gaussian distribution. However, it turns out that the Bernoulli distribution (the analogue for codes of the Gaussian) is too spread out and cannot be used for the reduction with codes. Instead, we choose the uniform distribution over errors of a fixed weight, bring in tools from orthogonal polynomials to perform the analysis, and add an amplitude amplification step to obtain the aforementioned result.
Martin Hell, Thomas Johansson, Alexander Maximov, Willi Meier, Hirotaka Yoshida
ePrint Report
Properties of the Grain-128AEAD key re-introduction, as part of the cipher initialization, are analyzed and discussed. We consider and analyze several possible alternatives for key re-introduction and identify weaknesses, or potential weaknesses, in them. Our results show that it seems favorable to separate the state initialization, the key re-introduction, and the $A/R$ register initialization into three separate phases. Based on this, we propose a new cipher initialization and update the cipher version to Grain-128AEADv2. Previously reported and published analysis of the cipher remains valid for this new version as well.
Carsten Baum, Lennart Braun, Alexander Munch-Hansen, Peter Scholl
ePrint Report
Zero-Knowledge proofs are highly flexible cryptographic protocols that are an important building block for many secure systems. Typically, these are defined with respect to statements that are formulated as arithmetic operations over a fixed finite field. This inflexibility is a disadvantage when it comes to complex programs, as some fields are more amenable than others to expressing certain operations. At the same time, there do not seem to be many proofs with a programming model similar to those found in modern computer architectures, which perform arithmetic with 32- or 64-bit integers.

In this work, we present solutions to both of these problems. First, we show how to efficiently check consistency of secret values between different instances of Zero Knowledge protocols based on the commit-and-prove paradigm. This allows a protocol user to easily switch to the most efficient representation for a given task. To achieve this, we modify the extended doubly-authenticated bits (edabits) approach by Escudero et al. (Crypto 2020), originally developed for MPC, and optimize it for the Zero-Knowledge setting. As an application of our consistency check, we also introduce protocols for efficiently verifying truncations and comparisons of shared values both modulo a large prime $p$ and modulo $2^k$.

Finally, we complement our conversion protocols with new protocols for verifying arithmetic statements in $\mathbb{Z}_{2^k}$. Here, we build upon recent interactive proof systems based on information-theoretic MACs and vector oblivious linear evaluation (VOLE), and show how this paradigm can be adapted to the ring setting. In particular, we show that supporting such modular operations natively in a proof system can be almost as efficient as proofs over large fields or bits, and this also easily plugs into our framework for Zero Knowledge conversions.
Mike Rosulek, Lawrence Roy
ePrint Report
We describe a garbling scheme for boolean circuits, in which XOR gates are free and AND gates require communication of $1.5\kappa + 5$ bits. This improves over the state-of-the-art "half-gates" scheme of Zahur, Rosulek, and Evans (Eurocrypt 2015), in which XOR gates are free and AND gates cost $2\kappa$ bits. The half-gates paper proved a lower bound of $2\kappa$ bits per AND gate, in a model that captured all known garbling techniques at the time. We bypass this lower bound with a novel technique that we call slicing and dicing, which involves slicing wire labels in half and operating separately on those halves. Ours is the first to bypass the lower bound while being fully compatible with free-XOR, making it a drop-in replacement for half-gates. Our construction is proven secure from a similar assumption to prior free-XOR garbling (circular correlation-robust hash), and uses only slightly more computation than half-gates.
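For context, the free-XOR invariant that both half-gates and the new scheme preserve can be shown in a few lines (our own sketch; the AND-gate garbling, where the 1.5κ + 5 bits are spent, is not shown):

```python
# Every wire w gets labels W0 and W1 = W0 XOR Delta for one global secret
# Delta, so an XOR gate needs no garbled ciphertexts: the evaluator just XORs
# the two input labels it holds.
import os

KAPPA_BYTES = 16                      # κ = 128-bit labels

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

delta = os.urandom(KAPPA_BYTES)       # global offset, secret to the garbler

def fresh_wire():
    w0 = os.urandom(KAPPA_BYTES)
    return w0, xor(w0, delta)         # labels encoding 0 and 1

a0, a1 = fresh_wire()
b0, b1 = fresh_wire()

# Garbler defines the XOR gate's 0-label as A0 XOR B0; nothing is sent.
c0 = xor(a0, b0)
c1 = xor(c0, delta)

# Evaluator holds one label per input wire and simply XORs them.
for bit_a, la in ((0, a0), (1, a1)):
    for bit_b, lb in ((0, b0), (1, b1)):
        out = xor(la, lb)
        assert out == (c1 if bit_a ^ bit_b else c0)
print("free-XOR check passed")
```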