International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

07 June 2021

Samuel Adams, Chaitali Choudhary, Martine De Cock, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen
ePrint Report
Most existing Secure Multi-Party Computation (MPC) protocols for privacy-preserving training of decision trees over distributed data assume that the features are categorical. In real-life applications, features are often numerical. The standard ``in the clear'' algorithm for growing decision trees on data with continuous values requires sorting the training examples for each feature in the quest for an optimal cut-point in the range of feature values at each node. Sorting is an expensive operation in MPC, hence finding secure protocols that avoid this expensive step is a relevant problem in privacy-preserving machine learning. In this paper we propose three more efficient alternatives for secure training of decision tree based models on data with continuous features, namely: (1) secure discretization of the data, followed by secure training of a decision tree over the discretized data; (2) secure discretization of the data, followed by secure training of a random forest over the discretized data; and (3) secure training of extremely randomized trees (``extra-trees'') on the original data. Approaches (2) and (3) both involve randomizing feature choices. In addition, in approach (3) cut-points are chosen randomly as well, thereby alleviating the need to sort or to discretize the data up front. We implemented all proposed solutions in the semi-honest setting with additive secret sharing based MPC. In addition to mathematically proving that all proposed approaches are correct and secure, we experimentally evaluated and compared them in terms of classification accuracy and runtime. We privately train tree ensembles over data sets with thousands of instances or features in a few minutes, with accuracies on par with those obtained in the clear. This makes our solution orders of magnitude more efficient than existing approaches, which are based on oblivious sorting.
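To illustrate why approach (3) avoids sorting: in the clear, an extra-trees split picks a random feature and a uniformly random cut-point in that feature's observed range, which requires only a min/max scan. A minimal sketch in plain Python (helper names are illustrative; the paper realizes this step inside MPC over secret-shared data):

```python
import random

def random_split(examples, feature_indices):
    """Extra-trees style split: random feature, uniformly random cut-point
    in that feature's observed range. No sorting of examples is needed."""
    feature = random.choice(feature_indices)
    values = [x[feature] for x, _ in examples]
    lo, hi = min(values), max(values)          # a range scan, not a sort
    cut = random.uniform(lo, hi)               # random cut-point
    left  = [(x, y) for x, y in examples if x[feature] <  cut]
    right = [(x, y) for x, y in examples if x[feature] >= cut]
    return feature, cut, left, right

# Toy usage: two numerical features, binary labels.
data = [((1.2, 7.0), 0), ((3.4, 2.2), 1), ((2.8, 5.1), 1), ((0.5, 6.3), 0)]
print(random_split(data, feature_indices=[0, 1]))
```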
Abida Haque, Varun Madathil, Bradley Reaves, Alessandra Scafuro
ePrint Report
Cellular networks connect nearly every human on the planet; they consequently have visibility into location data and voice, SMS, and data contacts and communications. Such near-universal visibility represents a significant threat to the privacy of mobile subscribers. In 5G networks, end-user mobile device manufacturers assign a Permanent Equipment Identifier (PEI) to every new device. Mobile operators legitimately use the PEI to blocklist stolen devices from the network to discourage device theft, but the static PEI also provides a mechanism to uniquely identify and track subscribers. Advertisers and data brokers have also historically abused the PEI for data fusion of location and analytics data, including private data sold by cellular providers.

In this paper, we present a protocol that allows mobile devices to prove that they are not on the blocklist without revealing their PEI to any entity on the network. Thus, we maintain the primary purpose of the PEI while preventing potential privacy violations. We describe a provably secure anonymous proof of blocklist non-membership for cellular networks, based on RSA accumulators and zero-knowledge proofs introduced by Camenisch and Lysyanskaya (Crypto '02) and expanded upon by Li, Li, and Xue (ACNS '07). We show experimentally that this approach is viable for cellular networks: a phone can create a blocklist non-membership proof in only 3432 milliseconds of online computation, and the network can verify the proof in less than one second on average. In total, this adds fewer than 4.5 seconds to the rare network attach process. This work shows that PEIs can be attested anonymously in 5G and future network generations, and it paves the way for additional advances toward a cellular network with guaranteed privacy.
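For intuition, here is the bare accumulator arithmetic underlying such non-membership proofs, in the clear and with insecurely small toy numbers (the deployed protocol wraps this relation in zero-knowledge; all names here are illustrative): if the accumulator is $A = g^u \bmod N$ with $u$ the product of the blocklisted primes, then for any prime $x$ not dividing $u$, Bezout coefficients $a, b$ with $au + bx = 1$ give a witness $(a, d = g^b)$ satisfying $A^a d^x = g$.

```python
# Toy RSA-accumulator non-membership check (insecure parameters, intuition only).
from math import gcd

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

N = 3233                # 61 * 53 -- toy modulus, far too small for real use
g = 5                   # accumulator base
u = 3 * 7 * 11          # product of blocklisted PEIs, encoded as primes
A = pow(g, u, N)        # published accumulator value

x = 13                  # our PEI, a prime not on the blocklist
assert gcd(x, u) == 1
_, a, b = egcd(u, x)    # a*u + b*x = 1
d = pow(g, b, N)        # Python 3.8+ allows negative exponents modulo N
# Verifier checks A^a * d^x == g (mod N), proving x was never accumulated.
assert (pow(A, a, N) * pow(d, x, N)) % N == g
print("non-membership witness verified")
```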
Thomas Debris-Alazard, Maxime Remaud, Jean-Pierre Tillich
ePrint Report
We give a quantum reduction from finding short codewords in a random linear code to decoding for the Hamming metric. This is the first time such a reduction (classical or quantum) has been obtained. Our reduction adapts to linear codes Stehlé-Steinfeld-Tanaka-Xagawa's re-interpretation of Regev's quantum reduction from finding short lattice vectors to solving the Closest Vector Problem. The Hamming metric is a much coarser metric than the Euclidean metric, and this adaptation required several new ingredients to make it work. For instance, in order to have a meaningful reduction it is necessary in the Hamming metric to choose a very large decoding radius, and in many cases this means going beyond the radius where decoding is unique. Another crucial step in the analysis of the reduction is the choice of the errors that are fed to the decoding algorithm. For lattices, errors are usually sampled according to a Gaussian distribution. However, it turns out that the Bernoulli distribution (the analogue for codes of the Gaussian) is too spread out and cannot be used for the reduction with codes. Instead, we choose the uniform distribution over errors of a fixed weight, bring in orthogonal-polynomial tools to perform the analysis, and add an amplitude amplification step to obtain the aforementioned result.
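The distinction between the two error distributions is easy to see empirically: a Bernoulli error has a Hamming weight that fluctuates around its mean, whereas the uniform distribution over weight-$w$ words is supported on a single weight. A quick illustration in plain Python (parameters are arbitrary):

```python
import random

n, tau, w = 1000, 0.1, 100   # length, Bernoulli rate, fixed weight

def bernoulli_error(n, tau):
    """Each coordinate is 1 independently with probability tau."""
    return [1 if random.random() < tau else 0 for _ in range(n)]

def fixed_weight_error(n, w):
    """Uniform over binary words of length n and Hamming weight exactly w."""
    e = [0] * n
    for i in random.sample(range(n), w):
        e[i] = 1
    return e

weights = [sum(bernoulli_error(n, tau)) for _ in range(5)]
print("Bernoulli weights:", weights)                   # spread around 100
print("fixed-weight:", sum(fixed_weight_error(n, w)))  # always exactly 100
```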
Martin Hell, Thomas Johansson, Alexander Maximov, Willi Meier, Hirotaka Yoshida
ePrint Report
Properties of the Grain-128AEAD key re-introduction, performed as part of the cipher initialization, are analyzed and discussed. We consider and analyze several possible alternatives for the key re-introduction and identify weaknesses, or potential weaknesses, in them. Our results show that it seems favorable to separate the state initialization, the key re-introduction, and the $A/R$ register initialization into three separate phases. Based on this, we propose a new cipher initialization and update the cipher version to Grain-128AEADv2. It can be noted that previously reported and published analyses of the cipher remain valid for this new version as well.
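A schematic of the three-phase structure advocated above, in placeholder Python. The update function, state size, and round counts here are hypothetical stand-ins, not the Grain-128AEADv2 specification; the sketch only encodes the ordering of the phases:

```python
def toy_clock(state, feedback=False, extra_input=0):
    """Toy stand-in for a shift-register clocking function (NOT Grain's)."""
    bit = (state[0] ^ state[3] ^ state[7] ^ extra_input) & 1
    return state[1:] + [bit ^ (state[0] if feedback else 0)]

def initialize(state, key, clock, rounds=(128, 64, 32)):
    """Schematic three-phase initialization, kept as separate phases:
    (1) state initialization, (2) key re-introduction, (3) A/R setup.
    Round counts and the update function are illustrative placeholders."""
    r1, r2, r3 = rounds
    for _ in range(r1):                               # phase 1
        state = clock(state, feedback=True)
    for i in range(r2):                               # phase 2
        state = clock(state, extra_input=key[i % len(key)])
    a_r = []
    for _ in range(r3):                               # phase 3
        state = clock(state)
        a_r.append(state[-1])
    return state, a_r

state0 = [0, 1] * 8                                   # 16-bit toy state
key = [1, 0, 1, 1, 0, 0, 1, 0]
print(initialize(state0, key, toy_clock))
```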
Carsten Baum, Lennart Braun, Alexander Munch-Hansen, Peter Scholl
ePrint Report
Zero-knowledge proofs are highly flexible cryptographic protocols that are an important building block for many secure systems. Typically, these are defined with respect to statements that are formulated as arithmetic operations over a fixed finite field. This inflexibility is a disadvantage when it comes to complex programs, as some fields are more amenable than others to expressing certain operations. At the same time, there do not seem to be many proof systems with a programming model resembling those of modern computer architectures, which perform arithmetic with 32- or 64-bit integers.

In this work, we present solutions to both of these problems. First, we show how to efficiently check consistency of secret values between different instances of Zero Knowledge protocols based on the commit-and-prove paradigm. This allows a protocol user to easily switch to the most efficient representation for a given task. To achieve this, we modify the extended doubly-authenticated bits (edabits) approach by Escudero et al. (Crypto 2020), originally developed for MPC, and optimize it for the Zero-Knowledge setting. As an application of our consistency check, we also introduce protocols for efficiently verifying truncations and comparisons of shared values both modulo a large prime $p$ and modulo $2^k$.

Finally, we complement our conversion protocols with new protocols for verifying arithmetic statements in $\mathbb{Z}_{2^k}$. Here, we build upon recent interactive proof systems based on information-theoretic MACs and vector oblivious linear evaluation (VOLE), and show how this paradigm can be adapted to the ring setting. In particular, we show that supporting such modular operations natively in a proof system can be almost as efficient as proofs over large fields or bits, and this also easily plugs into our framework for Zero Knowledge conversions.
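The consistency relation at the heart of such conversions can be pictured in the clear: a value represented both modulo a prime $p$ and modulo $2^k$ is consistent if a single $k$-bit decomposition explains both representations. A plain-Python sketch of the relation (the ZK protocol enforces this over committed values; names here are illustrative):

```python
p, k = 2**61 - 1, 32   # a prime field and a 32-bit ring

def consistent(x_mod_p, x_mod_2k, bits):
    """Check that one k-bit decomposition explains both representations,
    the relation an edabits-style consistency check enforces in ZK."""
    assert len(bits) == k and all(b in (0, 1) for b in bits)
    x = sum(b << i for i, b in enumerate(bits))
    return x % p == x_mod_p and x % (2**k) == x_mod_2k

x = 0xDEADBEEF
bits = [(x >> i) & 1 for i in range(k)]
print(consistent(x % p, x % 2**k, bits))   # True
```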
Mike Rosulek, Lawrence Roy
ePrint Report
We describe a garbling scheme for Boolean circuits, in which XOR gates are free and AND gates require communication of $1.5\kappa + 5$ bits. This improves over the state-of-the-art "half-gates" scheme of Zahur, Rosulek, and Evans (Eurocrypt 2015), in which XOR gates are free and AND gates cost $2\kappa$ bits. The half-gates paper proved a lower bound of $2\kappa$ bits per AND gate, in a model that captured all garbling techniques known at the time. We bypass this lower bound with a novel technique that we call slicing and dicing, which involves slicing wire labels in half and operating separately on those halves. Ours is the first scheme to bypass the lower bound while being fully compatible with free-XOR, making it a drop-in replacement for half-gates. Our construction is proven secure from an assumption similar to that of prior free-XOR garbling (a circular correlation-robust hash), and it uses only slightly more computation than half-gates.
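Concretely, for $\kappa = 128$ the communication per AND gate drops from $2\kappa = 256$ bits to $1.5\kappa + 5 = 197$ bits, roughly a 23% saving; a one-line check:

```python
kappa = 128
half_gates, new = 2 * kappa, int(1.5 * kappa) + 5
print(half_gates, new, f"{1 - new / half_gates:.1%} saved")  # 256 197 23.0% saved
```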
Ke Wu, Gilad Asharov, Elaine Shi (random author ordering)
ePrint Report
Cleve's celebrated lower bound (STOC '86) showed that a de facto strong fairness notion is impossible in 2-party coin toss, i.e., the corrupt party always has a strategy for biasing the honest party's outcome by a noticeable amount. Nonetheless, Blum's famous coin-tossing protocol (CRYPTO '81) achieves a strictly weaker "game-theoretic" notion of fairness: specifically, it is a 2-party coin toss protocol in which neither party can bias the outcome towards its own preference, and thus the honest protocol forms a Nash equilibrium in which neither party would want to deviate. Surprisingly, an n-party analog of Blum's famous coin toss protocol was not studied until recently. The elegant work by Chung et al. was the first to explore the feasibility of game-theoretically fair n-party coin toss in the presence of a corrupt majority. We may assume that each party has a publicly stated preference for either the bit 0 or 1; if the outcome agrees with the party's preference, it obtains utility 1, and otherwise it obtains nothing. A natural game-theoretic formulation is to require that the honest protocol form a coalition-resistant Nash equilibrium, i.e., no coalition should have an incentive to deviate from the honest behavior. Chung et al. phrased this game-theoretic notion as "cooperative-strategy-proofness", or "CSP-fairness" for short. Unfortunately, Chung et al. showed that under (n-1)-sized coalitions, it is impossible to design such a CSP-fair coin toss protocol, unless all parties except one prefer the same bit.

In this paper, we show that the impossibility of Chung et al. is in fact not as broad as it may seem. When coalitions are majority but not n-1 in size, we can indeed get feasibility results in some meaningful parameter regimes. We give a complete characterization of the regime in which CSP-fair coin toss is possible, by providing a matching upper and lower bound. Our complete characterization theorem also shows that the mathematical structure of game-theoretic fairness is starkly different from the de facto strong fairness notion in the multi-party computation literature.
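For reference, the 2-party Blum-style protocol is simply commit, reveal, XOR. A minimal sketch, assuming a hash-based commitment for illustration (Blum's original used different tools):

```python
import hashlib, secrets

def commit(bit: int):
    """Hash-based commitment (illustrative choice, not Blum's original)."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + bytes([bit])).hexdigest(), nonce

def verify(com, nonce, bit):
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == com

# Alice commits first, so Bob's bit cannot depend on hers (and vice versa).
a = secrets.randbelow(2)
com_a, nonce_a = commit(a)        # Alice -> Bob: com_a
b = secrets.randbelow(2)          # Bob -> Alice: b in the clear
assert verify(com_a, nonce_a, a)  # Alice opens; Bob checks the opening
print("coin:", a ^ b)             # neither party can bias toward a preference
```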
Aggelos Kiayias, Orfeas Stefanos Thyfronitis Litos
ePrint Report
A dominant approach towards solving the scalability problem in blockchain systems has been the development of layer 2 protocols, specifically payment channel networks (PCNs) such as the Lightning Network (LN) over Bitcoin. Routing payments over LN requires the coordination of all path intermediaries in a multi-hop round trip that encumbers the layer 2 solution in terms of both responsiveness and privacy. The issue is resolved by "virtual channel" protocols that, capitalizing on a suitable setup operation, enable the two endpoints to engage as if they had a direct payment channel between them.

Apart from communication efficiency, virtual channel constructions have three natural desiderata. A virtual channel constructor is recursive if it can also be applied to pre-existing virtual channels, variadic if it can be applied to any number of pre-existing channels, and symmetric if it encumbers all channel participants in an egalitarian fashion in both the optimistic and pessimistic execution paths. We put forth the first Bitcoin-suitable recursive variadic virtual channel construction. Furthermore, our virtual channel constructor is symmetric and offers optimal round complexity in both the optimistic and pessimistic execution paths. Our virtual channels can be implemented over Bitcoin assuming the ANYPREVOUT signature type, a feature that we prove necessary for any efficient protocol that has parties maintain a set of Bitcoin transactions in their local state. We express and prove the security of our construction in the universal composition setting.
Nitin Pundir, Fahim Rahman, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
Field Programmable Gate Arrays (FPGAs) used as hardware accelerators in the cloud domain allow end-users to accelerate their custom applications while ensuring minimal dynamic power consumption. Cloud infrastructures aim to maximize profit by achieving optimized resource sharing among their cloud users. However, the FPGAs' reconfigurable nature poses unique security and privacy challenges in a shared cloud environment. In this paper, we aim to understand the interactions between FPGAs and the host servers on the cloud in order to analyze the security of FaaS platforms. We propose a vulnerability taxonomy based on the runtime attributes of FaaS platforms; the taxonomy aims to assist in identifying critical sources of vulnerabilities in the platform, allowing focused security verification. We demonstrate the proof of concept by characterizing the potential sources of vulnerabilities in the Stratix-10 FaaS platform, and then narrowing our focus to a single major source in order to conduct more targeted verification. This aided in the identification of several low-level software vulnerabilities, which could be remotely exploited to mount denial-of-service and information-leakage attacks. The concerned entities have released software updates to address the vulnerabilities.
Gili Schul-Ganz, Gil Segev
ePrint Report
Following the pioneering work of Boneh and Franklin (CRYPTO '01), the challenge of constructing an identity-based encryption scheme based on the Diffie-Hellman assumption remained unresolved for more than 15 years. Evidence supporting this lack of success was provided by Papakonstantinou, Rackoff and Vahlis (ePrint '12), who ruled out the existence of generic-group identity-based encryption schemes supporting an identity space of sufficiently large polynomial size. Nevertheless, the breakthrough result of Döttling and Garg (CRYPTO '17) settled this long-standing challenge via a non-generic construction.

We prove a tight impossibility result for generic-group identity-based encryption, ruling out the existence of any non-trivial construction: We show that any scheme whose public parameters include $n_{\sf pp}$ group elements may support at most $n_{\sf pp}$ identities. This threshold is trivially met by any generic-group public-key encryption scheme whose public keys consist of a single group element (e.g., ElGamal encryption).
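The trivially-met threshold is exemplified by textbook ElGamal, whose public key is a single group element. A toy sketch over $\mathbb{Z}_p^*$ (insecure parameter choices, for illustration only):

```python
import secrets

p, g = 2**127 - 1, 3        # toy group: 2^127 - 1 is prime, but not secure here
x = secrets.randbelow(p - 2) + 1
pk = pow(g, x, p)           # the public key: one group element

def enc(pk, m):
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def dec(x, c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x by Fermat

m = 123456789
assert dec(x, enc(pk, m)) == m
```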

In the context of algebraic constructions, generic realizations are often both conceptually simpler and more efficient than non-generic ones. Thus, identifying exact thresholds for the limitations of generic groups is not only of theoretical significance but may in fact have practical implications when considering concrete security parameters.

03 June 2021

Antonin Leroux
ePrint Report
In this paper, we introduce a new method to prove knowledge of an isogeny of given degree between two supersingular elliptic curves. Our approach can be extended to verify the evaluation of the secret isogeny on some points of the domain. The main advantage of this new proof of knowledge is its compactness, which is orders of magnitude better than existing proofs of isogeny knowledge. The principle of our method is to reveal some well-chosen endomorphisms, and it does not constitute a zero-knowledge proof. However, when the degree is a large prime, we can introduce a new hardness assumption upon which we build the first verifiable random function (VRF) based on isogenies. Our protocol can be seen as a generalization of the BLS-style classical construction from elliptic curves, and it achieves one-time pseudo-randomness in the random oracle model. We propose concrete parameters for this new scheme that reach the NIST-1 post-quantum security level. Our VRF has an overall cost (proof size, key size and output size) of roughly $1$ KB, which is shorter than all the other post-quantum instantiations based on lattices. In the process, we also develop several algorithmic tools for solving norm equations over quaternion orders that may be of independent interest.
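For orientation, the classical construction being generalized is the BLS-style VRF over pairing-friendly curves, stated here for reference (the isogeny-based scheme replaces the secret exponent with a secret isogeny):

\[
\pi = H(m)^{sk}, \qquad y = H'(\pi), \qquad
\text{verification: } e(\pi, g) \stackrel{?}{=} e(H(m), pk) \ \text{ with } pk = g^{sk}.
\]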
Shumo Chu, Yu Xia, Zhenfei Zhang
ePrint Report
We propose Manta, a plug-and-play private DeFi stack that consists of MantaDAP, a multi-asset decentralized anonymous payment scheme, and MantaDAX, an automated market maker (AMM) based decentralized anonymous exchange scheme. Compared with existing privacy-preserving cryptocurrencies such as Zcash and Monero, Manta supports multiple base assets and allows the privatized assets to be exchanged anonymously via MantaDAX. We view this as a major step towards building a privacy-preserving DeFi stack. Thanks to the efficiency of modern NIZKs (non-interactive zero-knowledge proof systems) and our carefully crafted design, Manta is efficient: our benchmarks report a 15-second offline zero-knowledge proof (ZKP) generation time and a 6-millisecond online proof verification time.
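As background on the AMM component: the most common AMM design, constant-product (as popularized by Uniswap; the abstract does not specify which pricing curve MantaDAX uses, so this is an assumed example), prices a swap by keeping the product of pool reserves fixed. A toy sketch:

```python
def swap(reserve_in, reserve_out, amount_in, fee=0.003):
    """Constant-product AMM: keep reserve_in * reserve_out invariant
    (up to the fee) and pay out the difference."""
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in_after_fee
    return reserve_out - k / new_reserve_in

# Swap 10 units of asset A into a pool holding 1000 A and 500 B.
print(f"{swap(1000, 500, 10):.4f} B received")   # ~4.94 B
```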
Dimitris Karakostas, Aggelos Kiayias, Mario Larangeira
ePrint Report
Proof-of-Stake (PoS) distributed ledgers are the most common alternative to Bitcoin's Proof-of-Work (PoW) paradigm, replacing the hardware dependency with stake, i.e., assets that a party controls. Similar to PoW's mining pools, PoS's stake pools, i.e., collaborative entities comprising multiple stakeholders, allow a party to earn rewards more regularly compared to participating on an individual basis. However, stake pools tend to increase centralization, since they are typically managed by a single party that acts on behalf of the pool's members. In this work we propose Conclave, a formal design of a Collective Stake Pool, i.e., a decentralized pool with no single point of authority. We formalize Conclave as an ideal functionality and implement it as a distributed protocol, based on standard cryptographic primitives. Among Conclave's building blocks is a weighted threshold signature scheme (WTSS); to that end, we define a WTSS ideal functionality, which might be of independent interest, and propose two constructions based on threshold ECDSA, which enable (1) fast trustless setup and (2) identifiable aborts.
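One classic way to realize a weighted threshold access structure is share replication: give each party as many Shamir shares as its weight, so a coalition can reconstruct exactly when its total weight reaches the threshold. A toy secret-sharing sketch of that access structure (the paper's WTSS constructions are built from threshold ECDSA instead, so this only illustrates the "weighted threshold" idea):

```python
import secrets

P = 2**61 - 1   # prime field

def share(secret, t, weights):
    """Shamir over GF(P): party i receives weights[i] shares, so any
    coalition with total weight >= t can reconstruct the secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    shares, x = [], 1
    for w in weights:
        shares.append([(x + i, f(x + i)) for i in range(w)])
        x += w
    return shares

def reconstruct(points):
    """Lagrange interpolation at 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(42, t=4, weights=[1, 2, 3])   # three parties, weights 1, 2, 3
coalition = shares[1] + shares[2]            # total weight 2 + 3 >= 4
print(reconstruct(coalition[:4]))            # 42
```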
Keita Xagawa
ePrint Report
This short note shows that NTRU, a finalist in NIST PQC Round 3, is anonymous in the QROM if the underlying NTRU PKE is strongly disjoint-simulatable, and that a hybrid PKE scheme constructed from NTRU as the KEM and an appropriate DEM is anonymous and robust.

This solves the open problem of investigating the anonymity and robustness of NTRU posed by Grubbs, Maram, and Paterson (Cryptology ePrint Archive 2021/708).
Keita Xagawa
ePrint Report
The Boneh-Katz transformation (CT-RSA 2005) converts a selectively-secure identity/tag-based encryption scheme into a public-key encryption scheme secure against chosen-ciphertext attacks. We show that if the underlying primitives are pseudorandom, then the public-key encryption scheme obtained by the Boneh-Katz transformation is also pseudorandom. A similar result holds for oblivious sampleability (Canetti and Fischlin (CRYPTO 2001)).

As applications, we can construct

* pseudorandom and obliviously-samplable public-key encryption schemes from lattices and codes,

* universally-composable non-interactive bit-commitment from lattices,

* public-key steganography which is steganographically secure against adaptive chosen-covertext attacks and steganographic key-exchange from lattices and codes,

* anonymous authenticated key exchange from lattices and codes,

* public-key encryption secure against simulation-based, selective-opening chosen-ciphertext attacks from lattices and codes.
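For orientation, the shape of the Boneh-Katz transformation, as we recall it from the CHK/BK line of work, is roughly: commit to a one-time MAC key, encrypt the message plus the decommitment under the commitment as the tag, then MAC the ciphertext. The sketch below uses toy symmetric stand-ins for the tag-based encryption and commitment (clearly NOT secure and not the paper's construction; it only illustrates the data flow):

```python
import hashlib, hmac, secrets

MASTER = secrets.token_bytes(32)

def tbe_enc(tag: bytes, msg: bytes) -> bytes:
    """Toy stand-in for tag-based encryption (insecure one-time pad from a hash)."""
    pad = hashlib.sha256(MASTER + tag).digest()
    return bytes(a ^ b for a, b in zip(msg.ljust(32, b'\0'), pad))

def commit(k: bytes):
    """Toy hash-based commitment to the MAC key."""
    dec = secrets.token_bytes(16)
    return hashlib.sha256(dec + k).digest(), dec

def bk_encrypt(m: bytes):
    k = secrets.token_bytes(16)              # one-time MAC key
    com, dec = commit(k)                     # commitment to k
    c = tbe_enc(com, m + dec)                # encrypt under tag = com
    tag = hmac.new(k, c, 'sha256').digest()  # MAC the ciphertext
    return com, c, tag                       # decryption inverts and checks both

print(bk_encrypt(b'hello'))
```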
Tomer Ashur, Efrat Cohen, Carmit Hazay, Avishay Yanai
ePrint Report
Garbled circuits are a fundamental cryptographic building block for encoding Boolean circuits as a sequence of encrypted values. This sequence allows two parties to securely evaluate the circuit, e.g., without revealing their respective inputs. At the heart of any garbling scheme lies a randomized algorithm projecting the plain values into a larger domain. Emerging from a large body of work, the common paradigm satisfies two implicit properties: the circuit is garbled progressively gate-wise, and all underlying algorithms are linear. In this setting, the communication complexity is measured in the number of ciphertexts sent, and it was shown to be optimal with a scheme sending two ciphertexts per AND gate and no ciphertexts per XOR gate (Zahur, Rosulek and Evans, Eurocrypt '15).

We revisit the common paradigm and extend the seminal work of Bellare, Hoang, and Rogaway (CCS 2012) to present, for the first time, an abstraction of the garbling algorithm itself. This abstraction highlights how Yao's work (Yao, FOCS '86) and all its optimizations focused on improving just one aspect of the garbling. We then discuss how improving the other aspects could provide new ways to overcome the limitations of existing schemes. As a proof of concept, we present a non-bijective scheme avoiding Zahur et al.'s bound, achieving a communication complexity of a single data item which is not a ciphertext.
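To make the gate-wise paradigm concrete, here is a toy textbook-style garbled AND gate: each wire gets two random labels, and the correct output label is encrypted under every pair of input labels. Illustrative Python only; real schemes add the optimizations discussed above, and the evaluator's validity check here (membership in the output-label list) is a cheat standing in for point-and-permute:

```python
import hashlib, itertools, secrets

def H(*labels):
    """Hash-based one-time pad for a table row."""
    return int.from_bytes(hashlib.sha256(b''.join(labels)).digest()[:16], 'big')

def garble_and(a_labels, b_labels, out_labels):
    """Encrypt the correct output label under every input-label pair."""
    table = []
    for va, vb in itertools.product((0, 1), repeat=2):
        pad = H(a_labels[va], b_labels[vb])
        table.append(pad ^ int.from_bytes(out_labels[va & vb], 'big'))
    secrets.SystemRandom().shuffle(table)     # hide the row order
    return table

def evaluate(table, a_label, b_label, out_labels):
    """With one label per input wire, exactly one row decrypts validly."""
    pad = H(a_label, b_label)
    for row in table:
        candidate = (row ^ pad).to_bytes(16, 'big')
        if candidate in out_labels:           # toy validity check (see above)
            return candidate

wa = [secrets.token_bytes(16) for _ in range(2)]   # labels for bits 0 and 1
wb = [secrets.token_bytes(16) for _ in range(2)]
wo = [secrets.token_bytes(16) for _ in range(2)]
table = garble_and(wa, wb, wo)
assert evaluate(table, wa[1], wb[1], wo) == wo[1]  # 1 AND 1 -> output label 1
```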
Nico Döttling, Dominik Hartmann, Dennis Hofheinz, Eike Kiltz, Sven Schäge, Bogdan Ursu
ePrint Report
The existence of one-way functions implies secure digital signatures, but not public-key encryption (at least in a black-box setting). Somewhat surprisingly, though, efficient public-key encryption schemes appear to be much easier to construct from concrete algebraic assumptions (such as factoring or Diffie-Hellman-like assumptions) than efficient digital signature schemes. In this work, we provide one reason for this apparent difficulty in constructing efficient signature schemes.

Specifically, we prove that a wide range of algebraic signature schemes (in which verification essentially checks a number of linear equations over a group) fall to conceptually simple linear-algebra attacks. In fact, we prove that in an algebraic signature scheme, sufficiently many signatures can be linearly combined into a signature on a fresh message. We present attacks both in known-order and hidden-order groups (although in hidden-order settings, we have to restrict our definition of algebraic signatures a little). More explicitly, we show:

- the insecurity of all signature schemes in Maurer's generic group model (in pairing-free groups), as long as the signature schemes do not rely on other cryptographic assumptions, such as hash functions;

- the insecurity of a natural class of signatures in hidden-order groups, where verification consists of linear equations over group elements.

We believe that this highlights the crucial role of public verifiability in digital signature schemes. Namely, while public-key encryption schemes do not require any publicly verifiable structure on ciphertexts, it is exactly this structure on signatures that invites attacks like ours and makes it hard to construct efficient signatures.
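In schematic form, the attack is pure linear algebra. Suppose, as a simplified model of the schemes above, that verifying $\vec{\sigma}$ on $m$ checks $A\,\vec{\sigma} = \vec{t}(m)$ for a fixed public matrix $A$ and a right-hand side depending linearly on the message encoding. Then

\[
\vec{t}(m^*) = \sum_i \lambda_i \, \vec{t}(m_i)
\quad \Longrightarrow \quad
A \Big( \sum_i \lambda_i \, \vec{\sigma}_i \Big) = \sum_i \lambda_i \, \vec{t}(m_i) = \vec{t}(m^*),
\]

so $\vec{\sigma}^* = \sum_i \lambda_i \vec{\sigma}_i$ verifies for the fresh message $m^*$ once enough signatures have been collected to span the relevant space.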
Akiko Inoue, Kazuhiko Minematsu
ePrint Report
GIFT-COFB is a finalist of the NIST Lightweight Cryptography project, which aims at standardizing authenticated encryption schemes for constrained devices. It is a block cipher-based scheme and comes with a provable security result. This paper studies the tightness of the provable security bounds of GIFT-COFB, which roughly tell us that, if the scheme is instantiated with a secure $n$-bit block cipher, we need $2^{n/2}$ encrypted blocks or $2^{n/2}/n$ decryption queries to break it. This paper shows that the former condition is indeed tight, by presenting forgery attacks that work with $2^{n/2}$ encrypted blocks and a single decryption query. This fills the missing spot in the previous attacks presented by Khairallah, and confirms the tightness of the security bounds with respect to encryption. We remark that our attacks work independently of the underlying block cipher.
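For the standard $n = 128$ instantiation, those thresholds are concrete: $2^{64}$ encrypted blocks and $2^{64}/128 = 2^{57}$ decryption queries. A quick arithmetic check:

```python
n = 128
blocks = 2 ** (n // 2)           # encrypted blocks needed: 2^64
dec_queries = blocks // n        # decryption-query bound: 2^57
print(blocks, dec_queries == 2**57)   # 18446744073709551616 True
```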
Nuttapong Attrapadung, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Takahiro Matsuda, Ibuki Mishina, Hiraku Morita, Jacob C. N. Schuldt
ePrint Report
Machine Learning (ML) algorithms, especially deep neural networks (DNN), have proven themselves to be extremely useful tools for data analysis, and are increasingly being deployed in systems operating on sensitive data, such as recommendation systems, banking fraud detection, and healthcare systems. This underscores the need for privacy-preserving ML (PPML) systems, and has inspired a line of research into how such systems can be constructed efficiently. We contribute to this line of research by proposing a framework that allows efficient and secure evaluation of full-fledged state-of-the-art ML algorithms via secure multi-party computation (MPC). This is in contrast to most prior works on PPML, which require advanced ML algorithms to be substituted with approximated variants that are ``MPC-friendly'', before MPC techniques are applied to obtain a PPML algorithm. A drawback of the latter approach is that it requires careful fine-tuning of the combined ML and MPC algorithms, and might lead to less efficient algorithms or inferior quality ML (such as lower prediction accuracy). This is an issue for secure training of DNNs in particular, as this involves several arithmetic algorithms that are thought to be ``MPC-unfriendly'', namely, integer division, exponentiation, inversion, and square root extraction.

In this work, we propose secure and efficient protocols for the above seemingly MPC-unfriendly computations (which are nonetheless essential to DNNs). Our protocols are three-party protocols in the honest-majority setting, and we propose both passively secure and actively secure (with abort) variants. A notable feature of our protocols is that they simultaneously provide high accuracy and efficiency. This framework enables us to efficiently and securely compute modern ML algorithms such as Adam (adaptive moment estimation) and the softmax function ``as is'', without resorting to approximations. As a result, we obtain secure DNN training that outperforms state-of-the-art three-party systems; our \textit{full} training is up to $6.7$ times faster than just the \textit{online} phase of the recently proposed FALCON (Wagh et al., PETS '21) on the standard benchmark network for secure training of DNNs. To further demonstrate the scalability of our protocols, we perform measurements on real-world DNNs, AlexNet and VGG16, which are complex networks containing millions of parameters. The performance of our framework is up to a factor of about $12\sim 14$ faster for AlexNet and $46\sim 48$ faster for VGG16, to achieve an accuracy of $70\%$ and $75\%$, respectively, when compared to FALCON.
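For reference, the two "as is" computations mentioned above involve exactly the operations listed as MPC-unfriendly: softmax needs exponentiation and division, and Adam additionally needs square-root extraction and inversion. A plain-Python reminder of what must be computed securely (variable names are illustrative):

```python
import math

def softmax(z):
    """Numerically stable softmax: exponentiation + division."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: needs division and square-root extraction."""
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    theta = [p - lr * mh / (math.sqrt(vh) + eps)
             for p, mh, vh in zip(theta, m_hat, v_hat)]
    return theta, m, v

print(softmax([1.0, 2.0, 3.0]))
```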
Diego F. Aranha, Sebastian Berndt, Thomas Eisenbarth, Okan Seker, Akira Takahashi, Luca Wilke, Greg Zaverucha
ePrint Report
We study masking countermeasures for side-channel attacks against signature schemes constructed from the MPC-in-the-head paradigm, specifically when the MPC protocol uses preprocessing. This class of signature schemes includes Picnic, an alternate candidate in the third round of the NIST post-quantum standardization project. The only previously known approach to masking MPC-in-the-head signatures suffers from interoperability issues and increased signature sizes. Further, we present a new attack to demonstrate that known countermeasures are not sufficient when the MPC protocol uses a preprocessing phase, as in Picnic3.

We overcome these challenges by showing how to mask the underlying zero-knowledge proof system due to Katz, Kolesnikov, and Wang (CCS 2018) for any masking order, and by formally proving that our approach meets the standard security notions of non-interference for masking countermeasures. As a case study, we apply our masking technique to Picnic. We then implement different masked versions of Picnic signing, providing first-order protection on the ARM Cortex-M4 platform, and quantify the overhead of these different masking approaches. We carefully analyze the side-channel risk of hashing operations, and give optimizations that reduce the CPU cost of protecting hashing in Picnic by a factor of five. The performance penalties of the masking countermeasures range from a factor of 1.8 to 5.5, depending on the degree of masking applied to hash function invocations.
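The basic first-order countermeasure can be sketched in a few lines: every sensitive value is split into two random shares whose XOR is the secret, so no single intermediate value correlates with it. A toy Python sketch (the paper's contribution is doing this for the full KKW proof system, at any masking order):

```python
import secrets

def mask(x: int) -> tuple:
    """First-order Boolean masking: x = s0 XOR s1, with s0 uniformly random."""
    s0 = secrets.randbits(8)
    return s0, x ^ s0

def masked_xor(a, b):
    """XOR is linear: operate share-wise, never recombining the secret."""
    return a[0] ^ b[0], a[1] ^ b[1]

x, y = 0x3C, 0xA5
sx, sy = mask(x), mask(y)
r0, r1 = masked_xor(sx, sy)
assert r0 ^ r1 == x ^ y        # unmask only at the very end
```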