IACR News
Here you can see all recent updates to the IACR webpage.
06 October 2020
Jonathan Takeshita, Dayane Reis, Ting Gong, Michael Niemier, X. Sharon Hu, Taeho Jung
ePrint Report
Somewhat Homomorphic Encryption (SHE) allows arbitrary computation with finite multiplicative depth to be performed on encrypted data, but its overhead is high due to the memory transfer incurred by large ciphertexts. Recent research has recognized the shortcomings of general-purpose computing for high-performance SHE, and has begun to pioneer the use of hardware-based SHE acceleration with hardware including FPGAs, GPUs, and Compute-Enabled RAM (CE-RAM). CE-RAM is well-suited for SHE, as it is not limited by the separation between memory and processing that bottlenecks other hardware. Further, CE-RAM does not move data between different processing elements. Recent research has shown the high effectiveness of CE-RAM for SHE as compared to highly-optimized CPU and FPGA implementations. However, algorithmic optimization for the implementation on CE-RAM is underexplored. In this work, we examine the effect of existing algorithmic optimizations upon a CE-RAM implementation of the B/FV scheme, and further introduce novel optimization techniques for the Full RNS Variant of B/FV. Our experiments show speedups of up to 784x for homomorphic multiplication, 143x for decryption, and 330x for encryption against a CPU implementation. We also compare our approach to similar work in CE-RAM, FPGA, and GPU acceleration, and note general improvement over existing work. In particular, for homomorphic multiplication we see speedups of 506.5x against CE-RAM, 66.85x against FPGA, and 30.8x against GPU as compared to existing work in hardware acceleration of B/FV.
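The Full RNS Variant mentioned above rests on the residue number system: a large integer is represented by its residues modulo several pairwise-coprime word-sized moduli, so multiplication proceeds channel by channel with no multi-precision carries. A minimal sketch of the idea (the toy moduli below are illustrative, not the scheme's actual parameters):

```python
from functools import reduce

MODULI = [257, 263, 269]                 # pairwise-coprime toy moduli; real schemes use ~60-bit primes
M = reduce(lambda a, b: a * b, MODULI)   # dynamic range of the representation

def to_rns(x):
    return [x % m for m in MODULI]

def rns_mul(a, b):
    # Each channel is independent: no carries propagate across moduli.
    return [(x * y) % m for x, y, m in zip(a, b, MODULI)]

def from_rns(res):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(.., -1, m) is the modular inverse (Python 3.8+)
    return x % M

a, b = 12345, 6789
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```

Because the channels never interact, an RNS multiply maps naturally onto parallel hardware such as CE-RAM.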
Muhammed F. Esgin, Veronika Kuchta, Amin Sakzad, Ron Steinfeld, Zhenfei Zhang, Shifeng Sun, Shumo Chu
ePrint Report
In this work, we introduce the first practical post-quantum verifiable random function (VRF) that relies on well-known (module) lattice problems, namely Module-SIS and Module-LWE. Our construction, named LB-VRF, results in a VRF value of only 84 bytes and a proof of around only 5 KB (in comparison to several MBs in earlier works), and runs in about 3 ms for evaluation and about 1 ms for verification.
In order to design a practical scheme, we need to restrict the number of VRF outputs per key pair, which makes our construction few-time. Despite this restriction, we show how our few-time LB-VRF can be used in practice and, in particular, we estimate the performance of Algorand using LB-VRF. We find that, due to the significant increase in the communication size in comparison to classical constructions, which is inherent in all existing lattice-based schemes, the throughput in LB-VRF-based consensus protocol is reduced, but remains practical. In particular, in a medium-sized network with 100 nodes, our platform records a 1.16x to 4x reduction in throughput, depending on the accompanying signature used. In the case of a large network with 500 nodes, we can still maintain at least 66 transactions per second. This is still much better than Bitcoin, which processes only about 5 transactions per second.
Tatsuya Suzuki, Keita Emura, Toshihiro Ohigashi, Kazumasa Omote
ePrint Report
Most functional encryption schemes implicitly assume that inputs to decryption algorithms, i.e., secret keys and ciphertexts, are generated honestly. However, they may be tampered with by malicious adversaries. Thus, verifiable functional encryption (VFE) was proposed by Badrinarayanan et al. at ASIACRYPT 2016, where anyone can publicly check the validity of secret keys and ciphertexts. They employed indistinguishability-based (IND-based) security due to an impossibility result for simulation-based (SIM-based) VFE, even though SIM-based security is more desirable.
In this paper, we propose a SIM-based VFE scheme. To bypass the impossibility result, we introduce a trusted setup assumption. Although it appears to be a strong assumption, we demonstrate that it is reasonable in a hardware-based construction, e.g., that of Fisch et al. at ACM CCS 2017. Our construction is based on a verifiable public-key encryption scheme (Nieto et al., SCN 2012), a signature scheme, and a secure hardware scheme, a combination we refer to as VFE-HW. Finally, we discuss our implementation of VFE-HW using Intel Software Guard Extensions (Intel SGX).
Hassan Jameel Asghar, Slawomir Matelski, Josef Pieprzyk
ePrint Report
This report contains analysis and discussion of different versions of the TopoSign (Topographic Signature) protocol proposed by Matelski.
Shingo Sato, Junji Shikata, Tsutomu Matsumoto
ePrint Report
In this paper, we comprehensively study aggregate signatures with detecting functionality, which combine keyless aggregation of multiple signatures with the ability to identify an invalid message from the aggregate signature, in order to reduce the total signature size for large numbers of messages.
Our contribution is (i) to formalize strong security notions for both non-interactive and interactive protocols, taking into account related work such as fault-tolerant aggregate signatures and (non-)interactive aggregate MACs with detecting functionality (i.e., the symmetric case); and (ii) to construct aggregate signatures with this functionality from group-testing protocols in a generic and comprehensive way. As instantiations, pairing-based constructions are provided.
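To illustrate the group-testing idea behind such schemes: if signatures can be aggregated over arbitrary pools and at most one message is invalid, a bit-index pooling pattern identifies the culprit directly from the pool verification outcomes. This is only a hypothetical sketch of non-adaptive group testing with a single defective, not the paper's construction:

```python
def find_invalid(n_items, pool_fails, n_bits):
    # Pool j contains every item whose j-th index bit is 1.
    # A pool "fails" iff it contains the invalid item, so the
    # pattern of failing pools spells out the invalid index in binary.
    idx = 0
    for j in range(n_bits):
        if pool_fails(j):
            idx |= 1 << j
    return idx

# Simulate: item 11 out of 16 carries an invalid signature.
bad = 11
fails = lambda j: (bad >> j) & 1 == 1   # stand-in for aggregate verification of pool j
assert find_invalid(16, fails, 4) == 11
```

With n messages, log2(n) pools suffice to locate one invalid message, instead of n individual verifications; handling several invalid messages requires richer pooling designs, which is where the paper's group-testing machinery comes in.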
Shingo Sato, Junji Shikata
ePrint Report
In this paper, we propose a formal security model and a construction methodology for interactive aggregate message authentication with detecting functionality (IAMD). IAMD is an interactive aggregate MAC protocol that can identify invalid messages with a small tag size. Several aggregate MAC schemes that can identify invalid messages have been proposed in prior work using non-adaptive group testing. In this paper, we utilize adaptive group testing to construct an IAMD scheme, and we show that the resulting scheme can identify invalid messages with a smaller tag size than the previous schemes.
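To sketch why adaptivity helps when at most one message is invalid: each interactive round can aggregate over half of the remaining suspects, so a binary search with O(log n) aggregate verifications pins down the invalid message. The `batch_ok` oracle below is a purely hypothetical stand-in for aggregate-MAC verification, not the paper's protocol:

```python
def find_invalid_adaptive(msgs, batch_ok):
    # batch_ok(sub) verifies one aggregate tag over `sub`; each adaptive
    # round halves the suspect set, so ~log2(n) tags are exchanged.
    lo, hi = 0, len(msgs)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if batch_ok(msgs[lo:mid]):
            lo = mid          # left half verified -> culprit is on the right
        else:
            hi = mid          # left half failed -> culprit is on the left
    return lo

msgs = list(range(100))
bad = 42
ok = lambda sub: bad not in sub   # simulated aggregate verification
assert find_invalid_adaptive(msgs, ok) == 42
```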
Pedro Hecht
ePrint Report
NIST is currently conducting the 3rd round of a survey to find post-quantum class asymmetric protocols (PQC) [1]. We participated, in a joint team with a fellow researcher of the Interamerican Open University (UAI), with a key-exchange protocol (KEP) called HK17 [2]. The proposal was flawed: Bernstein [3] found a weakness, which was later refined by Li [4] using a quadratic reduction of octonions and quaternions, although no objection was raised against the published non-commutative protocol and the one-way trapdoor function (OWTF). This fact prompted the search for a suitable algebraic platform. HK17 was of interest because it was the only first-round offer strictly based on canonical group theory [5]. At last, we adapted the original protocol with the R-propping solution of 3-dimensional tensors [6], rendering Bernstein's attack fruitless. Therefore, an El Gamal cipher with IND-CCA2 security, using Cao's [7] arguments, is at hand.
Erdem Alkim, Dean Yun-Li Cheng, Chi-Ming Marvin Chung, Hülya Evkan, Leo Wei-Lun Huang, Vincent Hwang, Ching-Lin Trista Li, Ruben Niederhagen, Cheng-Jhih Shih, Julian Wälde, Bo-Yin Yang
ePrint Report
This paper proposes two different methods to perform NTT-based polynomial multiplication in polynomial rings that do not naturally support such a multiplication. We demonstrate these methods on the NTRU Prime key-encapsulation mechanism (KEM) proposed by Bernstein, Chuengsatiansup, Lange, and van Vredendaal, which uses a polynomial ring that is, by design, not amenable to use with NTT. One of our approaches uses Good's trick and focuses on speed and on supporting more than one parameter set with a single implementation. The other uses a mixed-radix NTT and focuses on smaller multipliers and lower memory use.
On an ARM Cortex-M4 microcontroller, we show that our three NTT-based implementations, one based on Good's trick and two on mixed-radix NTTs, provide between 17% and 32% faster polynomial multiplication. For the parameter set ntrulpr761, this results in between 9% and 16% faster total operations (sum of key generation, encapsulation, and decapsulation) and requires between 15% and 39% less memory than the current state-of-the-art NTRU Prime implementation on this platform, which uses Toom-Cook-based polynomial multiplication.
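To make the NTT approach concrete: over Z_q[x]/(x^n − 1), when q admits a primitive n-th root of unity ω, polynomial multiplication reduces to pointwise products in the transform domain. The toy parameters below (q = 17, n = 8, ω = 2) are illustrative only; the paper's contribution is precisely how to reach such a transform from NTRU Prime's deliberately NTT-unfriendly ring:

```python
Q, N, W = 17, 8, 2   # 2 is a primitive 8th root of unity mod 17 (2^4 = 16 = -1 mod 17)

def ntt(a, w):
    # O(n^2) evaluation at the powers of w -- enough to show the principle;
    # real implementations use an O(n log n) butterfly network.
    return [sum(a[j] * pow(w, i * j, Q) for j in range(N)) % Q for i in range(N)]

def poly_mul_cyclic(a, b):
    # Multiply in Z_17[x]/(x^8 - 1): transform, multiply pointwise, invert.
    A, B = ntt(a, W), ntt(b, W)
    C = [(x * y) % Q for x, y in zip(A, B)]
    inv_n = pow(N, -1, Q)
    return [(c * inv_n) % Q for c in ntt(C, pow(W, -1, Q))]

def schoolbook(a, b):
    # Reference cyclic convolution for comparison.
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % Q
    return c

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
assert poly_mul_cyclic(a, b) == schoolbook(a, b)
```

NTRU Prime's ring x^p − x − 1 with prime p rules out this structure directly, which is why tricks such as Good's permutation or a mixed-radix NTT over a larger, NTT-friendly ring are needed.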
Steve Babbage, Alexander Maximov
ePrint Report
This short report contains results of a brief cryptanalysis of the initialisation phase of ZUC-256. We find IV differentials that persist for 26 of the 33 initialisation rounds, and Key differentials that persist for 28 of the 33 rounds.
Majid Mumtaz, Ping Luo
ePrint Report
Boneh and Durfee proposed (at Eurocrypt 1999) polynomial-time attacks on RSA with a small decryption exponent, which exploit lattice and sub-lattice structure to obtain the optimized bounds d < N^0.284 and d < N^0.292, respectively, using the lattice-based Coppersmith method. In this paper we propose a special case of the Boneh-Durfee attack with respect to a large private exponent (i.e., d = N^ε > e = N^α, where ε and α are the private and public key exponents, respectively) for some α ≤ ε satisfying the condition d > φ(N) − N^ε. We analyze lattices whose basis matrices are triangular and non-triangular using the large decryption exponent and focus group attacks, respectively. The core objective is to explore the underlying algebraic structure of RSA polynomials so that we can improve the performance of weak-key attacks. In our solution, we implemented the attack and performed several experiments to show that an RSA cryptosystem can be successfully attacked, revealing possible weak keys that can ultimately enable an adversary to factorize the RSA modulus.
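For context, here is a runnable sketch of Wiener's classic continued-fraction attack, the small-exponent precursor whose d < N^0.25 bound Boneh-Durfee's lattice method improves. The paper above targets the large-exponent case, so this is only an illustration of the weak-key attack family, using a textbook toy key:

```python
from math import isqrt

def convergents(num, den):
    # Continued-fraction convergents h/k of num/den.
    h0, h1, k0, k1 = 0, 1, 1, 0
    while den:
        q = num // den
        num, den = den, num - q * den
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener_attack(e, N):
    # If d is small, then k/d (with e*d - k*phi = 1) appears among the
    # convergents of e/N; each candidate is checked by factoring N.
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                      # candidate for p + q
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d                         # x^2 - s*x + N has integer roots p, q
    return None

# Toy key: N = 239 * 379, d = 5, far below the N^0.25 Wiener bound.
assert wiener_attack(17993, 90581) == 5
```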
Joseph Jaeger, Stefano Tessaro
ePrint Report
This paper studies concrete security with respect to expected-time adversaries. Our first contribution is a set of generic tools to obtain tight bounds on the advantage of an adversary with expected-time guarantees. We apply these tools to derive bounds in the random-oracle and generic-group models, which we show to be tight.
As our second contribution, we use these results to derive concrete bounds on the soundness of public-coin proofs and arguments of knowledge. Under the lens of concrete security, we revisit a paradigm by Bootle et al. (EUROCRYPT '16) that proposes a general Forking Lemma for multi-round protocols implementing a rewinding strategy with expected-time guarantees. We give a tighter analysis, as well as a modular statement. We adapt this to obtain the first quantitative bounds on the soundness of Bulletproofs (Bünz et al., S&P 2018), which we instantiate with our expected-time generic-group analysis to surface an inherent dependence between the concrete security and the statement to be proved.
Ran Canetti, Pratik Sarkar, Xiao Wang
ePrint Report
The only known non-interactive zero-knowledge (NIZK) protocol that is secure against adaptive corruption of the prover is based on that of Groth-Ostrovsky-Sahai (JACM'11) (GOS). However that protocol does not guarantee full adaptive soundness. Abe and Fehr (TCC'07) construct an adaptively sound variant of the GOS protocol under a knowledge-of-exponent assumption, but knowledge assumptions of this type are inherently incompatible with universally composable (UC) security.
We show the first NIZK which is triply adaptive: it is a UC NIZK protocol in a multi-party, multi-instance setting, with adaptive corruptions and no data erasures. Furthermore, the protocol provides full adaptive soundness. Our construction is very different from that of GOS: it is based on the recent NIZK of Canetti et al. (STOC '19), and can be based on a variety of assumptions (e.g. LWE, or LPN and DDH). We also show how to get a succinct reference string assuming LWE or DDH from GOS-like techniques.
Xiao Chen
ePrint Report
Public key encryption with keyword search (PEKS) was first introduced by Boneh et al., enabling a cloud server to search on encrypted data without leaking any information about the keyword. In almost all PEKS schemes, the privacy of the trapdoor is vulnerable to inside keyword guessing attacks (KGA), i.e., the server can generate a ciphertext on its own and then run the test algorithm to guess the keyword contained in the trapdoor.
To solve this problem, Huang et al. proposed public-key authenticated encryption with keyword search (PAEKS), achieving trapdoor privacy (TP) security, in which the data sender not only encrypts the keyword but also authenticates it using his/her secret key. Qin et al. introduced the notion of multi-ciphertext indistinguishability (MCI) security to capture outside chosen multi-ciphertext attacks, in which the adversary needs to distinguish two tuples of ciphertexts corresponding to two sets of keywords. They showed that Huang et al.'s scheme does not achieve MCI security, and proposed an improved scheme achieving both MCI security and trapdoor privacy (TP) security. In addition, they defined the notion of multi-trapdoor privacy (MTP) security, which requires distinguishing two tuples of trapdoors corresponding to two sets of keywords.
Unfortunately, the trapdoor generation algorithms of all the above works are deterministic, which means they are unable to meet the security requirement of MTP. How to achieve MTP security against inside multi-keyword guessing attacks, i.e., how to design a probabilistic trapdoor generation algorithm, has remained an open problem.
In this paper, we solve this problem. We propose the first two public-key authenticated encryption with keyword search schemes achieving both MCI security and MTP security simultaneously. We provide formal proofs of our schemes in the random oracle model.
Yusuke Yoshida, Fuyuki Kitagawa, Keita Xagawa, Keisuke Tanaka
ePrint Report
Non-committing encryption (NCE) introduced by Canetti et al. (STOC '96) is a central tool to achieve multi-party computation protocols secure in the adaptive setting. Recently, Yoshida et al. (ASIACRYPT '19) proposed an NCE scheme based on the hardness of the DDH problem, which has ciphertext expansion $\mathcal{O}(\log\lambda)$ and public-key expansion $\mathcal{O}(\lambda^2)$.
In this work, we improve their result and propose a methodology to construct an NCE scheme that achieves constant ciphertext expansion. Our methodology can be instantiated from the DDH assumption and the LWE assumption. When instantiated from the LWE assumption, the public-key expansion is $\lambda\cdot\mathsf{poly}(\log\lambda)$. These are the first NCE schemes satisfying constant ciphertext expansion without using iO or common reference strings.
Along the way, we define a weak notion of NCE, which satisfies only weak forms of correctness and security. We show how to amplify such a weak NCE scheme into a full-fledged one using wiretap codes with a new security property.
Christian Badertscher, Ran Canetti, Julia Hesse, Björn Tackmann, Vassilis Zikas
ePrint Report
The Global and Externalized UC frameworks [Canetti-Dodis-Pass-Walfish, TCC 07] extend the plain UC framework to additionally handle protocols that use a ``global setup'', namely a mechanism that is also used by entities outside the protocol. These frameworks have broad applicability: Examples include public-key infrastructures, common reference strings, shared synchronization mechanisms, global blockchains, or even abstractions such as the random oracle. However, the need to work in a specialized framework has been a source of confusion, incompatibility, and an impediment to broader use.
We show how security in the presence of a global setup can be captured within the plain UC framework, thus significantly simplifying the treatment. This is done as follows:
- We extend UC-emulation to the case where both the emulating protocol $\pi$ and the emulated protocol $\phi$ make subroutine calls to protocol $\gamma$ that is accessible also outside $\pi$ and $\phi$. As usual, this notion considers only a single instance of $\phi$ or $\pi$ (alongside $\gamma$).
- We extend the UC theorem to hold even with respect to the new notion of UC emulation. That is, we show that if $\pi$ UC-emulates $\phi$ in the presence of $\gamma$, then $\rho^{\phi\rightarrow\pi}$ UC-emulates $\rho$ for any protocol $\rho$, even when $\rho$ uses $\gamma$ directly, and in addition calls many instances of $\phi$, all of which use the same instance of $\gamma$. We prove this extension using the existing UC theorem as a black box, thus further simplifying the treatment.
We also exemplify how our treatment can be used to streamline, within the plain UC model, proofs of security of systems that involve global set-up, thus providing greater simplicity and flexibility.
Vitaly Kiryukhin
ePrint Report
This article describes some approaches to bounding non-minimum-weight differentials (EDP) and linear hulls (ELP) in 2-round LSX ciphers. We propose a dynamic programming algorithm to solve this problem. For 2-round Kuznyechik, nontrivial upper bounds on all differentials (linear hulls) with $18$ and $19$ active S-boxes were obtained. These estimates also hold for other differentials (linear hulls) with a larger number of active S-boxes. We obtain a similar result for 2-round Khazad. As a consequence, the exact value of the maximum expected differential (linear) probability (MEDP/MELP) was computed for this cipher.
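The starting point for such differential bounds is the S-box difference distribution table (DDT), whose maximum non-trivial entry gives the best single-S-box differential probability that any bounding algorithm then combines across active S-boxes. A toy computation, using PRESENT's 4-bit S-box as a small stand-in (Kuznyechik and Khazad use 8-bit S-boxes):

```python
# PRESENT's 4-bit S-box, used here only as a compact example.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def ddt(sbox):
    # table[din][dout] counts inputs x with S(x) ^ S(x ^ din) == dout.
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

T = ddt(SBOX)
# Maximum non-trivial entry -> best single-S-box differential probability.
best = max(T[din][dout] for din in range(1, 16) for dout in range(16))
print(best / 16)   # PRESENT's S-box is differentially 4-uniform, i.e. probability 2^-2
```

Bounding a 2-round differential then amounts to maximizing products of such entries over all compatible difference patterns, which is the combinatorial problem the article's dynamic programming algorithm addresses.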
Kamyar Mohajerani, Richard Haeussler, Rishub Nagpal, Farnoud Farahmand, Abubakr Abdulgadir, Jens-Peter Kaps, Kris Gaj
ePrint Report
Over 20 Round 2 candidates in the NIST Lightweight Cryptography (LWC) process have been implemented in hardware by groups from all over the world. In August and September 2020, all implementations compliant with the LWC Hardware API, proposed in 2019, were submitted for FPGA benchmarking to George Mason University's LWC benchmarking team, who co-authored this report. The received submissions were first verified for correct functionality and compliance with the hardware API's specification. Then, formulas for the execution times in clock cycles, as a function of input sizes, were confirmed using behavioral simulation. If needed, appropriate corrections were introduced in collaboration with the submission teams. The compatibility of all implementations with FPGA toolsets from three major vendors, Xilinx, Intel, and Lattice Semiconductor, was verified. Optimized values of the maximum clock frequency and resource utilization metrics, such as the number of look-up tables (LUTs) and flip-flops (FFs), were obtained by running optimization tools, such as Minerva, ATHENa, and Xeda. The raw post-place-and-route results were then converted into values of the corresponding throughputs for long, medium-size, and short inputs. The overhead of modifying vs. reusing a key between two consecutive inputs was quantified. The results were presented in the form of easy-to-interpret graphs and tables, demonstrating the relative performance of all investigated algorithms. For a few submissions, the results of the initial design-space exploration were illustrated as well. An effort was made to make the entire process as transparent as possible and the results easily reproducible by other groups.
Andrey Sobol
ePrint Report
This paper analyzes the frontrunning potential on Quipuswap, a decentralized exchange with automated market-making, in the context of the Proof-of-Stake family of consensus algorithms underlying the Tezos protocol, and proposes boosting the frontrunning resistance of the protocol via the implementation of a commit-reveal scheme.
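A commit-reveal scheme of the kind proposed can be sketched in a few lines: traders first publish a hash binding them to an order, and reveal the order only after transaction ordering is fixed, so frontrunners cannot react to its contents. A minimal hash-based sketch (the order encoding below is hypothetical, not Quipuswap's format):

```python
import hashlib
import secrets

def commit(order: bytes):
    # Commit phase: publish H(nonce || order). The random nonce hides the
    # order even if its contents are guessable (low-entropy).
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + order).digest(), nonce

def reveal_ok(commitment: bytes, nonce: bytes, order: bytes) -> bool:
    # Reveal phase: anyone can check that the revealed order matches the
    # earlier commitment, so the trader cannot swap in a different order.
    return hashlib.sha256(nonce + order).digest() == commitment

c, n = commit(b"swap 100 XTZ -> kUSD")
assert reveal_ok(c, n, b"swap 100 XTZ -> kUSD")
assert not reveal_ok(c, n, b"swap 999 XTZ -> kUSD")
```

The hiding property comes from the nonce, and the binding property from collision resistance of SHA-256; on-chain, the two phases would be separate operations in different blocks.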
Benjamin Kuykendall, Mark Zhandry
ePrint Report
Witness hiding proofs require that the verifier cannot find a witness after seeing a proof. The exact round complexity needed for witness hiding proofs has so far remained an open question. In this work, we provide compelling evidence that witness hiding proofs are achievable non-interactively for wide classes of languages. We use non-interactive witness indistinguishable proofs as the basis for all of our protocols. We give four schemes in different settings under different assumptions:
A universal non-interactive proof that is witness hiding as long as any proof system, possibly an inefficient and/or non-uniform scheme, is witness hiding, has a known bound on verifier runtime, and has short proofs of soundness.
A non-uniform non-interactive protocol justified under a worst-case complexity assumption that is witness hiding and efficient, but may not have short proofs of soundness.
A new security analysis of the two-message argument of Pass [Crypto 2003], showing witness hiding for any non-uniformly hard distribution. We propose a heuristic approach to removing the first message, yielding a non-interactive argument.
A witness hiding non-interactive proof system for languages with unique witnesses, assuming the non-existence of a weak form of witness encryption for any language in NP ∩ coNP.
Marc Fischlin, Felix Rohrbach
ePrint Report
Non-interactive zero-knowledge proofs or arguments allow a prover to show validity of a statement without further interaction. For non-trivial statements such protocols require a setup assumption in form of a common random or reference string (CRS). Generally, the CRS can only be used for one statement (single-theorem zero-knowledge) such that a fresh CRS would need to be generated for each proof. Fortunately, Feige, Lapidot and Shamir (FOCS 1990) presented a transformation for any non-interactive zero-knowledge proof system that allows the CRS to be reused any polynomial number of times (multi-theorem zero-knowledge). This FLS transformation, however, is only known to work for either computational zero-knowledge or requires a structured, non-uniform common reference string.
In this paper we present FLS-like transformations that work for non-interactive statistical zero-knowledge arguments in the common random string model. They allow going from single-theorem to multi-theorem zero-knowledge and also preserve soundness, for both properties in the adaptive and non-adaptive case. Our first transformation is based on the general assumption that one-way permutations exist, while our second transformation uses lattice-based assumptions. Additionally, we define different possible soundness notions for non-interactive arguments and discuss their relationships.