IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
29 October 2021
Abubakr Abdulgadir, Kamyar Mohajerani, Viet Ba Dang, Jens-Peter Kaps, Kris Gaj
The field of post-quantum cryptography aims to develop and analyze algorithms that can withstand classical and quantum cryptanalysis. The NIST PQC standardization process, now in its third round, specifies ease of protection against side-channel analysis as an important selection criterion. In this work, we develop and validate a masked hardware implementation of the Saber key encapsulation mechanism, a third-round NIST PQC finalist. We first design a baseline lightweight hardware architecture of Saber and then apply side-channel countermeasures. Our protected hardware implementation is significantly faster than previously reported protected software and software/hardware co-design implementations. Additionally, applying side-channel countermeasures to our baseline design incurs penalties of approximately 2.9x in the number of LUTs and 1.4x in latency on modern FPGAs.
Luke Beckwith, Duc Tri Nguyen, Kris Gaj
Many currently deployed public-key cryptosystems are based on the difficulty of the discrete logarithm and integer factorization problems. However, given an adequately sized quantum computer, these problems can be solved in polynomial time as a function of the key size. Due to the future threat of quantum computing to current cryptographic standards, alternative algorithms that remain secure under quantum computing are being evaluated for future use. One such algorithm is CRYSTALS-Dilithium, a lattice-based digital signature scheme, which is a finalist in the NIST Post Quantum Cryptography (PQC) competition. As a part of this evaluation, high-performance implementations of these algorithms must be investigated. This work presents a high-performance implementation of CRYSTALS-Dilithium targeting FPGAs. In particular, we present a design that achieves the best latency for an FPGA implementation to date. We also compare our results with the most relevant previous work on hardware implementations of NIST Round 3 post-quantum digital signature candidates.
Hyeonbum Lee, Jae Hong Seo
We propose two zero-knowledge arguments for arithmetic circuits with fan-in 2 gates in the uniform random string model. Our first protocol features $O(\sqrt{\log_2 N})$ communication and round complexities and $O(N)$ computational complexity for the verifier, where $N$ is the size of the circuit. Our second protocol features $O(\log_2N)$ communication and $O(\sqrt{N})$ computational complexity for the verifier. We prove the soundness of our arguments under the discrete logarithm assumption or the double pairing assumption, which is at least as reliable as the decisional Diffie-Hellman assumption.
The main ingredient of our arguments is a pair of generalizations of Bünz et al.'s Bulletproofs inner-product argument (IEEE S&P 2018), which convinces a verifier of knowledge of two vectors satisfying an inner-product relation. For the protocol with sublogarithmic communication, we devise a novel method to aggregate multiple arguments for bilinear operations such as multi-exponentiations, which is essential for reducing communication overhead. For the protocol with a sublinear verifier, we develop a generalization of the discrete logarithm relation assumption, which is essential for reducing verification overhead while keeping the soundness proof relying solely on the discrete logarithm assumption. These techniques are of independent interest.
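As background, the sketch below illustrates (under simplifying assumptions, and not as the paper's actual protocol) the recursive folding step behind Bulletproofs-style inner-product arguments: each round halves the vectors while two cross terms keep the claimed inner product consistent, which is where the logarithmic communication comes from. All names and the modulus are hypothetical, and real protocols operate on Pedersen-committed vectors rather than plain integers.

import random

P = 2**127 - 1  # a Mersenne prime modulus, chosen arbitrarily for the sketch

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def fold(a, b, claim):
    # One round: halve the vectors; two cross terms keep the claim consistent.
    n = len(a) // 2
    aL, aR, bL, bR = a[:n], a[n:], b[:n], b[n:]
    L, R = inner(aL, bR), inner(aR, bL)   # prover's cross terms
    x = random.randrange(1, P)            # verifier's challenge
    xinv = pow(x, -1, P)
    a2 = [(x * u + xinv * v) % P for u, v in zip(aL, aR)]
    b2 = [(xinv * u + x * v) % P for u, v in zip(bL, bR)]
    claim2 = (claim + x * x * L + xinv * xinv * R) % P
    return a2, b2, claim2

a = [random.randrange(P) for _ in range(8)]
b = [random.randrange(P) for _ in range(8)]
claim = inner(a, b)
while len(a) > 1:                         # log2(N) rounds, two elements each
    a, b, claim = fold(a, b, claim)
assert claim == inner(a, b)               # the folded claim stays consistent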
Xianrui Qin, Cailing Cai, Tsz Hon Yuen
In this paper, we give the first formal security analysis of the one-more unforgeability of blind ECDSA.
We start by giving a general attack on blind ECDSA, similar to the ROS attack on blind Schnorr signatures. We formulate the ECDSA-ROS problem to capture this attack.
Next, we give a generic construction of blind ECDSA based on additively homomorphic encryption and a corresponding zero-knowledge proof. Our concrete instantiation is about 40 times more bandwidth-efficient than the blind ECDSA scheme from AsiaCCS 2019.
After that, we give the first formal proof of one-more unforgeability for blind ECDSA, under a new model called the algebraic bijective random oracle model. The security of our generic blind ECDSA relies on the hardness of a discrete-logarithm-based interactive assumption and an assumption on the underlying elliptic curve.
Finally, we analyze the hardness of the ECDSA-ROS problem in the algebraic bijective random oracle model.
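For readers unfamiliar with the signing equation at stake, the toy (non-blind) ECDSA sketch below over secp256k1 makes explicit the relation $s = k^{-1}(H(m) + r \cdot d) \bmod n$ that a blind-signing protocol must evaluate without the signer seeing $H(m)$. This is an illustrative baseline only, not the paper's construction.

import hashlib, random

p = 2**256 - 2**32 - 977                                  # secp256k1 field
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P1, P2):                      # affine point addition on y^2 = x^3 + 7
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0: return None
    lam = ((3 * x1 * x1) * pow(2 * y1, -1, p) if P1 == P2
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, Q):                        # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, Q)
        Q = add(Q, Q); k >>= 1
    return R

def H(m):
    return int.from_bytes(hashlib.sha256(m).digest(), 'big') % n

def sign(d, m):
    k = random.randrange(1, n)        # per-signature nonce
    r = mul(k, G)[0] % n
    s = pow(k, -1, n) * (H(m) + r * d) % n  # the equation computed blindly
    return (r, s)

def verify(Q, m, sig):
    r, s = sig
    w = pow(s, -1, n)
    return add(mul(H(m) * w % n, G), mul(r * w % n, Q))[0] % n == r

d = random.randrange(1, n); Q = mul(d, G)
assert verify(Q, b"hello", sign(d, b"hello"))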
27 October 2021
Akash Shah, Nishanth Chandran, Mesfin Dema, Divya Gupta, Arun Gururajan, Huan Yu
Secure inference allows a server holding a machine learning (ML) inference algorithm with private weights, and a client with a private input, to obtain the output of the inference algorithm, without revealing their respective private inputs to one another. While this problem has received plenty of attention, existing systems are not applicable to a large class of ML algorithms (such as in the domain of Natural Language Processing) that perform featurization as their first step. In this work, we address this gap and make the following contributions:
1. We initiate the formal study of secure featurization and its use in conjunction with secure inference protocols.
2. We build secure featurization protocols in the one/two/three-server settings that provide a tradeoff between security and efficiency.
3. Finally, we apply our algorithms in the context of secure phishing detection and evaluate our end-to-end protocol on models that are commonly used for phishing detection.
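As a background illustration of how featurized data can feed a multi-server protocol (a hypothetical sketch, not the paper's construction), the snippet below featurizes a string with a toy hashing trick and additively secret-shares the resulting vector between two servers:

import random

MOD = 2**32

def featurize(text, dim=16):
    # toy hashing-trick bag-of-words featurization, illustrative only
    v = [0] * dim
    for w in text.split():
        v[hash(w) % dim] += 1
    return v

def share(v):
    s0 = [random.randrange(MOD) for _ in v]
    s1 = [(x - r) % MOD for x, r in zip(v, s0)]
    return s0, s1          # server 0 and server 1 each get one share

v = featurize("click this link to verify your account")
s0, s1 = share(v)
assert [(a + b) % MOD for a, b in zip(s0, s1)] == v   # reconstruction works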
Sebastian Paul, Yulia Kuzovkova, Norman Lahr, Ruben Niederhagen
Large-scale quantum computers will be able to efficiently solve the underlying mathematical problems of widely deployed public-key cryptosystems in the near future. This threat has sparked increased interest in the field of Post-Quantum Cryptography (PQC), and standardization bodies like NIST, IETF, and ETSI are in the process of standardizing PQC schemes as a new generation of cryptography. This raises the question of how to ensure a fast, reliable, and secure transition to upcoming PQC standards in today's highly interconnected world.
In this work, we propose and investigate a migration strategy towards post-quantum (PQ) authentication for the network protocol Transport Layer Security (TLS). Our strategy is based on the concept of “mixed certificate chains”, which use different signature algorithms within the same certificate chain. In order to demonstrate the feasibility of our migration strategy, we combine the well-studied and trusted hash-based signature schemes SPHINCS+ and XMSS first with elliptic curve cryptography and subsequently with lattice-based PQC signature schemes (CRYSTALS-Dilithium and Falcon). Furthermore, we combine authentication based on mixed certificate chains with the lattice-based key encapsulation mechanism (KEM) CRYSTALS-Kyber, as a representative PQC KEM, to evaluate a fully post-quantum and mutually authenticated TLS 1.3 handshake.
Our results show that mixed certificate chains containing hash-based signature schemes only at the root certificate authority level lead to feasible connection establishment times despite the increase in communication size. By analyzing code size and peak memory usage of our client and server programs we further demonstrate the suitability of our migration strategy even for embedded devices.
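A schematic of what verifying a mixed certificate chain involves is sketched below; the algorithm names, stub verifiers, and chain layout are illustrative assumptions rather than the paper's implementation, which uses real SPHINCS+, XMSS, Dilithium, and Falcon code.

from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    pubkey: bytes
    sig_alg: str        # algorithm that SIGNED this certificate
    signature: bytes

def make_stub_verifier(alg):
    # stand-in: a real deployment calls the SPHINCS+/Dilithium/ECDSA library
    def verify(pubkey, message, signature):
        return True     # assume valid, for the sketch only
    return verify

VERIFIERS = {alg: make_stub_verifier(alg)
             for alg in ("sphincs+", "dilithium", "falcon", "ecdsa-p256")}

def verify_chain(chain, root_pubkey):
    # chain[0] is the leaf; chain[-1] is signed by the trust anchor's key
    signer_key = root_pubkey
    for cert in reversed(chain):
        if not VERIFIERS[cert.sig_alg](signer_key, cert.subject.encode(),
                                       cert.signature):
            return False
        signer_key = cert.pubkey   # this cert's key signs the next one down
    return True

chain = [Cert("server", b"k2", "dilithium", b"sig2"),        # leaf
         Cert("intermediate CA", b"k1", "sphincs+", b"sig1")]
print(verify_chain(chain, root_pubkey=b"root-key"))          # True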
Dmitrii Koshelev
This paper continues the author's previous works on compression of points on elliptic curves $E_b\!: y^2 = x^3 + b$ (with $j$-invariant $0$) over a finite field $\mathbb{F}_{\!q}$. More precisely, we show in detail how any two (resp. three) points from $E_b(\mathbb{F}_{\!q})$ can be quickly compressed to two (resp. three) elements of $\mathbb{F}_{\!q}$ (apart from a few auxiliary bits) in such a way that the corresponding decompression stage requires extracting only one cubic (resp. sextic) root in $\mathbb{F}_{\!q}$ (with several multiplications and without inversions). As a result, for many $q$ occurring in practice the new compression-decompression methods are more efficient than the classical one based on the two (resp. three) $x$ or $y$ coordinates of the points, which extracts two (resp. three) roots in $\mathbb{F}_{\!q}$. We explain why the new methods are useful in the context of modern real-world pairing-based protocols. As a by-product, when $q \equiv 2 \ (\mathrm{mod} \ 3)$ (in particular, $E_b$ is supersingular), we obtain a two-dimensional analogue of Boneh--Franklin's encoding, that is, a way to sample two ``independent'' $\mathbb{F}_{\!q}$-points on $E_b$ at the cost of one cubic root in $\mathbb{F}_{\!q}$. Finally, we comment on the case of four and more points from $E_b(\mathbb{F}_{\!q})$.
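For contrast, the classical single-point method the paper improves on can be sketched as follows: store the $x$-coordinate plus one parity bit of $y$, and pay one square root per point on decompression. The parameters below are toy values with $p \equiv 3 \ (\mathrm{mod} \ 4)$, illustrative only.

p, b = 1000003, 7            # small prime with p % 4 == 3, arbitrary b

def compress(P):
    x, y = P
    return (x, y & 1)        # x-coordinate plus one parity bit of y

def decompress(x, ybit):
    rhs = (x * x % p * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)        # one sqrt, valid for p = 3 (mod 4)
    assert y * y % p == rhs, "x is not on the curve"
    return (x, y if (y & 1) == ybit else p - y)

# find some curve point by brute force, then round-trip it
x = 2
while pow((x**3 + b) % p, (p - 1) // 2, p) != 1:   # Euler criterion
    x += 1
P = decompress(x, 0)
assert decompress(*compress(P)) == P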
Lukas Aumayr, Sri AravindaKrishnan Thyagarajan, Giulio Malavolta, Pedro Moreno-Sanchez, Matteo Maffei
Payment channels (PC) are a promising solution to the scalability issue of cryptocurrencies, allowing users to perform the bulk of their transactions off-chain without needing to post everything on the blockchain. Many PC proposals, however, suffer from a severe limitation: both parties need to constantly monitor the blockchain to ensure that the other party did not post an outdated transaction. If this happens, the honest party needs to react promptly and engage in a punishment procedure. This means that prolonged absence periods (e.g., due to a power outage) may be exploited by malicious users. As a mitigation, the community has introduced watchtowers, third parties that monitor the blockchain on behalf of off-line users. Unfortunately, watchtowers either are trusted, which is critical from a security perspective, or have to lock a certain amount of coins, called collateral, for each monitored PC in order to be held accountable, which is financially infeasible for a large network.
We present Sleepy Channels, the first bi-directional PC protocol without watchtowers (or any other third party) that supports an unbounded number of payments and does not require parties to be persistently online. The key idea is to confine the period in which PC updates can be validated on-chain to a short, pre-determined time window, during which the PC parties have to be online. This behavior is incentivized by letting the parties lock a collateral in the PC, which can be adjusted depending on their mutual trust and which they get back much sooner if they are online during this time window. Our protocol is compatible with any blockchain that is capable of verifying digital signatures (e.g., Bitcoin), as shown by our proof of concept. Moreover, Sleepy Channels impose a communication and computation overhead similar to state-of-the-art PC protocols while removing the watchtower's collateral and fees for the monitoring service.
Bo-Yuan Peng, Adrian Marotzke, Ming-Han Tsai, Bo-Yin Yang, Ho-Lin Chen
We present a novel full hardware implementation of Streamlined NTRU Prime, with two variants: a high-speed, high-area implementation and a slower, low-area implementation. We introduce several new techniques that improve performance, including a batch inversion for key generation, a high-speed schoolbook polynomial multiplier, an NTT polynomial multiplier combined with a CRT map, a new DSP-free modular reduction method, a high-speed radix sorting module, and new encoders and decoders. With the high-speed design, we achieve the fastest speeds to date for Streamlined NTRU Prime: 5007, 10989, and 64026 cycles for encapsulation, decapsulation, and key generation, respectively, while running at 285 MHz on a Xilinx Zynq UltraScale+. The entire design uses 40060 LUTs, 26384 flip-flops, 36.5 BRAMs, and 31 DSPs.
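The batch inversion mentioned above is presumably in the spirit of Montgomery's classic trick, which trades $n$ inversions for one inversion plus roughly $3n$ multiplications. A software sketch over the Streamlined NTRU Prime 761 modulus $q = 4591$ (our reading of the technique, not the paper's exact circuit):

q = 4591                               # the sntrup761 modulus

def batch_invert(xs):
    prefix = [1]
    for x in xs:                       # prefix[i] = x0 * ... * x(i-1)
        prefix.append(prefix[-1] * x % q)
    inv_all = pow(prefix[-1], -1, q)   # the single true inversion
    out = [0] * len(xs)
    for i in reversed(range(len(xs))):
        out[i] = inv_all * prefix[i] % q
        inv_all = inv_all * xs[i] % q  # peel off xs[i] for the next step
    return out

xs = [3, 10, 2021, 4590]
assert all(x * y % q == 1 for x, y in zip(xs, batch_invert(xs)))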
Karl Wüst, Kari Kostiainen, Srdjan Capkun
Due to the popularity of blockchain-based cryptocurrencies, the increasing digitalization of payments, and the constantly reducing role of cash in society, central banks have shown an increased interest in deploying central bank digital currencies (CBDCs) that could serve as a replacement of cash. While most recent research on CBDCs focuses on blockchain technology, it is not clear that this choice of technology provides the optimal solution. In particular, the centralized trust model of a CBDC offers opportunities for different designs.
In this paper, we depart from blockchain designs and instead build on ideas from traditional e-cash schemes. We propose a new style of building digital currencies that combines the transaction processing model of e-cash with the account model of managing funds that is commonly used in blockchain solutions. We argue that such a style of building digital currencies is especially well-suited to CBDCs.
We also design the first such digital currency system, called Platypus, that provides strong privacy, massive scalability, and expressive but simple regulation, which are all critical features for a CBDC. Platypus achieves these properties by adapting techniques similar to those used in anonymous blockchain cryptocurrencies like Zcash and applying them to the e-cash context.
Yupu Hu, Jun Liu, Baocang Wang, Xingting Dong, Yanbin Pan
Functional encryption (FE) is an advanced topic in cryptography research, and the Agr17 FE scheme is one of the major FE schemes. It took the BGG+14 attribute-based encryption (ABE) scheme as its bottom structure, upgraded it into a `partially hiding predicate encryption' (PHPE) scheme, and combined it with a fully homomorphic encryption (FHE) scheme. However, a problem remains in the Agr17 FE scheme: the implementation of the modulus reduction. First, a modulus reduction is necessary for the polynomial-time computability of the scheme. Second, the detailed steps of the modulus reduction were absent from the scheme (in both its conference and full versions); instead, the authors only pointed to several reference works, seemingly implying that the modulus reduction of the Agr17 FE scheme can be obtained by directly using or simply generalizing those works. Third, these reference works only described various modulus reductions of FHE schemes, without any hint of how to generalize them into a modulus reduction for FE schemes. Finally, no modulus reduction of FHE can be simply generalized into a modulus reduction for the Agr17 FE scheme, due to the following two facts: (1) the Agr17 FE scheme has two moduli, the modulus of the FHE ciphertext and that of the ABE ciphertext, both originally superpolynomial in size for processing $P/poly$ functions; and (2) both moduli need to be scaled down to polynomial size and reduced to the same new modulus, otherwise the correctness of the scheme fails.
In this paper, we demonstrate that the Agr17 FE scheme is $P/poly$ invalid. More specifically, we show that, when processing $P/poly$ functions, the Agr17 FE scheme cannot be implemented again after its modulus reduction. To show the soundness of our demonstration, we present the argument in two stages. In the first stage, we show that the modulus reduction of the Agr17 FE scheme should be a double modulus reduction, comprising two modulus reductions for the FHE ciphertext and the ABE ciphertext, respectively. This double modulus reduction has three key points: (1) the modulus reduction for the FHE ciphertext should be seen as a series of Boolean operations and converted into `attribute quasi-homomorphic operations'; (2) the modulus reduction for the ABE ciphertext is a learning-with-errors (LWE) based modulus reduction, which is an ordinary modulus reduction; (3) the two modulus reductions should arrive at the same new modulus, otherwise the scheme cannot be implemented again. In the second stage, we show that the modulus reduction for the ABE ciphertext destroys the structure of the ABE, so that the subsequent decryption cannot be executed. The reason is that the decryption of the ABE is an LWE decryption with conditions rather than an ordinary LWE decryption, and the modulus reduction destroys the conditions of decryption. Besides, to show that such invalidity cannot be easily circumvented by revising the scheme, we design a `natural' revised version of the Agr17 scheme. The key point is to change the small-modulus inner product into an arithmetic inner product, which can be obtained from the modulus inner product of the ABE ciphertext. The revised scheme is valid, i.e., the decryption can be implemented correctly. However, the revised scheme is insecure, because the decryptor learns much more secret information, and hence the scheme can be broken by collusion attacks at much lower cost.
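For orientation, the `ordinary' LWE modulus reduction referred to above (often called modulus switching) can be sketched as follows: every ciphertext component is rescaled from modulus $Q$ to a smaller modulus $q$ by rounding, which preserves decryption as long as the noise stays small. The parameters below are illustrative assumptions, not those of the Agr17 scheme.

import random

n, Q, q = 16, 2**40, 2**12             # toy LWE dimension and moduli

s = [random.randrange(2) for _ in range(n)]            # binary secret

def enc(bit):
    a = [random.randrange(Q) for _ in range(n)]
    e = random.randrange(-8, 9)                        # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (Q // 2)) % Q
    return a, b

def switch(ct):                        # rescale every component from Q to q
    a, b = ct
    return [round(ai * q / Q) % q for ai in a], round(b * q / Q) % q

def dec(ct, mod):
    a, b = ct
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % mod
    return int(mod // 4 < phase < 3 * mod // 4)        # 1 iff phase near mod/2

ct = enc(1)
assert dec(ct, Q) == dec(switch(ct), q) == 1           # decryption preserved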
Paul Crowley, Nathan Huckleberry, Eric Biggers
On modern processors HCTR is one of the most efficient constructions for building a tweakable super-pseudorandom permutation. However, a bug in the specification and another in Chakraborty and Nandi's security proof invalidate the claimed security bound. Here we present HCTR2, which fixes these issues and improves the security bound, performance, and flexibility.
GitHub: https://github.com/google/hctr2
Kyoohyung Han, Dukjae Moon, Yongha Son
Circuit-based Private Set Intersection (circuit-PSI) enables two parties with input sets $X$ and $Y$ to compute a function $f$ over the intersection $X \cap Y$ without revealing any other information.
State-of-the-art circuit-PSI protocols commonly involve a procedure that securely checks whether two input strings are equal and outputs an additive share of the equality result.
Importantly, this procedure accounts for the largest portion, roughly $90\%$, of the computational and communication cost of circuit-PSI.
In this work, we propose an {\textit{equality preserving compression}} (EPC) protocol, secure against semi-honest adversaries, that compresses the length of equality-check targets while preserving equality, using a homomorphic encryption (HE) scheme.
We then apply our EPC protocol to the framework of previous circuit-PSI protocols and implement the result.
As a result, we achieve, {\emph{at one stroke}}, around a 2x improvement in both communication and computational cost over previous results.
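As background for why compressing equality-check targets is possible at all (a simplified sketch, not the EPC protocol itself, which runs under HE so the compression stays oblivious): any shared pseudorandom function preserves equality exactly and maps unequal strings to colliding outputs with probability about $2^{-\ell}$ for $\ell$ output bits.

import hashlib, hmac

def compress(key, value, ell_bytes=5):
    # a keyed PRF truncated to ell bytes; equal inputs map to equal outputs
    return hmac.new(key, value, hashlib.sha256).digest()[:ell_bytes]

key = b"shared-prf-key"
x, y = b"alice@example.com", b"bob@example.com"
assert compress(key, x) == compress(key, x)   # equality is preserved exactly
assert compress(key, x) != compress(key, y)   # collision probability ~ 2^-40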
ZUC Design Team
ZUC-256 is a stream cipher proposed, together with AES-256 and SNOW-V, as a core primitive in the future set of 3GPP confidentiality and integrity algorithms for upcoming 5G applications that offer 256-bit security.
While the original initialization scheme of ZUC-256 works with a 256-bit key and an IV of length up to 184 bits, in this paper we describe a new initialization scheme for ZUC-256 that supports an IV of exactly 128 bits. Compared to the original initialization scheme, the new key/IV setup algorithm avoids splitting key/IV bytes and provides a simple and natural-looking initialization scheme for ZUC-256.
Yiping Ma, Ke Zhong, Tal Rabin, Sebastian Angel
Recent private information retrieval (PIR) schemes preprocess the database with a query-independent offline phase in order to achieve sublinear computation during a query-specific online phase. These offline/online protocols expand the set of applications that can profitably use PIR, but they make a critical assumption: that the database is immutable. In the presence of changes such as additions, deletions, or updates, existing schemes must preprocess the database from scratch, wasting prior effort. To address this, we introduce incremental preprocessing for offline/online PIR schemes, allowing the original preprocessing to continue to be used after database changes, while incurring an update cost proportional to the number of changes rather than the size of the database. We adapt two offline/online PIR schemes to use incremental preprocessing and show how it significantly improves the throughput and reduces the latency of applications where the database changes over time.
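The core idea can be sketched as follows (an illustrative toy, not either adapted scheme): offline hints are XORs of random database subsets, so a single record update only touches the hints whose subset contains the changed index, at cost independent of the database size.

import random, secrets

N = 256
db = [secrets.randbits(32) for _ in range(N)]
subsets = [set(random.sample(range(N), 16)) for _ in range(64)]
hints = [0] * len(subsets)
for h, sub in enumerate(subsets):                 # offline preprocessing
    for i in sub:
        hints[h] ^= db[i]

def update(i, new_value):
    # cost proportional to the number of affected hints, not to N
    for h, sub in enumerate(subsets):
        if i in sub:
            hints[h] ^= db[i] ^ new_value         # swap old value for new
    db[i] = new_value

update(42, 0xDEADBEEF)
for h, sub in enumerate(subsets):                 # hints remain consistent
    acc = 0
    for i in sub:
        acc ^= db[i]
    assert acc == hints[h]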
Anuj Dubey, Afzal Ahmad, Muhammad Adeel Pasha, Rosario Cammarota, Aydin Aysu
Intellectual Property (IP) theft of trained machine learning (ML) models through side-channel attacks on inference engines is becoming a major threat. Indeed, several recent works have shown reverse engineering of model internals using such attacks, but the design of defenses remains largely unexplored. There is a critical need to efficiently and securely transfer defenses from cryptography, such as masking, to ML frameworks. Existing works, however, revealed that a straightforward adaptation of such defenses either provides only partial security or leads to high area overheads. To address those limitations, this work proposes a fundamentally new direction for constructing neural networks that are inherently more compatible with masking. The key idea is to use modular arithmetic in neural networks and then efficiently realize masking, in either Boolean or arithmetic fashion, depending on the type of neural network layer. We demonstrate our approach on edge-computing-friendly binarized neural networks (BNN) and show how to modify the training and inference of such a network to work with modular arithmetic without sacrificing accuracy. We then design novel masking gadgets using Domain-Oriented Masking (DOM) to efficiently mask the unique operations of ML, such as the activation function and the output-layer classification, and we prove their security in the glitch-extended probing model. Finally, we implement fully masked neural networks on an FPGA, quantify that they can achieve a similar latency while reducing the FF and LUT costs over state-of-the-art protected implementations by 34.2% and 42.6%, respectively, and demonstrate their first-order side-channel security with up to 1M traces.
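To make the masking style concrete, here is a first-order arithmetic masking sketch in software (hypothetical toy parameters; the paper's gadgets are hardware DOM designs): a secret is split into two shares mod $q$, and the affine layers of a modular-arithmetic network act on each share independently.

import secrets

q = 2**8   # toy modulus

def mask(x):
    r = secrets.randbelow(q)
    return ((x - r) % q, r)            # two shares that sum to x mod q

def unmask(sh):
    return (sh[0] + sh[1]) % q

def linear_layer(shares, weights, bias):
    # affine layers act share-wise; the bias goes onto one share only
    out0 = sum(w * s0 for w, (s0, _) in zip(weights, shares)) % q
    out1 = sum(w * s1 for w, (_, s1) in zip(weights, shares)) % q
    return ((out0 + bias) % q, out1)

xs = [5, 17, 200]
shares = [mask(x) for x in xs]
w, b = [3, 1, 2], 7
expected = (sum(wi * xi for wi, xi in zip(w, xs)) + b) % q
assert unmask(linear_layer(shares, w, b)) == expected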
Sebastian Angel, Andrew J. Blumberg, Eleftherios Ioannidis, Jess Woods
This paper introduces Otti, a general-purpose compiler for SNARKs that provides language-level support for numerical optimization problems. Otti produces efficient arithmetizations of programs that contain optimization problems, including linear programming (LP), semi-definite programming (SDP), and a broad class of stochastic gradient descent (SGD) instances. Numerical optimization is a fundamental algorithmic building block: applications include scheduling and resource allocation tasks, approximations to NP-hard problems, and the training of neural networks. Otti takes as input arbitrary programs written in a subset of C that contain optimization problems specified via an easy-to-use API. Otti then automatically produces rank-1 constraint satisfiability (R1CS) instances that express a succinct transformation of those programs, whose correct execution implies the optimality of the solution to the original optimization problem. Our experimental evaluation on real numerical solver benchmarks used by commercial LP, SDP, and SGD solvers shows that Otti, instantiated with the Spartan proof system, can prove the optimality of solutions in as little as 300 ms, over 4 orders of magnitude faster than existing approaches.
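For readers unfamiliar with R1CS, the target representation can be checked as follows: an instance $(A, B, C)$ over a field is satisfied by a vector $z$ iff $(Az) \circ (Bz) = Cz$ entrywise. The tiny hand-built instance below is an illustrative assumption, not Otti's actual output.

P = 2**61 - 1   # arbitrary prime field for the sketch

def matvec(M, z):
    return [sum(m * v for m, v in zip(row, z)) % P for row in M]

def r1cs_satisfied(A, B, C, z):
    return all(a * b % P == c % P
               for a, b, c in zip(matvec(A, z), matvec(B, z), matvec(C, z)))

# z = (1, x, y); constraint 1: x * x = y ; constraint 2: y * 1 = 9
A = [[0, 1, 0], [0, 0, 1]]
B = [[0, 1, 0], [1, 0, 0]]
C = [[0, 0, 1], [9, 0, 0]]
assert r1cs_satisfied(A, B, C, z=[1, 3, 9])        # x = 3 satisfies
assert not r1cs_satisfied(A, B, C, z=[1, 4, 16])   # x = 4 does not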
ZhaoCun Zhou, DengGuo Feng, Bin Zhang
Fast correlation attacks, pioneered by Meier and Staffelbach, are an important cryptanalysis tool for LFSR-based stream ciphers: they exploit the correlation between the LFSR state and the keystream and aim at recovering the initial state of the LFSR via a decoding algorithm. In this paper, we develop a vectorial decoding algorithm for fast correlation attacks, a natural generalization of the original binary approach. Our approach benefits from the contributions of all correlations in a subspace. We propose two novel criteria to improve the iterative decoding algorithm, and we give some cryptographic properties of the new FCA that allow us to estimate its efficiency and complexity bounds. Furthermore, we apply this technique to the well-analyzed stream cipher Grain-128a. Based on a hypothesis, an interesting result for its security bound is deduced from the perspective of iterative decoding. Our analysis reveals a potential vulnerability for LFSRs over generic linear groups, and also for nonlinear functions with high-SEI multidimensional linear approximations, such as those of Grain-128a.
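The basic phenomenon a fast correlation attack exploits can be sketched in a few lines (a toy binary illustration, not the paper's vectorial algorithm): keystream bits agree with the underlying LFSR sequence with probability $1/2 + \varepsilon$, which simple counting detects. The feedback taps below are hypothetical.

import random

taps = [0, 1, 3, 4]          # toy feedback polynomial, illustrative only

def lfsr(state, nbits):
    out = []
    for _ in range(nbits):
        out.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

random.seed(1)
state = [1, 0, 1, 1, 0, 0, 1, 0]
seq = lfsr(state, 20000)
# model the keystream as the LFSR sequence through a noisy channel (p = 0.25)
ks = [b ^ (random.random() < 0.25) for b in seq]
agree = sum(a == b for a, b in zip(seq, ks)) / len(ks)
print(f"agreement {agree:.3f}")   # ~0.75, i.e. correlation eps ~ 0.25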
Daniel Matyas Perendi, Prosanta Gope
The infamous Enigma machine was believed to be unbreakable before 1932 simply because of its variable settings and incredible complexity. However, people realised that there is a known pattern in the German messages, which significantly reduced the number of possible settings and made the code breakers' job easier. Modern cryptanalysis techniques provide a far more powerful way to break the Enigma cipher using letter frequencies and a concept called the index of coincidence. However, this technique only works well for the English language (using the characters of the English alphabet); but what if we encountered an Enigma machine designed for the Hungarian language, where the alphabet consists of more than 26 characters? Experiments on the Enigma cipher with different languages have not been done to date, hence in this article we show the language's impact on both the machine and the cipher. Not only Hungarian but, in fact, any language using more characters than English could have a significant effect on the Enigma machine and its complexity, if such a machine existed. Through a broad comparative analysis, it is shown that the size of the alphabet has a significant impact on the complexity and therefore on the cryptanalysis.
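The index of coincidence mentioned above is straightforward to compute for an arbitrary alphabet, which is exactly what a Hungarian-alphabet analysis needs. A minimal sketch (the example text and alphabet are our own):

from collections import Counter

def index_of_coincidence(text, alphabet):
    t = [c for c in text.lower() if c in alphabet]
    counts = Counter(t)
    n = len(t)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

english = set("abcdefghijklmnopqrstuvwxyz")
print(index_of_coincidence("attack at dawn and retreat at dusk", english))
# uniformly random text over k letters gives IC ~ 1/k (0.0385 for k = 26);
# natural English prose sits near 0.067, and that gap is what attacks exploit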
Arka Rai Choudhuri, Michele Ciampi, Vipul Goyal, Abhishek Jain, Rafail Ostrovsky
Oblivious transfer (OT) is a foundational primitive within cryptography owing to its connection with secure computation. One of the oldest constructions of oblivious transfer is from certified trapdoor permutations (TDPs). However, several decades later, we do not know whether a similar construction can be obtained from TDPs in general.
In this work, we study the problem of constructing round-optimal oblivious transfer from trapdoor permutations. In particular, we obtain the following new results (in the plain model), relying on TDPs in a black-box manner:
1) A three-round oblivious transfer protocol that guarantees indistinguishability-security against malicious senders (and semi-honest receivers).
2) A four-round oblivious transfer protocol secure against malicious adversaries with black-box simulation-based security.
By combining our second result with an already known compiler, we obtain the first round-optimal 2-party computation protocol that relies in a black-box way on TDPs. A key technical tool underlying our results is a new primitive we call dual witness encryption (DWE), which may be of independent interest.
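As background, the classic EGL-style construction of 1-out-of-2 OT from a trapdoor permutation can be sketched as below, with RSA standing in as the TDP and a hash as an idealized hardcore extractor. This is the semi-honest baseline that such works strengthen, not the paper's three- or four-round protocols, and the parameters are toy-sized and insecure.

import hashlib, random

# tiny RSA trapdoor permutation (sender's keys)
p_, q_ = 2003, 2011
N, e = p_ * q_, 65537
d = pow(e, -1, (p_ - 1) * (q_ - 1))
f = lambda x: pow(x, e, N)            # public direction
f_inv = lambda y: pow(y, d, N)        # inverting needs the trapdoor d

def hc(x):                            # hardcore-bit extractor (idealized)
    return hashlib.sha256(str(x).encode()).digest()[0] & 1

# receiver wants m[b]; samples x, sends y_b = f(x) and a random y_{1-b}
b = 1
x = random.randrange(N)
ys = [random.randrange(N), random.randrange(N)]
ys[b] = f(x)

# sender masks each message bit with the hardcore bit of the preimage
m = [0, 1]
cs = [m[i] ^ hc(f_inv(ys[i])) for i in (0, 1)]

# receiver can only unmask position b, where it knows the preimage x
assert cs[b] ^ hc(x) == m[b]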