IACR News
Here you can see all recent updates to the IACR webpage. These updates are also available:
04 May 2021
The Knowledge Hub Universities
Job Posting
The Knowledge Hub Universities (TKH) is a multidisciplinary educational hub in Egypt’s New Administrative Capital. In its state-of-the-art campus, TKH hosts branches of world-class universities, each participating with programmes in their areas of strength, providing an intellectually stimulating, enriching, and global educational experience.
The School of Computing within the Coventry University Branch of TKH* is looking to recruit a Head of School (Faculty of Engineering, Environment and Computing) to be part of this innovative and forward-thinking partnership.
Heads of School are expected to have academic credibility and maintain their academic standing and authority through teaching, research and scholarly activities. They will be expected to develop and build a national and international profile for the School and its subjects, furthering growth whilst enhancing the quality of provision.
As Head of School, the post holder is responsible for the strategic direction, academic and discipline/subject leadership, and the continued development and growth of the School’s portfolio. The responsibilities of the post are inherently managerial, and the post holder will therefore be an active and engaged member of the Faculty’s leadership team. TKH offers competitive salaries and excellent benefits packages (including fully furnished accommodation, international health insurance, children’s tuition, transportation to/from work on TKH buses, and home flights).
If you are interested in applying, please click on this link https://careers.tkh.edu.eg/en/egypt/jobs/head-of-school-computing-4286023/ and send your updated CV, along with a cover letter addressing how you meet the person specification and job description, to talents.ac@tkh.edu.eg, mentioning the source and position in the subject field.
*Please note: Successful candidates will be employed by The Knowledge Hub Universities (TKH) and not by Coventry University.
Closing date for applications:
Contact: Karim.ghaleb@elsewedyedu.com
More information: https://careers.tkh.edu.eg/en/egypt/jobs/head-of-school-computing-4286023/
Lorenz Panny
ePrint Report
A recent preprint [ePrint 2021/469] suggests the use of exponentiation in a non-associative algebraic structure called "entropoid" to construct post-quantum analogues of DLP-based cryptosystems. In this note, we show a polynomial-time reduction from the entropoid version of DLP to the conventional DLP in the underlying finite field. The resulting attack takes less than 10 minutes on a laptop against parameters suggested in [ePrint 2021/469] for 128-bit post-quantum secure key exchange and runs in polynomial time on a quantum computer. We briefly discuss how to generalize the attack to the generic setting.
StarkWare
ePrint Report
This document is intended to accompany the ethSTARK codebase, describing the computational integrity statement proved by that code and the specific STARK construction used to prove the statement.
03 May 2021
Abhrajit Sengupta, Nimisha Limaye, Ozgur Sinanoglu
ePrint Report
Logic locking is a prominent solution to protect against design intellectual property theft. However, there has been a decade-long cat-and-mouse game between defenses and attacks. A turning point in logic locking was the development of the miter-based Boolean satisfiability (SAT) attack, which steered the research in the direction of developing SAT-resilient schemes. These schemes, however, achieved SAT resilience at the cost of low output corruption. Recently, cascaded locking (CAS-Lock) was proposed, which provides non-trivial output corruption while maintaining resilience to the SAT attack. Regardless of the theoretical properties, we revisit some of the assumptions made about its implementation, especially about security-unaware synthesis tools, and subsequently expose a set of structural vulnerabilities that can be exploited to break these schemes. We propose our attacks on baseline CAS-Lock as well as mirrored CAS (M-CAS), an improved version of CAS-Lock. We furnish extensive simulation results of our attacks on ISCAS'85 and ITC'99 benchmarks, where we show that CAS-Lock/M-CAS can be broken with ~94% success rate. Further, we open-source all implementation scripts, locked circuits, and attack scripts for the community. Finally, we discuss the pitfalls of point function-based locking techniques including Anti-SAT and Stripped Functionality Logic Locking (SFLL-HD), which suffer from similar implementation issues.
Leo de Castro, Antigoni Polychroniadou
ePrint Report
In this work, we present a lightweight construction of verifiable two-party function secret sharing (FSS) for point functions and multi-point functions. We use these verifiable FSS schemes to construct two-server private information retrieval and private set intersection that are secure and verifiable in the face of any one malicious corruption.
Our verifiability method is lightweight in two ways. Firstly, it is concretely very efficient, making use of only symmetric key operations and no MPC or linear PCP techniques. For security parameter $\lambda$, our verification procedure is simply to check if two $2\lambda$-bit strings match. Secondly, our verification procedure is essentially unconstrained. It will verify that distributed point function (DPF) shares correspond to some point function irrespective of the output group size, the structure of the DPF output, or the set of points on which the DPF must be evaluated. This is in stark contrast with prior works, which depended on at least one and often all three of these factors. In addition, we give a novel method for packing DPFs into shares of a multi-point function that allows for the number of nonzero points in the multi-point function to grow without growing the evaluation time. We also show how our verification scheme carries over to the multi-point setting. We give an implementation of our verifiable distributed point functions and our verifiable distributed multi-point function.
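The two-server setting above can be illustrated with a toy private information retrieval protocol built from shares of a point function. This sketch uses a naive truth-table XOR-sharing rather than the compressed DPFs of the paper, and all names are illustrative; it only shows how two non-colluding servers answer a query without learning the queried index.

```python
# Toy two-server PIR from XOR-shared point functions. This is a naive
# truth-table sharing, NOT the compressed (or verifiable) DPF of the
# paper; it illustrates the query/answer structure only.
import secrets

def share_point_function(n, i):
    """Split the point function e_i (1 at index i, 0 elsewhere)
    into two n-bit XOR-shares k0, k1."""
    k0 = [secrets.randbelow(2) for _ in range(n)]
    k1 = k0.copy()
    k1[i] ^= 1  # the shares differ exactly at the queried index
    return k0, k1

def server_answer(db_bits, key):
    # XOR of all database bits selected by this share
    ans = 0
    for bit, sel in zip(db_bits, key):
        if sel:
            ans ^= bit
    return ans

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 3
k0, k1 = share_point_function(len(db), i)
recovered = server_answer(db, k0) ^ server_answer(db, k1)
assert recovered == db[i]  # the selection terms cancel except at index i
```

Each server sees only a uniformly random bit vector, so neither learns `i`; XORing the two answers cancels every term except the queried bit.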
Joseph Jaeger, Fang Song, Stefano Tessaro
ePrint Report
Should quantum computers become available, they will reduce the effective key length of basic secret-key primitives, such as blockciphers. To address this we will either need to use blockciphers which inherently have longer keys or use key-length extension techniques which employ a blockcipher to construct a more secure blockcipher that uses longer keys.
We consider the latter approach -- in particular, analyzing the security of the FX and double encryption constructions. Classically, these constructs were considered as key-length extension techniques for DES. FX was proven to be a secure key-length extension technique, while double encryption was shown to be no more secure than single encryption due to a meet-in-the-middle attack. In this work we provide positive results, with concrete and tight bounds, for the security of both of these constructions against quantum attackers in ideal models.
For FX, we consider security in the so-called "Q1 model," a natural model in which the attacker has quantum access to the ideal primitive, but only classical access to FX. We provide two partial results for FX in this model. The first establishes the security of FX against non-adaptive attackers. The second establishes security against fully adaptive attackers when considering a variant of FX using a random oracle in place of an ideal cipher. This result relies on the techniques of Zhandry (CRYPTO '19) for lazily sampling a quantum random oracle and is thus hard to extend to the true FX construction, because it is currently unknown whether a quantum random permutation can be lazily sampled. To the best of our knowledge, this result is also the first to introduce techniques for handling Q1 security in ideal models without analyzing the classical and quantum oracles separately, which may be of broader interest.
For double encryption we apply a technique of Tessaro and Thiruvengadam (TCC '18) to establish that security reduces to the difficulty of solving the list disjointness problem, which we are able to reduce through a chain of results to the known quantum difficulty of the element distinctness problem.
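The classical meet-in-the-middle attack mentioned above can be sketched concretely. The toy 16-bit Feistel cipher, 8-bit keys, and helper names below are illustrative, not from the paper; the point is that matching forward encryptions against backward decryptions costs roughly two single-key searches instead of one double-key search.

```python
# Meet-in-the-middle attack on double encryption, y = E_{k2}(E_{k1}(x)),
# with a toy 16-bit Feistel cipher and 8-bit keys: ~2 * 2^8 cipher calls
# instead of the 2^16 of naive two-key search. Cipher is illustrative.
import hashlib

def _f(k, rnd, half):
    return hashlib.sha256(bytes([k, rnd, half])).digest()[0]

def enc(k, x):
    l, r = x >> 8, x & 0xFF
    for rnd in range(3):
        l, r = r, l ^ _f(k, rnd, r)
    return (l << 8) | r

def dec(k, y):
    l, r = y >> 8, y & 0xFF
    for rnd in reversed(range(3)):
        l, r = r ^ _f(k, rnd, l), l
    return (l << 8) | r

def mitm(pairs):
    """Recover a consistent key pair (k1, k2) from known pairs."""
    p0, c0 = pairs[0]
    table = {}
    for k1 in range(256):                # forward half: tabulate midpoints
        table.setdefault(enc(k1, p0), []).append(k1)
    for k2 in range(256):                # backward half: look for a meeting
        mid = dec(k2, c0)
        for k1 in table.get(mid, []):
            # filter chance collisions with the remaining pairs
            if all(enc(k2, enc(k1, p)) == c for p, c in pairs[1:]):
                return k1, k2
    return None

k1, k2 = 0x3A, 0xC5
pairs = [(p, enc(k2, enc(k1, p))) for p in (0x0123, 0x4567)]
res = mitm(pairs)
assert res is not None
assert all(enc(res[1], enc(res[0], p)) == c for p, c in pairs)
```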
Itai Dinur
ePrint Report
At SODA 2017 Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree $d$ in $n$ variables over finite fields. These algorithms were based on the polynomial method in circuit complexity which is a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of $n$.
In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for $d = 2$ and $n^2 \cdot 2^{\left(1 - 1/(2.7d)\right) n}$ for any $d \geq 2$.
We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function.
Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.
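To get a feel for the stated bounds, the following arithmetic sketch evaluates the abstract's complexity formulas for a few parameter sizes; it is a direct computation of the formulas as given, nothing more.

```python
# Evaluate the abstract's bound ~ n^2 * 2^{(1 - 1/(2.7 d)) n} bit
# operations and compare with exhaustive search (~2^n). Purely an
# arithmetic illustration of the stated formulas.
import math

def log2_cost(n, d):
    return 2 * math.log2(n) + (1 - 1 / (2.7 * d)) * n

for n in (80, 128, 192):
    quad = 2 * math.log2(n) + 0.815 * n   # d = 2 bound from the abstract
    gen = log2_cost(n, 3)                 # generic bound at d = 3
    print(f"n={n}: d=2 cost 2^{quad:.1f}, d=3 cost 2^{gen:.1f}, "
          f"brute force 2^{n}")
```

Note that the generic exponent at $d = 2$ is $1 - 1/5.4 \approx 0.8148$, consistent with the quoted $0.815$.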
Dionysis Zindros
ePrint Report
Macroeconomic policy in a blockchain system concerns the algorithm that decides the payment schedule for miners and thus its money mint rate. It governs the amounts, distributions, beneficiaries and conditions required for money supply payments to participants by the system. While most chains today employ simple policies such as a constant amount per block, several cryptocurrencies have sprung up that put forth more interesting policies. As blockchains become a more popular form of money, these policies inevitably are becoming more complex. A chain with a simple policy will often need to switch over to a different policy. Until now, it was believed that such upgrades require a hard fork -- after all, they are changing the money supply, a central part of the system, and unupgraded miners cannot validate blocks that deviate from those hard-coded rules. In this paper, we present a mechanism that allows a chain to upgrade from one policy to another through a soft fork. Our proposed mechanism works in today's Ethereum blockchain without any changes and can support a very generic class of monetary policies that satisfy a few basic bounds. Our construction is presented in the form of a smart contract. We showcase the usefulness of our proposal by describing several interesting applications of policy changes. Notably, we put forth a mechanism that makes Non-Interactive Proofs of Proof-of-Work unbribable, a previously open problem.
Surya Addanki, Kevin Garbe, Eli Jaffe, Rafail Ostrovsky, Antigoni Polychroniadou
ePrint Report
This paper introduces Prio+, a privacy-preserving system for the collection of aggregate statistics, with the same model and goals in mind as the original and highly influential Prio paper by Henry Corrigan-Gibbs and Dan Boneh (USENIX 2017). As in the original Prio, each client holds a private data value (e.g. number of visits to a particular website) and a small set of servers privately compute statistical functions over the set of client values (e.g. the average number of visits). To achieve security against faulty or malicious clients, Prio+ clients use Boolean secret-sharing instead of zero-knowledge proofs to convince servers that their data is of the correct form, and Prio+ servers execute share conversion protocols as needed in order to properly compute over client data. This allows us to ensure that clients' data is properly formatted essentially for free, and the work shifts to novel share-conversion protocols between servers, where some care is needed to make them efficient. While our overall approach is a fairly simple observation in retrospect, it turns out that the Prio+ strategy reduces the clients' computational burden by up to two orders of magnitude (or more, depending on the statistic) while keeping server costs comparable to Prio. Prio+ permits computation of exactly the same wide range of complex statistics as the original Prio protocol, including high-dimensional linear regression over private values held by clients. We report detailed benchmarks of our Prio+ implementation and compare these to both the original Go implementation of Prio and the Mozilla implementation of Prio. Our Prio+ software is open-source and released with the same license as Prio.
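The core idea (Boolean shares are well-formed by construction, and the servers do a Boolean-to-arithmetic conversion before aggregating) can be sketched for a single-bit statistic. This is a simplified illustration under our own assumptions, not the Prio+ protocol itself: a dealer supplies a Beaver triple so the two servers can compute the cross term in $b = b_0 + b_1 - 2 b_0 b_1$.

```python
# Sketch of the Prio+ idea for a one-bit statistic: any XOR-share pair
# (b0, b1) decodes to 0 or 1, so malformed inputs are impossible by
# construction. Servers convert Boolean shares to additive shares mod P
# with a dealer-supplied Beaver triple, then sum. Illustrative only.
import secrets

P = 2**31 - 1  # arithmetic modulus for the aggregate

def additive_share(v):
    s0 = secrets.randbelow(P)
    return s0, (v - s0) % P

def b2a(b0, b1):
    """XOR-shares b0 (server 0) and b1 (server 1) of a bit ->
    additive shares mod P, via b = b0 + b1 - 2*b0*b1."""
    x, y = secrets.randbelow(P), secrets.randbelow(P)   # dealer's triple
    x_sh, y_sh, z_sh = additive_share(x), additive_share(y), additive_share(x * y % P)
    u_sh, v_sh = (b0, 0), (0, b1)        # trivial additive shares of b0, b1
    # servers jointly open the masked values d = b0 - x and e = b1 - y
    d = (u_sh[0] - x_sh[0] + u_sh[1] - x_sh[1]) % P
    e = (v_sh[0] - y_sh[0] + v_sh[1] - y_sh[1]) % P
    w = [(z_sh[i] + d * y_sh[i] + e * x_sh[i]) % P for i in range(2)]
    w[0] = (w[0] + d * e) % P            # one server adds the public term
    return tuple((u_sh[i] + v_sh[i] - 2 * w[i]) % P for i in range(2))

bits = [1, 0, 1]                         # three clients' private bits
agg = [0, 0]
for b in bits:
    b0 = secrets.randbelow(2)            # client XOR-shares its bit
    s0, s1 = b2a(b0, b ^ b0)
    agg[0], agg[1] = (agg[0] + s0) % P, (agg[1] + s1) % P
assert (agg[0] + agg[1]) % P == sum(bits)
```

The design point this illustrates: validity comes for free from the Boolean encoding, and the (interactive) work has moved to the server-side share conversion.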
Zhenzhen Bao, Jian Guo, Danping Shi, Yi Tu
ePrint Report
Since the Meet-in-the-Middle preimage attack against 7-round AES hashing was found by Sasaki in 2011, development in this research direction has never stopped. In 2019, Bao et al. found that degrees of freedom from the message (or the key of the underlying block cipher) were useful, before Mixed-Integer Linear Programming (MILP) modeling was introduced to find the optimal attack configurations in 2020. In this paper, we move one step further in this research direction by introducing more techniques, such as guess-and-determine, round independence, and symmetry, to the MILP search model. To demonstrate the power of the enhanced model, we apply it to the popular AES-like hash functions Whirlpool, Grøstl, and AES hashing modes, and obtain general improvements over the existing best (pseudo-)preimage attacks. In particular, the number of attacked rounds on Whirlpool and AES-256 hashing modes is extended from 6 to 7 and 9 to 10, respectively. Time complexity improvements are also obtained on variants of fewer rounds, as well as the 6-round Grøstl-256 and the 8-round Grøstl-512. Computer experiments on trial versions of the full attack procedure have confirmed the correctness of our results.
Yuyin Yu, Leo Perrin
ePrint Report
We found 5412 new quadratic APN functions on $\mathbb{F}_{2^8}$ with the QAM method, thus bringing the number of known CCZ-inequivalent APN functions on $\mathbb{F}_{2^8}$ to 26525. Unfortunately, none of these new functions are CCZ-equivalent to permutations. A (to the best of our knowledge) complete list of known quadratic APN functions, including our new ones, has been pushed to sboxU for ease of study by others.
In this paper, we recall how to construct new QAMs from a known one, and present how we used the ortho-derivative method to determine which of our new functions fall into different CCZ-classes. Based on these results and on others over smaller fields, we make two conjectures: that the full list of quadratic APN functions on $\mathbb{F}_{2^8}$ could be obtained using the QAM approach (given enormous computing power), and that the total number of CCZ-inequivalent APN functions may exceed 50000.
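The APN property itself is easy to check by brute force on a small field. The sketch below works over $\mathbb{F}_{2^3}$ rather than the paper's $\mathbb{F}_{2^8}$ (the definition is identical), using the cube function, a well-known Gold APN function.

```python
# Brute-force check of the APN property over F_{2^3}: F is APN iff
# every nonzero derivative x -> F(x ^ a) ^ F(x) takes each value at
# most twice. Toy field; the paper works over F_{2^8}.
def gf8_mul(a, b):
    """Carry-less multiplication in F_{2^3} modulo x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
        b >>= 1
    return r

def cube(x):
    return gf8_mul(gf8_mul(x, x), x)

def is_apn(f, n=3):
    size = 1 << n
    for a in range(1, size):
        counts = {}
        for x in range(size):
            v = f(x ^ a) ^ f(x)
            counts[v] = counts.get(v, 0) + 1
        if max(counts.values()) > 2:
            return False
    return True

assert is_apn(cube)             # x^3 is a Gold APN function
assert not is_apn(lambda x: x)  # linear maps have constant derivatives
```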
Elena Andreeva, Rishiraj Bhattacharyya, Arnab Roy
ePrint Report
We revisit the classical problem of designing optimally efficient cryptographically secure hash functions. Hash functions are traditionally designed via applying modes of operation on primitives with smaller domains. The results of Shrimpton and Stam (ICALP 2008), Rogaway and Steinberger (CRYPTO 2008), and Mennink and Preneel (CRYPTO 2012) show how to achieve optimally efficient designs of $2n$-to-$n$-bit compression functions from non-compressing primitives with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance. Designing optimally efficient and secure hash functions for larger domains ($> 2n$ bits) is still an open problem.
To enable efficiency analysis and comparison across hash functions built from primitives of different domain sizes, in this work we propose the new \textit{compactness} efficiency notion. It allows us to focus on asymptotically optimally collision-resistant hash functions and normalize their parameters based on Stam's bound from CRYPTO 2008 to obtain maximal efficiency.
We then present two tree-based modes of operation as a design principle for compact, large domain, fixed-input-length hash functions. \begin{enumerate} \item Our first construction is an \underline{A}ugmented \underline{B}inary T\underline{r}ee (\cmt) mode. The design is a $(2^{\ell}+2^{\ell-1} -1)n$-to-$n$-bit hash function making a total of $(2^{\ell}-1)$ calls to $2n$-to-$n$-bit compression functions for any $\ell\geq 2$. Our construction is optimally compact with asymptotically (optimal) $2^{n/2-\epsilon}$-query collision resistance in the ideal model. For a tree of height $\ell$, in comparison with Merkle tree, the $\cmt$ mode processes additional $(2^{\ell-1}-1)$ data blocks making the same number of internal compression function calls. \item With our second design we focus our attention on the indifferentiability security notion. While the $\cmt$ mode achieves collision resistance, it fails to achieve indifferentiability from a random oracle within $2^{n/3}$ queries. $\cmt^{+}$ compresses only $1$ less data block than $\cmt$ with the same number of compression calls and achieves in addition indifferentiability up to $2^{n/2-\epsilon}$ queries. \end{enumerate} Both of our designs are closely related to the ubiquitous Merkle Trees and have the potential for real-world applicability where the speed of hashing is of primary interest.
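The parameter counts quoted in the abstract can be tabulated directly. This small arithmetic sketch compares, for each tree height, the number of compression calls and message blocks of the ABR mode against a Merkle tree with the same number of calls; it computes only the formulas stated above.

```python
# Tabulate the abstract's parameters: a tree of height l makes
# 2^l - 1 calls to a 2n-to-n-bit compression function in both modes,
# but the ABR mode absorbs 2^l + 2^(l-1) - 1 message blocks, i.e.
# 2^(l-1) - 1 extra blocks injected at internal nodes.
for l in range(2, 6):
    calls = 2**l - 1
    merkle_blocks = 2**l
    abr_blocks = 2**l + 2**(l - 1) - 1
    print(f"height {l}: {calls} calls, Merkle {merkle_blocks} blocks, "
          f"ABR {abr_blocks} blocks (+{abr_blocks - merkle_blocks})")
```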
Charanjit Singh Jutla, Nathan Manohar
ePrint Report
While it is well known that the sawtooth function has a point-wise convergent Fourier series, the rate of convergence is not the best possible for the application of approximating the mod function in small intervals around multiples of the modulus. We show a different sine series, such that the sine series of order $n$ has error $O(\epsilon^{2n+1})$ for approximating the mod function in $\epsilon$-sized intervals around multiples of the modulus. Moreover, the resulting polynomial, after Taylor series approximation of the sine series, has small coefficients, and the whole polynomial can be computed at a precision that is only slightly larger than $-(2n+1)\log\epsilon$, the precision of approximation being sought. This polynomial can then be used to approximate the mod function to almost arbitrary precision, and hence allows practical CKKS-HE bootstrapping with arbitrary precision.
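To see why sine-based approximations of the mod function work near multiples of the modulus, the following numerical sketch uses the basic scaled-sine baseline $(q/2\pi)\sin(2\pi x/q)$, not the improved series of this paper: near $x = kq + r$ with $|r| \ll q$ its error is $O(r^3/q^2)$, and the higher-order series of the abstract pushes this to $O(\epsilon^{2n+1})$.

```python
# Numerical illustration: the scaled sine (q/2pi)*sin(2pi x/q) agrees
# with the centered mod function near multiples of q up to a cubic
# error term ~ (2pi)^2 r^3 / (6 q^2). Baseline only, not the paper's
# improved sine series.
import math

def mod_via_sine(x, q):
    return (q / (2 * math.pi)) * math.sin(2 * math.pi * x / q)

q = 64.0
worst = 0.0
for k in range(1, 8):
    for r in (-0.5, -0.1, 0.1, 0.5):   # small neighbourhood of k*q
        x = k * q + r
        worst = max(worst, abs(mod_via_sine(x, q) - r))
print(f"worst error for |r| <= 0.5, q = {q}: {worst:.2e}")
assert worst < 1e-3
```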
Thomas Attema, Nicole Gervasoni, Michiel Marcus, Gabriele Spini
ePrint Report
The advent of a full-scale quantum computer will severely impact most currently-used cryptographic systems. The most well-known aspect of this impact lies in the computational-hardness assumptions that underpin the security of most current public-key cryptographic systems: a quantum computer can factor integers and compute discrete logarithms in polynomial time, thereby breaking systems based on these problems.
However, simply replacing these problems by others which are (believed to be) impervious even to a quantum computer does not completely solve the issue. Indeed, many security proofs of cryptographic systems are no longer valid in the presence of a quantum-capable attacker; while this does not automatically imply that the affected systems would be broken by a quantum computer, it does raise questions about the exact security guarantees that they can provide.
This overview document aims to analyze all aspects of the impact of quantum computers on cryptography, by providing an overview of current quantum-hard computational problems (and the cryptographic systems based on them), and by presenting the security proofs that are affected by quantum attackers, detailing the current status of research on the topic and the expected effects on security.
André Chailloux, Johanna Loyer
ePrint Report
Lattice-based cryptography is one of the leading proposals for post-quantum cryptography. The Shortest Vector Problem (SVP) is arguably the most important problem for the cryptanalysis of lattice-based cryptography, and many lattice-based schemes have security claims based on its hardness. The best quantum algorithm for the SVP is due to Laarhoven [Laa16 PhD] and runs in (heuristic) time $2^{0.2653d + o(d)}$. In this article, we present an improvement over Laarhoven's result and present an algorithm that has a (heuristic) running time of $2^{0.2570 d + o(d)}$ where $d$ is the lattice dimension. We also present time-memory trade-offs where we quantify the amount of quantum memory and quantum random access memory of our algorithm. The core idea is to replace Grover's algorithm used in [Laa16 PhD] in a key part of the sieving algorithm by a quantum random walk in which we add a layer of locality-sensitive filtering.
David Knichel, Amir Moradi, Nicolai Müller, Pascal Sasdrich
ePrint Report
Masking has been recognized as a sound and secure countermeasure for cryptographic implementations, protecting against physical side-channel attacks. Even though many different masking schemes have been presented over time, design and implementation of protected cryptographic Integrated Circuits (ICs) remains a challenging task. More specifically, correct and efficient implementation usually requires manual interactions accompanied by longstanding experience in hardware design and physical security. To this end, design and implementation of masked hardware often proves to be an error-prone task for engineers and practitioners. As a result, our novel tool for automated generation of masked hardware (AGEMA) allows even inexperienced engineers and hardware designers to create secure and efficient masked cryptographic circuits originating from an unprotected design. More precisely, exploiting the concepts of Probe-Isolating Non-Interference (PINI) for secure composition of masked circuits, our tool provides various processing techniques to transform an unprotected design into a secure one, eventually accelerating and safeguarding the process of masking cryptographic hardware. Ultimately, we evaluate our tool in several case studies, emphasizing different trade-offs for the transformation techniques with respect to common performance metrics, such as latency, area, and randomness.
Gaurav Panwar, Roopa Vishwanathan, Satyajayant Misra
ePrint Report
In this paper, we study efficient and authorized rewriting of transactions already written to a blockchain. Mutable transactions will make up a fraction of all blockchain transactions but will be necessary to meet the needs of privacy regulations, such as the General Data Protection Regulation (GDPR). The state-of-the-art rewriting approaches have several shortcomings, such as lack of user anonymity, inefficiency, and absence of revocation mechanisms. We present ReTRACe, an efficient framework for blockchain rewrites. ReTRACe is designed by composing a novel revocable chameleon hash with ephemeral trapdoor scheme, a novel revocable fast attribute based encryption scheme, and a dynamic group signature scheme. We discuss ReTRACe, and its constituent primitives in detail, along with their security analyses, and present experimental results to demonstrate the scalability of ReTRACe.
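A chameleon hash is the primitive that makes such rewrites possible: the trapdoor holder can find a second opening of the same hash for a new message. The sketch below is a minimal Krawczyk-Rabin-style chameleon hash with toy parameters of our own choosing; it is not ReTRACe's revocable construction, only the basic mechanism it builds on.

```python
# Minimal Krawczyk-Rabin-style chameleon hash over a toy safe-prime
# group: whoever knows the trapdoor x can rewrite m -> m_new by
# producing a new randomizer with the same hash. Illustration only.
q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # safe prime, p = 2039
g = 4                    # generator of the order-q subgroup of Z_p^*
x = 123                  # trapdoor
h = pow(g, x, p)         # public hash key

def chash(m, r):
    return pow(g, m % q, p) * pow(h, r % q, p) % p

def collide(m, r, m_new):
    """With the trapdoor, solve m + x*r = m_new + x*r' (mod q)
    so that chash(m_new, r') == chash(m, r)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 777, 42
m_new = 555
r_new = collide(m, r, m_new)
assert chash(m_new, r_new) == chash(m, r) and m_new != m
```

Without the trapdoor, finding such a collision would require solving a discrete logarithm; revocation and ephemeral trapdoors are the paper's additions on top of this basic idea.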
Jeonghyuk Lee, Jihye Kim, Hyunok Oh
ePrint Report
As a solution to mitigate key exposure problems in digital signatures, forward security has been proposed. Forward security guarantees the integrity of messages generated in the past, even if the current time period's secret key is leaked, by evolving the secret key in each time period. However, there is no forward secure signature scheme in which all metrics have constant complexity. Furthermore, existing works do not support multi-user aggregation of signatures.
In this paper, we propose a forward secure aggregate signature scheme utilizing recursive zk-SNARKs (zero-knowledge Succinct Non-interactive ARguments of Knowledge), in which all metrics, including size and time, are $O(1)$. The proposed forward secure signature scheme can aggregate signatures generated not only by a single user but also by multiple users. The security of the proposed scheme is formally proven under the zero-knowledge assumption in the random oracle model.
Cong Zhang, Hong-Sheng Zhou
ePrint Report
We investigate digital signature schemes in the indifferentiability framework. We show that the well-known Lamport one-time signature scheme and the tree-based signature scheme can be ``lifted'' to realize an ideal one-time signature, and an ideal signature, respectively, without using computational assumptions. We show for the first time that ideal signatures, ideal one-time signatures, and random oracles are equivalent in the framework of indifferentiability.
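The Lamport one-time scheme referenced above is built from a hash function alone, which is why it is a natural candidate for such hash-based "lifting". A standard textbook sketch:

```python
# The classic Lamport one-time signature: the secret key holds two
# preimages per message-hash bit, and signing reveals one per bit.
# Each key pair must be used for at most one message.
import hashlib, os

HASH = lambda b: hashlib.sha256(b).digest()
BITS = 256

def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(BITS)]
    pk = [(HASH(a), HASH(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg):
    digest = int.from_bytes(HASH(msg), "big")
    return [(digest >> i) & 1 for i in range(BITS)]

def sign(sk, msg):
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(HASH(sig[i]) == pk[i][bit]
               for i, bit in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"tampered", sig)
```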
Cyprien Delpech de Saint Guilhem, Eleftheria Makri, Dragos Rotaru, Titouan Tanguy
ePrint Report
Secure multiparty generation of an RSA biprime is a challenging task, which increasingly receives attention, due to the numerous privacy-preserving applications that require it. In this work, we construct a new protocol for the RSA biprime generation task, secure against a malicious adversary, who can corrupt any subset of protocol participants. Our protocol is designed for generic MPC, making it both platform-independent and allowing for weaker security models to be assumed (e.g., honest majority), should the application scenario require it. By carefully ``postponing'' the check of possible inconsistencies in the shares provided by malicious adversaries, we achieve noteworthy efficiency improvements. Concretely, we are able to produce additive sharings of the prime candidates, from multiplicative sharings via a semi-honest multiplication, without degrading the overall (active) security of our protocol. This is the core of our sieving technique, increasing the probability of our protocol sampling a biprime. Similarly, we perform the first biprimality test, requiring several repetitions, without checking input share consistency, and perform the more costly consistency check only in case of success of the Jacobi symbol based biprimality test. Moreover, we propose a protocol to convert an additive sharing over a ring, into an additive sharing over the integers. Besides being a necessary sub-protocol for the RSA biprime generation, this conversion protocol is of independent interest. The cost analysis of our protocol demonstrates that our approach improves the current state-of-the-art (Chen et al. -- Crypto 2020), in terms of communication efficiency. Concretely, for the two-party case with malicious security, and primes of 2048 bits, our protocol improves communication by a factor of ~37.