International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 June 2023

Matteo Campanelli, Nicolas Gailly, Rosario Gennaro, Philipp Jovanovic, Mara Mihali, Justin Thaler
ePrint Report
We present $\mathsf{Testudo}$, a new FFT-less SNARK with a near linear-time prover, constant-time verifier, constant-size proofs and a square-root-size universal setup. $\mathsf{Testudo}$ is based on a variant of Spartan~\cite{C:Setty20}—and hence does not require FFTs—as well as a new, fast multivariate polynomial commitment scheme (PCS) with a square-root-sized trusted setup that is derived from PST (TCC 2013) and IPPs (Asiacrypt 2021). To achieve constant-size SNARK proofs in $\mathsf{Testudo}$, we then combine our PCS opening proofs recursively with a Groth16 SNARK. We also evaluate our construction and its building blocks: to compute a PCS opening proof for a polynomial of size $2^{25}$, the opening procedure of our new scheme achieves a 110x speed-up compared to PST and 3x compared to Gemini (Eurocrypt 2022), since opening computations are heavily parallelizable and operate on smaller polynomials. Furthermore, a $\mathsf{Testudo}$ proof for a witness of size $2^{30}$ ($\approx 1\,$GB) requires a setup of size only $2^{15}$ ($\approx$ tens of kilobytes). Finally, we show that a $\mathsf{Testudo}$ variant for proving data-parallel computations is almost 10x faster at verifying $2^{10}$ Poseidon-based Merkle tree opening proofs than the regular version.
Akram Khalesi, Zahra Ahmadian
ePrint Report
TinyJAMBU is one of the ten finalists of the NIST lightweight cryptography competition, announced in March 2021. It proposes a lightweight authenticated encryption scheme based on a lightweight 128-bit keyed permutation. TinyJAMBU supports three key lengths, 128, 192, and 256 bits, denoted by TinyJambu-128, TinyJambu-192, and TinyJambu-256, respectively. The scheme as well as the permutation are well studied by the designers and third parties. The most relevant work to ours is the full-round zero-sum distinguisher under the known-key setting assumption, published at Indocrypt 2022. In this work, we show that even without the known-key assumption, there are integral distinguishers not only for the full-round permutations of TinyJambu-128 and TinyJambu-192 but also for round-increased versions of them, up to 1273 rounds.
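For reference, the permutation under attack can be sketched from the public TinyJAMBU specification: a 128-bit NLFSR whose feedback XORs state bits 0, 47 and 91, a NAND of bits 70 and 85, and a key bit. The code below is our own illustrative rendering, not the designers' implementation.

```python
def tinyjambu_permutation(state, key_bits, rounds):
    """Keyed TinyJAMBU permutation, bit-level sketch (ours).

    state: list of 128 bits; key_bits: 128/192/256 key bits.
    Each round computes
        feedback = s0 ^ s47 ^ NAND(s70, s85) ^ s91 ^ k_{i mod klen}
    and shifts the register by one bit, inserting the feedback at the top.
    """
    s = list(state)
    klen = len(key_bits)
    for i in range(rounds):
        feedback = (s[0] ^ s[47] ^ ((s[70] & s[85]) ^ 1)
                    ^ s[91] ^ key_bits[i % klen])
        s = s[1:] + [feedback]
    return s
```

An integral distinguisher of the kind reported here fixes some input bits, lets the remaining bits range over all values, and checks whether selected output bits XOR to zero over the whole input set.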
Mohammad Hajiabadi, Shahram Khazaei, Behzad Vahdani
ePrint Report
It is well-known that randomness is essential for secure cryptography. The randomness used in cryptographic primitives is not necessarily recoverable even by the party who can, e.g., decrypt or recover the underlying secret/message. Several cryptographic primitives that support randomness recovery have turned out useful in various applications. In this paper, we study randomness recoverable secret sharing schemes (RR-SSS), in both information-theoretic and computational settings and provide two results. First, we show that while every access structure admits a perfect RR-SSS, there are very simple access structures (e.g., in monotone $\mathsf{AC}^0$) that do not admit efficient perfect (or even statistical) RR-SSS. Second, we show that the existence of efficient computational RR-SSS for certain access structures in monotone $\mathsf{AC}^0$ implies the existence of one-way functions. This stands in sharp contrast to (non-RR) SSS schemes for which no such results are known. RR-SSS plays a key role in making advanced attributed-based encryption schemes randomness recoverable, which in turn have applications in the context of designated-verifier non-interactive zero knowledge.

19 June 2023

Changmin Lee, Seonhong Min, Jinyeong Seo, Yongsoo Song
ePrint Report
Fully Homomorphic Encryption over the Torus (TFHE) is a homomorphic encryption scheme which supports efficient Boolean operations over encrypted bits. TFHE has a unique feature in that the evaluation of each binary gate is followed by a bootstrapping procedure to refresh the noise of a ciphertext. In particular, this gate bootstrapping involves two algorithms called the blind rotation and key-switching.

In this work, we introduce several optimization techniques for the TFHE bootstrapping. We first define a new key distribution, called the block binary distribution, where the secret key can be expressed as a concatenation of several vectors of Hamming weight at most one. We analyze the hardness of (Ring) LWE with a block binary secret and provide candidate parameter sets which are secure against the best-known attacks. Then, we use the block key structure to simplify the inner working of blind rotation and reduce its complexity. We also modify the RLWE key generation and the gadget decomposition method to improve the performance of the key-switching algorithm in terms of complexity and noise growth.

Finally, we implement our algorithms using the TFHE library and report benchmarks. Our experiments show that the execution time of TFHE bootstrapping is reduced from 10.5ms down to 6.4ms under the same security level, and the size of the bootstrapping key decreases from 109MB to 60MB.
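The block binary distribution itself is easy to make concrete. The sketch below is our own illustration (function and parameter names are hypothetical): the secret is a concatenation of blocks, each of Hamming weight at most one, with the all-zero block allowed as one of the choices.

```python
import random

def sample_block_binary_secret(num_blocks, block_size):
    """Sample a secret from the block binary distribution (sketch, ours):
    a concatenation of num_blocks vectors of length block_size, each of
    Hamming weight at most one."""
    key = []
    for _ in range(num_blocks):
        block = [0] * block_size
        # block_size as a draw means the all-zero block (weight 0)
        pos = random.randrange(block_size + 1)
        if pos < block_size:
            block[pos] = 1
        key.extend(block)
    return key
```

The block structure is what lets the blind rotation skip over most positions: at most one nonzero entry per block needs handling.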
Dima Grigoriev, Ilia Ilmer, Alexey Ovchinnikov, Vladimir Shpilrain
ePrint Report
We offer a digital signature scheme using Boolean automorphisms of a multivariate polynomial algebra over the integers. The verification part of this scheme is based on approximating the number of zeros of a multivariate Boolean function.
Aviv Yaish, Kaihua Qin, Liyi Zhou, Aviv Zohar, Arthur Gervais
ePrint Report
The expressiveness of Turing-complete blockchains implies that verifying a transaction's validity requires executing it on the current blockchain state. Transaction fees are designed to compensate actors for resources expended on transactions, but can only be charged from transactions included in blocks.

In this work, we show that adversaries can craft malicious transactions that decouple the work imposed on blockchain actors from the compensation offered in return. We introduce three attacks: (i) ConditionalExhaust, the first conditional Resource Exhaustion Attack (REA) against blockchain actors; (ii) MemPurge, an attack for evicting transactions from victims' mempools; and (iii) GhostTX, the first attack on the reputation system used in Ethereum's Proposer-Builder Separation ecosystem, which augments the other two attacks.

We empirically evaluate the attacks on an Ethereum testnet. The worst-case result we find is that by combining ConditionalExhaust and MemPurge, an adversary can simultaneously burden victims' computational resources and clog their mempools, to the point where victims are unable to include transactions in their blocks. Thus, victims create empty blocks, thereby hurting the system's liveness. The expected cost of a one-shot combined attack is $376, but becomes much cheaper if the adversary is a validator. For other attackers, costs decrease if censorship is prevalent in the network.

ConditionalExhaust and MemPurge are made possible by inherent features of Turing-complete blockchains. Potential mitigations may result in reducing a ledger's scalability, an undesirable outcome likely harming its competitiveness.
Benny Applebaum, Amos Beimel, Yuval Ishai, Eyal Kushilevitz, Tianren Liu, Vinod Vaikuntanathan
ePrint Report
A secret-sharing scheme enables a dealer to share a secret $s$ among $n$ parties such that only authorized subsets of parties, specified by a monotone access structure $f:\{0,1\}^n\to\{0,1\}$, can reconstruct $s$ from their shares. Other subsets of parties learn nothing about $s$.

The question of minimizing the (largest) share size for a given $f$ has been the subject of a large body of work. However, in most existing constructions for general access structures $f$, the share size is not much smaller than the size of some natural computational representation of $f$, a fact that has often been referred to as the ``representation size barrier'' in secret sharing.

In this work, we initiate a systematic study of succinct computational secret sharing (SCSS), where the secrecy requirement is computational and the goal is to substantially beat the representation size barrier. We obtain the following main results.

(1) SCSS via Projective PRGs. We introduce the notion of a *projective PRG*, a pseudorandom generator for which any subset of the output bits can be revealed while keeping the other output bits hidden, using a *short* projective seed. We construct projective PRGs with different levels of succinctness under a variety of computational assumptions, and apply them towards constructing SCSS for graph access structures, monotone CNF formulas, and (less succinctly) useful subclasses of monotone circuits and branching programs. Most notably, under the sub-exponential RSA assumption, we obtain an SCSS scheme that, given an arbitrary access structure $f$, represented by a truth table of size $N=2^n$, produces shares of size $\mathrm{polylog}(N)=\mathrm{poly}(n)$ in time $\tilde O(N)$. For comparison, the share size of the best known information-theoretic schemes is $O(N^{0.58})$.

(2) SCSS via One-way Functions. Under the (minimal) assumption that one-way functions exist, we obtain a near-quadratic separation between the total share size of computational and information-theoretic secret sharing. This is the strongest separation one can hope for, given the state of the art in secret sharing lower bounds. We also construct SCSS schemes from one-way functions for useful classes of access structures, including forbidden graphs and monotone DNF formulas. This leads to constructions of fully-decomposable conditional disclosure of secrets (also known as privacy-free garbled circuits) for general functions, represented by a truth table of size $N=2^n$, with share size $\mathrm{polylog}(N)$ and computation time $\tilde O(N)$, assuming sub-exponentially secure one-way functions.
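As background for the representation size barrier, the classical information-theoretic scheme for a monotone CNF access structure, whose share size scales with the number of clauses, can be sketched as follows (our own illustrative code, not from the paper):

```python
import secrets

def share_cnf(secret, clauses, n):
    """Classical sharing for a monotone CNF f = C_1 AND ... AND C_m,
    each clause a set of party indices: split the secret additively
    across clauses, giving the j-th summand to every party in C_j.
    Share size grows with the CNF representation, which is the barrier
    that succinct computational schemes aim to beat."""
    m = len(clauses)
    summands = [secrets.randbits(128) for _ in range(m - 1)]
    last = secret
    for r in summands:
        last ^= r
    summands.append(last)  # XOR of all summands equals the secret
    shares = {p: {} for p in range(n)}
    for j, clause in enumerate(clauses):
        for p in clause:
            shares[p][j] = summands[j]
    return shares

def reconstruct(shares_of_set, m):
    """An authorized set satisfies every clause, so together its members
    hold at least one copy of each of the m summands."""
    got = {}
    for sh in shares_of_set:
        got.update(sh)
    assert len(got) == m, "set is not authorized"
    out = 0
    for v in got.values():
        out ^= v
    return out
```

For f = (x0 OR x1) AND (x1 OR x2), party 1 alone is authorized (it appears in both clauses), while party 0 alone is not.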
Julian Loss, Gilad Stern
ePrint Report
Studying the feasibility of Byzantine Agreement (BA) in realistic fault models is an important question in the area of distributed computing and cryptography. In this work, we revisit the mixed fault model with Byzantine (malicious) faults and omission faults put forth by Hauser, Maurer, and Zikas (TCC 2009), who showed that BA (and MPC) is possible with $t$ Byzantine faults, $s$ send faults (whose outgoing messages may be dropped) and $r$ receive faults (whose incoming messages may be lost) if $n>3t+r+s$. We generalize their techniques and results by showing that BA is possible if $n>2t+r+s$, given the availability of a cryptographic setup. Our protocol is the first to match the recent lower bound of Eldefrawy, Loss, and Terner (ACNS 2022) for this setting.
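The two feasibility thresholds can be stated as a one-line check (illustrative only; the paper's contribution is the protocol, not this arithmetic):

```python
def ba_feasible(n, t, s, r, crypto_setup):
    """Feasibility bounds from the abstract: without setup, BA needs
    n > 3t + r + s (Hauser-Maurer-Zikas); with a cryptographic setup,
    n > 2t + r + s suffices and matches the known lower bound."""
    threshold = (2 * t if crypto_setup else 3 * t) + r + s
    return n > threshold
```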
Yibin Yang, Stanislav Peceny, David Heath, Vladimir Kolesnikov
ePrint Report
In MPC, we usually represent programs as circuits. This is a poor fit for programs that use complex control flow, as it is costly to compile control flow to circuits. This motivated prior work to emulate CPUs inside MPC. Emulated CPUs can run complex programs, but they introduce high overhead due to the need to evaluate not just the program, but also the machinery of the CPU, including fetching, decoding, and executing instructions, accessing RAM, etc.

Thus, both circuits and CPU emulation seem a poor fit for general MPC. The former cannot scale to arbitrary programs; the latter incurs high per-operation overhead.

We propose variable instruction set architectures (VISAs), an approach that inherits the best features of both circuits and CPU emulation. Unlike a CPU, a VISA machine repeatedly executes entire program fragments, not individual instructions. By considering larger building blocks, we avoid most of the machinery associated with CPU emulation: we directly handle each fragment as a circuit.

We instantiated a VISA machine via garbled circuits (GC), yielding constant-round 2PC for arbitrary assembly programs. We use improved branching (Stacked Garbling, Heath and Kolesnikov, Crypto 2020) and recent Garbled RAM (GRAM) (Heath et al., Eurocrypt 2022). Composing these securely and efficiently is intricate, and is one of our main contributions.

We implemented our approach and ran it on common programs, including Dijkstra's and Knuth-Morris-Pratt. Our 2PC VISA machine executes assembly instructions at $300$Hz to $4000$Hz, depending on the target program. We significantly outperform the state-of-the-art CPU-based approach (Wang et al., ESORICS 2016, whose tool we re-benchmarked on our setup). We run in constant rounds, use $6\times$ less bandwidth, and run more than $40\times$ faster on a low-latency network. With $50$ms (resp. $100$ms) latency, we are $898\times$ (resp. $1585\times$) faster on the same setup.

While our focus is MPC, the VISA model also benefits CPU-emulation-based Zero-Knowledge proof compilers, such as ZEE and EZEE (Heath et al., Oakland'21 and Yang et al., EuroS&P'22).
Zvika Brakerski, Stav Medina
ePrint Report
This work addresses the long quest for proving full (adaptive) security for attribute-based encryption (ABE). We show that in order to prove full security in a black-box manner, the scheme must be ``irregular'' in the sense that it is impossible to ``validate'' secret keys to ascertain consistent decryption of ciphertexts. This extends a result of Lewko and Waters (Eurocrypt 2014) that was only applicable to straight-line proofs (without rewinding). Our work, therefore, establishes that it is impossible to circumvent the irregularity property using creative proof techniques, so long as the adversary is used in a black-box manner.

As a consequence, our work provides an explanation as to why some lattice-based ABE schemes cannot be proven fully secure, even though no known adaptive attacks exist.
Huayi Qi, Minghui Xu, Xiuzhen Cheng, Weifeng Lyu
ePrint Report
Blockchain systems can become overwhelmed by a large number of transactions, leading to increased latency. As a consequence, latency-sensitive users must bid against each other and pay higher fees to ensure that their transactions are processed with priority. However, for most of a blockchain system's lifetime (78% of the time in Ethereum), there is plenty of unused computational power, with few users sending transactions. To address this issue and reduce latency for users, we propose the latency-first smart contract model, which optimistically accepts committed transactions. This allows users to submit a commitment during times of high demand and then complete the rest of the work at a lower priority. From the perspective of the blockchain, this temporarily “overclocks” the system. We have developed a programming tool for our model, and our experiments show that the proposed latency-first smart contract model can greatly reduce latency during periods of high demand.
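The commit-now, complete-later pattern can be illustrated with a plain hash commitment (our own sketch; the paper's actual protocol and tooling are more involved):

```python
import hashlib
import secrets

def commit(tx_bytes):
    """During peak load, submit only H(nonce || tx); the full
    transaction is revealed and checked later, when capacity is spare.
    Sketch only; function names are ours."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + tx_bytes).hexdigest()
    return digest, nonce

def verify_opening(digest, nonce, tx_bytes):
    """Check that the revealed transaction matches the commitment."""
    return hashlib.sha256(nonce + tx_bytes).hexdigest() == digest
```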
Alain Couvreur, Rocco Mora, Jean-Pierre Tillich
ePrint Report
We bring in here a novel algebraic approach for attacking the McEliece cryptosystem, which is currently in the fourth round of the NIST competition. It consists in introducing a subspace of matrices representing quadratic forms. Those are associated with quadratic relationships for the component-wise product in the dual of the Goppa (or alternant) code of the cryptosystem. Depending on the characteristic of the field over which the code is defined, this space of matrices consists only of symmetric matrices (odd characteristic) or skew-symmetric matrices (characteristic $2$). It turns out that this matrix space contains unusually low-rank matrices (rank-$3$ matrices in odd characteristic, rank-$2$ matrices in characteristic $2$) which reveal the secret polynomial structure of the code. Finding such matrices can then be used to recover the secret key of the scheme. We devise a dedicated approach in characteristic $2$, using a Gröbner basis modeling of the fact that a skew-symmetric matrix has rank $2$. This allows us to analyze the complexity of solving the corresponding algebraic system with Gröbner basis techniques. This computation behaves differently when applied to the skew-symmetric matrix space associated with a random code rather than with a Goppa or an alternant code, which yields a distinguisher for the latter code family. We give a bound on its complexity which turns out to interpolate nicely between polynomial and exponential depending on the code parameters. A distinguisher for alternant/Goppa codes was already known [FGOPT11]. It is of polynomial complexity but works only in a narrow parameter regime. Our new distinguisher is also polynomial in the parameter regime necessary for [FGOPT11], but contrary to the previous one it is able to operate for virtually all code parameters relevant to cryptography.
Moreover, we use this matrix space to give a polynomial-time attack on the McEliece cryptosystem, provided that the Goppa code is distinguishable by the method of [FGOPT11] and its degree is less than $q-1$, where $q$ is the alphabet size of the code.
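Detecting the low-rank members of the matrix space ultimately rests on rank computations over GF(2). A minimal sketch (ours, not the authors' implementation) using the standard XOR-basis technique, with each matrix row packed into a Python integer:

```python
def rank_gf2(rows):
    """Rank of a binary matrix over GF(2); each row is an int used as a
    bit vector.  Greedily builds an XOR basis: a row that reduces to
    zero against the basis is linearly dependent."""
    pivots = []
    for row in rows:
        for p in pivots:
            row = min(row, row ^ p)  # reduce by existing basis vectors
        if row:
            pivots.append(row)
    return len(pivots)
```

Over GF(2), a skew-symmetric matrix is a symmetric matrix with zero diagonal; the distinguisher looks for rank-2 members of the relevant matrix space.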
Susil Kumar Bishoi
ePrint Report
Word-oriented feedback shift registers (WFSRs) possess very attractive properties, as they take advantage of modern word-based processors and thus increase throughput. We provide a generalized form of the WFSR feedback function along with some special cases. Then, a necessary and sufficient condition for a WFSR to be nonsingular is discussed. We study different word-based cascade systems and derive the period of the sequences they produce. We provide experimental results on the avalanche property of the states of cascade systems and statistical results on the sequences they produce. Finally, we present a cryptanalytic attack on cascade systems and suggest a countermeasure.
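A word-oriented FSR in the generalized form described above clocks as follows (our own illustrative sketch; the feedback function is a parameter, linear for a word-oriented LFSR and arbitrary in the generalized form):

```python
def wfsr_step(state, feedback):
    """One clock of a word-oriented FSR (sketch).  state is a list of
    w-bit words; feedback maps the whole state to the new word shifted
    in at the end."""
    new_word = feedback(state)
    return state[1:] + [new_word]
```

For example, `feedback = lambda s: s[0] ^ s[1]` gives a toy word-oriented LFSR whose feedback is the XOR of the two oldest words.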
Subhadeep Banik, Francesco Regazzoni
ePrint Report
The Möbius transform is a linear circuit used to compute the evaluations of a Boolean function over all points of its input domain. The operation is very useful in finding the solutions of a system of polynomial equations over GF(2), for obvious reasons. However the operation, although linear, needs an exponential number of logic operations (around $n\cdot 2^{n-1}$ bit XORs) for an $n$-variable Boolean function. As such, the only known hardware circuit to efficiently compute the Möbius transform requires silicon area that is exponential in $n$. For Boolean functions whose algebraic degree is bounded by some parameter $d$, recursive definitions of the Möbius transform exist that require only $O(n^{d+1})$ space in software. However, converting the mathematical definition of this space-efficient algorithm into a hardware architecture is a non-trivial task, primarily because the recursive calls notionally lead to a depth-first search in a transition graph that requires a context switch at each call, which is difficult to map straightforwardly to hardware. In this paper we overcome these very challenges in an engineering sense. We propose a space-efficient sequential hardware circuit for the Möbius transform that requires only polynomial circuit area (i.e. $O(n^{d+1})$) provided the algebraic degree of the Boolean function is at most $d$. We show how this circuit can be used as a component to efficiently solve polynomial equations of degree at most $d$ by fast exhaustive search. We propose three different circuit architectures for this, each of which uses the Möbius transform circuit as a core component. We show that, asymptotically, all the solutions of a system of $m$ polynomials in $n$ unknowns with algebraic degree $d$ over GF(2) can be found using a circuit of silicon area proportional to $m \cdot n^{d+1}$ and physical time proportional to $2\cdot\log_2(n-d)\cdot 2^{n-d}$.
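The exponential-size software version of the transform is the standard butterfly, costing exactly the $n\cdot 2^{n-1}$ XORs quoted above. A minimal sketch of ours, converting a truth table into algebraic normal form coefficients (over GF(2) the transform is its own inverse):

```python
def mobius_transform(tt):
    """Möbius (ANF) transform of a truth table of length 2^n.
    tt[x] = f(x) with x read as an n-bit assignment; the output lists
    the ANF coefficients, one per monomial.  The butterfly performs
    n * 2^(n-1) bit XORs in total."""
    t = list(tt)
    step = 1
    while step < len(t):
        for i in range(0, len(t), 2 * step):
            for j in range(i, i + step):
                t[j + step] ^= t[j]
        step *= 2
    return t
```

For f(x0, x1) = x0 AND x1 the truth table is [0, 0, 0, 1] and the ANF has the single monomial x0*x1, so the transform maps the table to itself.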
Joel Gärtner
ePrint Report
A famous reduction by Regev shows that random instances of the Learning With Errors (LWE) problem are asymptotically at least as hard as a worst-case lattice problem. As such, by assuming that standard lattice problems are hard to solve, the asymptotic security of cryptosystems based on the LWE problem is guaranteed. However, it has not been clear to what extent, if any, this reduction provides support for the security of present concrete parametrizations.

In this work we therefore use Regev's reduction to parametrize a cryptosystem, providing a reference as to what parameters are required to actually claim security from this reduction. This requires us to account for the concrete performance of this reduction, allowing the first parametrization of a cryptosystem that is provably secure based only on a conservative hardness estimate for a standard lattice problem. Even though we attempt to optimize the reduction, our system still requires significantly larger parameters than typical LWE-based cryptosystems, highlighting the significant gap between parameters that are used in practice and those for which worst-case reductions actually are applicable.
Nils Fleischhacker, Kasper Green Larsen, Mark Simkin
ePrint Report
$\newcommand{\gen}{\mathsf{Gen}}\newcommand{\enc}{\mathsf{Enc}}\newcommand{\dec}{\mathsf{Dec}}$ A recent work of Fleischhacker, Larsen, and Simkin (Eurocrypt 2023) shows how to efficiently compress encrypted sparse vectors. Subsequently, Fleischhacker, Larsen, Obremski, and Simkin (ePrint 2023) improve upon their work and provide more efficient constructions solving the same problem. Being able to efficiently compress such vectors is very useful in a variety of applications, such as private information retrieval, searchable encryption, and oblivious message retrieval. Concretely, assume one is given a vector $(m_1, \dots, m_n)$ with $m_i \neq 0$ for at most $t$ distinct indices $i \in [n]$, and assume $(\gen,\enc, \dec)$ is an additively homomorphic encryption scheme. The authors show that one can compress $(\enc(k, m_1), \dots, \enc(k, m_n))$, without knowing the secret key $k$, into a digest with size dependent on the upper bound on the sparsity $t$, but not on the total vector length $n$. Unfortunately, all existing constructions either only work for encryption schemes that have sufficiently large plaintext spaces or require additional encrypted auxiliary information about the plaintext vector.

In this work, we show how to generalize the results of Fleischhacker, Larsen, and Simkin to encryption schemes with arbitrarily small plaintext spaces. Our construction follows the same general ideas laid out in previous works but modifies them in a way that allows compressing the encrypted vectors correctly with high probability, independently of the size of the plaintext space.
Lukáš Folwarczný, Mika Göös, Pavel Hubáček, Gilbert Maystre, Weiqiang Yuan
ePrint Report
Simon (1998) proved that it is impossible to construct collision-resistant hash functions from one-way functions using a black-box reduction. It is conjectured more generally that one-way functions do not imply, via a black-box reduction, the hardness of any total NP search problem (collision-resistant hash functions being just one such example). We make progress towards this conjecture by ruling out a large class of “single-query” reductions.

In particular, we improve over the prior work of Hubáček et al. (2020) in two ways: our result is established via a novel simpler combinatorial technique and applies to a broader class of semi black-box reductions.
Pierre-Antoine Tissot, Lilian Bossuet, Vincent Grosso
ePrint Report
Persistent fault analysis is a novel and efficient cryptanalysis method. Persistent fault attacks take advantage of a fault injected into non-volatile memory, which then remains on the device until it is rebooted. Contrary to classical physical fault injection, where differential analysis can be performed, persistent fault analysis requires new analyses and dedicated countermeasures. It assumes a persistent fault injected into the S-box, so that the permutation is no longer a bijection. In particular, the analysis exploits the resulting non-uniform distribution of S-box values: one possible output value never appears, while another appears twice. In this paper, we present the first dedicated protection against persistent fault analysis. This countermeasure, called BALoo (Bijection Assert with Loops), checks the bijectivity of the S-box. We show that this countermeasure provides 100% fault coverage against persistent fault analysis, with very small software overhead (memory) and reasonable hardware overhead (logic resources, memory, and performance). To evaluate the overhead of BALoo, we provide experimental results obtained with software and hardware (FPGA) implementations of AES-128.
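The property BALoo asserts is simple to state in software. A sketch of ours (an analogue of the check, not the countermeasure's actual implementation): an 8-bit S-box is a bijection iff every output value appears exactly once, so a persistent fault, which makes one value vanish and another appear twice, is always detected.

```python
def is_bijective(sbox):
    """Return True iff the 8-bit S-box is a permutation of 0..255.
    A persistent fault collapses two inputs onto one output, so some
    count becomes 2 and another 0, and the check fails."""
    seen = [0] * 256
    for y in sbox:
        seen[y] += 1
    return all(c == 1 for c in seen)
```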
James Hsin-yu Chiang, Bernardo David, Mariana Gama, Christian Janos Lebeda
ePrint Report
In the classical setting of differential privacy, a privacy-preserving query is performed on a private database, after which the query result is released to the analyst; a differentially private query ensures that the presence of a single database entry is protected from the analyst’s view. In this work, we contribute the first definitional framework for differential privacy in the trusted curator setting; clients submit private inputs to the trusted curator, which then computes individual outputs privately returned to each client. The adversary is more powerful than in the standard setting; it can corrupt up to $n-1$ clients and subsequently decide the inputs and learn the outputs of corrupted parties. In this setting, the adversary also obtains leakage from any honest output that is correlated with a corrupted output. Standard differentially private mechanisms protect client inputs but do not mitigate output correlation leaking arbitrary client information, which can forfeit client privacy completely. We initiate the investigation of a novel notion of correlated-output differential privacy to bound the leakage from output correlation in the trusted curator setting. We define the satisfaction of both standard and correlated-output differential privacy as round-differential-privacy and highlight the relevance of this novel privacy notion to all application domains in the trusted curator model.

We explore round-differential-privacy in traditional “dark pool” market venues, which promise privacy-preserving trade execution to mitigate front-running; privately submitted trade orders and trade execution are kept private by the trusted venue operator. We observe that dark pools satisfy neither classic nor correlated-output differential privacy; in markets with low trade activity, the adversary may trivially observe recurring, honest trading patterns, and anticipate and front-run future trades. In response, we present the first round-differentially-private market mechanisms that formally mitigate information leakage from all trading activity of a user. This is achieved with fuzzy order matching, inspired by the standard randomized response mechanism; however, this also introduces a liquidity mismatch, as buy and sell orders are not guaranteed to execute pairwise, thereby weakening output correlation; this mismatch is compensated for by a round-differentially-private liquidity provider mechanism, which freezes a noisy amount of assets from the liquidity provider for the duration of a privacy epoch, but leaves trader balances unaffected. We propose oblivious algorithms for realizing our proposed market mechanisms with secure multi-party computation (MPC) and implement these in the Scale-Mamba Framework using Shamir Secret Sharing based MPC. We demonstrate practical, round-differentially-private trading with throughput comparable to prior work implementing (traditional) dark pool algorithms in MPC; our experiments demonstrate practicality for both traditional finance and decentralized finance settings.
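The standard randomized response mechanism that inspires the fuzzy order matching reports a true bit with probability $e^{\varepsilon}/(1+e^{\varepsilon})$ and flips it otherwise; this is $\varepsilon$-differentially private for a single binary input. A sketch of ours:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (1 + e^eps),
    otherwise report the flipped bit.  Sketch of the textbook
    mechanism, not the paper's market mechanism."""
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_true else 1 - bit
```

An analyst can still estimate aggregate frequencies by inverting the known flip probability, while any single report is deniable.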
Brett Hemenway Falk, Daniel Noble, Tal Rabin
ePrint Report
This paper presents the first protocols for Proactive Secret Sharing (PSS) that only require constant (in the number of parties, $n$) communication per party per epoch. By harnessing the power of expander graphs, we are able to obtain strong guarantees about the security of the system. We present the following PSS protocols:

– A PSS protocol that provides privacy (but no robustness) against an adversary controlling $O(n)$ parties per epoch.
– A PSS protocol that provides robustness (but no privacy) against an adversary controlling $O(n)$ parties per epoch.
– A PSS protocol that provides privacy against an adversary controlling $O(n^{a})$ parties per epoch and provides robustness against an adversary controlling $O(n^{1-a})$ parties per epoch, for any constant $0 \leq a \leq 1$. Instantiating this with $a = \frac{1}{2}$ gives a PSS protocol that is proactively secure (private and robust) against an adversary controlling $O(\sqrt{n})$ parties per epoch.

Additionally, we discuss how secure channels, whose existence is usually assumed by PSS protocols, are challenging to create in the mobile adversary setting, and we present a method to instantiate them from a weaker assumption.